17 Commits

Author SHA1 Message Date
Grzegorz Kowal
8076970c22 Update copyright headers with the new email. 2011-05-05 18:37:53 -03:00
Grzegorz Kowal
fc32225dc1 Change the license to GNU General Public License, version 2. 2011-04-29 11:21:30 -03:00
Grzegorz Kowal
0ff433e16b Rename the code from Godunov-AMR to AMUN. 2011-04-25 13:44:34 -03:00
Grzegorz Kowal
cfb35e7cd9 MPI: implement reduction of a complex array.
- reduce the velocity Fourier components from all processors;
2011-03-07 00:28:58 -03:00
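
Judging from the message, this reduction maps onto MPI_Allreduce over a
complex buffer. A minimal sketch, assuming a hypothetical subroutine name
and interface (the actual AMUN routine may differ):

    subroutine mallreducesumc(n, cbuf)
      use mpi
      implicit none
      integer                      , intent(in)    :: n
      complex(kind=8), dimension(n), intent(inout) :: cbuf
      complex(kind=8), dimension(n)                :: tmp
      integer                                      :: ierr
      ! sum the complex components (e.g. velocity Fourier modes)
      ! contributed by all processors and return the result everywhere;
      ! subroutine name and interface are assumptions for illustration
      call MPI_ALLREDUCE(cbuf, tmp, n, MPI_DOUBLE_COMPLEX, MPI_SUM, &
                         MPI_COMM_WORLD, ierr)
      cbuf = tmp
    end subroutine mallreducesumc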
Grzegorz Kowal
7d7a0b7e32 MPI: add subroutine to reduce double precision array. 2011-03-02 14:56:22 -03:00
Grzegorz Kowal
2682d37d20 Update the copyright information. 2011-02-27 22:45:54 -03:00
Grzegorz Kowal
59db37fbae RANDOM: add random number generator module.
- add a new module RANDOM which handles the initialization and generation
  of different types of random number distributions;
- include the module in the compilation process;
- initialize the random generator from the driver;
2010-12-08 11:40:38 -02:00
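
A plausible minimal shape for such a module, using Fortran's intrinsic
generator rather than whatever AMUN actually implements (all names below
are assumptions):

    module random
      implicit none
    contains
      ! seed the generator; called once from the driver
      subroutine initialize_random(seed)
        integer, intent(in)                :: seed
        integer                            :: n
        integer, dimension(:), allocatable :: s
        call random_seed(size = n)
        allocate(s(n))
        s(:) = seed
        call random_seed(put = s)
        deallocate(s)
      end subroutine initialize_random

      ! uniformly distributed deviate in [0,1)
      function randomu() result(r)
        real(kind=8) :: r
        call random_number(r)
      end function randomu

      ! normally distributed deviate (Box-Muller transform)
      function randomn() result(r)
        real(kind=8) :: r, u1, u2
        call random_number(u1)
        call random_number(u2)
        r = sqrt(-2.0d0 * log(1.0d0 - u1)) * cos(8.0d0 * atan(1.0d0) * u2)
      end function randomn
    end module random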
Grzegorz Kowal
4c2f649153 MPI: add new subroutine mallreducemaxr().
- the new subroutine mallreducemaxr() collects the value from all
  processors, finds the maximum over them, and redistributes it again
  to all processors;
2010-12-03 17:58:20 -02:00
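
The description matches an MPI_Allreduce with the MPI_MAX operator; a
sketch under that assumption (the real argument list may differ):

    subroutine mallreducemaxr(v)
      use mpi
      implicit none
      real(kind=8), intent(inout) :: v    ! local value in, global maximum out
      real(kind=8)                :: vmax
      integer                     :: ierr
      ! find the maximum over all processors and redistribute it
      call MPI_ALLREDUCE(v, vmax, 1, MPI_DOUBLE_PRECISION, MPI_MAX, &
                         MPI_COMM_WORLD, ierr)
      v = vmax
    end subroutine mallreducemaxr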
Grzegorz Kowal
aab8831bbf Update the copyright info and fix the length of separating lines. 2010-10-13 03:32:10 -03:00
Grzegorz Kowal
3c74c9a8ea More work on the block refinement.
The initial block mesh refinement seems to be optimized now. However,
more work is required in the MPI version, e.g. refining blocks on all
CPUs until the number of blocks exceeds some threshold dependent on the
number of CPUs, and from that point on refining independently on each CPU.

The initial work on the mesh update has been done too, but the
processes do not yet exchange block information.
2009-05-18 22:46:19 +02:00
Grzegorz Kowal
1b718dea86 Selecting blocks for refinement/derefinement with MPI.
This new approach to selecting blocks for refinement/derefinement with
MPI seems to be working. Moreover, it is simple and efficient, though
there may still be some room for improvement.
2009-01-08 00:08:52 -06:00
Grzegorz Kowal
c686e301fc Fix boundary update and reduce the next time step over all CPUs.
The boundary conditions finally work when using MPI with an arbitrary
number of processors. This is, however, a dirty hack that has to be
done in a better way later, since in its current form it may be very
tricky to generalize the code to 3D.

The next time step should be reduced to the minimum value over all
processes. This has now been added in subroutine 'evolution'.

The block structure contains a new field, %pos, which specifies the
position of the child block within its parent.
2009-01-02 20:18:57 -06:00
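
The time-step reduction mentioned above is the same allreduce pattern
with the MPI_MIN operator; a sketch with assumed names:

    subroutine reduce_timestep(dt)
      use mpi
      implicit none
      real(kind=8), intent(inout) :: dt    ! local step in, global minimum out
      real(kind=8)                :: dtmin
      integer                     :: ierr
      ! take the minimum next time step over all processes
      call MPI_ALLREDUCE(dt, dtmin, 1, MPI_DOUBLE_PRECISION, MPI_MIN, &
                         MPI_COMM_WORLD, ierr)
      dt = dtmin
    end subroutine reduce_timestep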
Grzegorz Kowal
53d1a2e23d A bit better approach to update boundaries with MPI.
In this version we first determine the number of blocks each process
has to exchange, then share these counts across all processes. Using
this information we send the IDs and levels of the blocks, together
with their data, to the neighboring processes.
2009-01-01 23:17:49 -06:00
Grzegorz Kowal
240f63d7b1 Reorganize and optimize block exchange in boundary update.
In this version we send each block only once, and that block is used to
update the boundaries of all its neighbors. We use three communications
for each exchange: the number of blocks to send, the IDs and levels of
the blocks being sent, and the %u fields of all sent blocks packed into
one big buffer, 'rbuf'.
2008-12-31 16:07:23 -06:00
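
The sending side of the three-stage protocol could look roughly as
below; the subroutine name, buffer layout, and the use of plain blocking
sends are illustrative assumptions, not the actual AMUN code:

    subroutine send_boundary_blocks(dest, nsend, meta, rbuf, nbuf)
      use mpi
      implicit none
      integer                      , intent(in) :: dest, nsend, nbuf
      integer, dimension(2*nsend)  , intent(in) :: meta  ! interleaved block IDs and levels
      real(kind=8), dimension(nbuf), intent(in) :: rbuf  ! %u fields packed as one buffer
      integer                                   :: ierr
      ! communication 1: the number of blocks to send
      call MPI_SEND(nsend, 1, MPI_INTEGER, dest, 1, MPI_COMM_WORLD, ierr)
      ! communication 2: the IDs and levels of the blocks being sent
      call MPI_SEND(meta, 2*nsend, MPI_INTEGER, dest, 2, MPI_COMM_WORLD, ierr)
      ! communication 3: the packed block data
      call MPI_SEND(rbuf, nbuf, MPI_DOUBLE_PRECISION, dest, 3, &
                    MPI_COMM_WORLD, ierr)
    end subroutine send_boundary_blocks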
Grzegorz Kowal
6c620a97cf Implement boundary block exchange between processes.
Now, in the boundary subroutine we collect information about the blocks
whose neighbors lie on a different processor. Using this information we
send the neighboring blocks and update the block boundaries. This is
the initial implementation; it is not very well optimized and contains
some bugs.
2008-12-31 12:02:36 -06:00
Grzegorz Kowal
1f581e9610 Add new module for MPI handling.
A new 'mpitools' module has been added. This module contains
subroutines to initialize, finalize, and handle MPI communication. This
initial version can set up and tear down the parallel environment, but
it does not support full parallelization yet.
2008-12-22 14:57:31 -06:00
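
A minimal sketch of what such an 'mpitools' module might look like at
this stage (everything beyond the module name is an assumption):

    module mpitools
      use mpi
      implicit none
      integer, save :: nproc  = 0   ! rank of this process
      integer, save :: nprocs = 1   ! total number of processes
    contains
      ! set up the parallel environment
      subroutine init_mpi()
        integer :: ierr
        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, nproc , ierr)
        call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
      end subroutine init_mpi

      ! tear down the parallel environment
      subroutine finalize_mpi()
        integer :: ierr
        call MPI_FINALIZE(ierr)
      end subroutine finalize_mpi
    end module mpitools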
Grzegorz Kowal
d67f05d650 Add a new module for MPI communication. 2008-12-19 17:33:29 -06:00