The boundary conditions finally work when using MPI with an arbitrary
number of processors. This is, however, a dirty hack and has to be done
in a better way later, since in its current form it may be very tricky
to generalize the code to 3D.
The next time step should be reduced to the minimum value over all
processes. This has now been added in the subroutine 'evolution'.
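A minimal sketch of such a reduction (the subroutine and variable names below are illustrative, not necessarily those used in 'evolution'):

  ! sketch: reduce the local time step estimate to the global minimum
  subroutine reduce_timestep(dt)
    implicit none
    include 'mpif.h'
    real(kind=8), intent(inout) :: dt
    real(kind=8)                :: dt_min
    integer                     :: ierr

    call mpi_allreduce(dt, dt_min, 1, MPI_REAL8, MPI_MIN, MPI_COMM_WORLD, ierr)
    dt = dt_min
  end subroutine reduce_timestep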
The block structure contains a new field %pos which specifies the
position of the child block in its parent.
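For reference, a simplified sketch of how such a field might sit in the block structure (the surrounding fields and the type name are only illustrative):

  ! sketch of a block structure carrying the new %pos field (other fields illustrative)
  type block_t
    integer                :: id      ! block identifier
    integer                :: level   ! refinement level
    integer                :: pos     ! position of this child within its parent (1..4 in 2D)
    type(block_t), pointer :: next, prev
  end type block_t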
In this version we first determine the number of blocks to exchange for
each process, then update this count across all processes. Using this
information we send the IDs and levels of the blocks, together with
their data, to the neighboring processes.
In this version we send each block only once; the received block is used
to update the boundaries of all its neighbors. We use three
communications for each exchange: the number of blocks to send, the IDs
and levels of the blocks being sent, and the %u fields of all sent
blocks packed into one big buffer 'rbuf'.
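A rough sketch of this three-message exchange on the sending side, assuming the IDs and levels are packed into one integer array 'hdr' and the block data into a double-precision buffer 'rbuf' (all names and interfaces illustrative):

  ! sketch: three communications per exchange (count, ids/levels, packed data)
  subroutine send_blocks(dst, nsend, hdr, nu, rbuf)
    implicit none
    include 'mpif.h'
    integer,                          intent(in) :: dst, nsend, nu
    integer,      dimension(2*nsend), intent(in) :: hdr    ! ids and levels of the sent blocks
    real(kind=8), dimension(nu),      intent(in) :: rbuf   ! %u of all sent blocks packed together
    integer :: ierr

    call mpi_send(nsend, 1      , MPI_INTEGER, dst, 1, MPI_COMM_WORLD, ierr)
    call mpi_send(hdr  , 2*nsend, MPI_INTEGER, dst, 2, MPI_COMM_WORLD, ierr)
    call mpi_send(rbuf , nu     , MPI_REAL8  , dst, 3, MPI_COMM_WORLD, ierr)
  end subroutine send_blocks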
Now, in the boundary subroutine we collect information about the blocks
whose neighbors lie on a different processor. Using this information we
send the neighboring blocks and update the block boundaries. This is the
initial implementation, not very well optimized and containing some bugs.
There are several cases for the boundaries. If the neighbor block is on
the same processor, proceed as usual. If it lies on a different
processor, prepare it for exchange. After exchanging blocks we need to
perform the same update as if the neighbor lay on the same processor,
i.e. copying, restriction, or prolongation.
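The per-neighbor logic can be summarized schematically as follows (all routine and argument names below are illustrative, not the ones used in the code):

  ! schematic sketch of the per-neighbor decision
  subroutine update_neighbor(my_cpu, my_level, neigh_cpu, neigh_id, neigh_level)
    implicit none
    integer, intent(in) :: my_cpu, my_level, neigh_cpu, neigh_id, neigh_level

    if (neigh_cpu .eq. my_cpu) then                       ! neighbor is local
      if (neigh_level .eq. my_level) then
        call copy_boundary(neigh_id)                      ! same level: plain copy
      else if (neigh_level .gt. my_level) then
        call restrict_boundary(neigh_id)                  ! finer neighbor: restriction
      else
        call prolong_boundary(neigh_id)                   ! coarser neighbor: prolongation
      end if
    else                                                  ! neighbor on another processor
      call add_to_exchange_list(neigh_cpu, neigh_id)      ! prepare it for exchange
    end if
  end subroutine update_neighbor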
The blocks of the initial mesh are now divided between all processes.
The procedure also updates the %cpu field of all neighbors according to
the rank of the process on which they reside.
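One possible way to divide the initial blocks is a simple round-robin assignment over the block list (a fragment sketch; the actual distribution scheme and names like plist, nprocs may differ in the code):

  ! sketch: assign initial blocks to processes in a round-robin fashion
  n = 0
  pblock => plist
  do while (associated(pblock))
    pblock%cpu = mod(n, nprocs)   ! rank of the process owning this block
    n          = n + 1
    pblock    => pblock%next
  end do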
The subroutine 'deallocate_block' now takes care of the plist pointer.
It also allows for the flexible removal of any block, even from the
middle of the list, by updating the %next and %prev fields of the
previous and next blocks, respectively.
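The list surgery amounts to the usual doubly-linked-list removal, sketched below (the %prev/%next fields as above, the remaining names illustrative):

  ! sketch: detach a block from the doubly-linked list before deallocating it
  if (associated(pblock%prev)) then
    pblock%prev%next => pblock%next
  else
    plist => pblock%next              ! the removed block was the head of the list
  end if
  if (associated(pblock%next)) pblock%next%prev => pblock%prev

  deallocate(pblock)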
In addition, the %cpu field of each block is now stored in the HDF5
files as well.
The domain initialization is now performed in problem.F90. This allows a
user to initialize their own domain according to the defined problem.
Since the initialization is problem-dependent, we don't need the
subroutine allocate_blocks anymore.
A new 'mpitools' module has been added. This module contains subroutines
to initialize, deinitialize, and handle MPI communication. This initial
version can only initialize and deinitialize the parallel environment;
it does not support full parallelization yet.
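In its simplest form such a module might look as follows (a sketch only; the subroutine and variable names are illustrative):

  ! minimal sketch of an MPI handling module
  module mpitools
    implicit none
    include 'mpif.h'
    integer, save :: ncpus = 1, ncpu = 0   ! number of processes and the local rank
  contains
    subroutine init_mpi(ierr)
      integer, intent(out) :: ierr
      call mpi_init(ierr)
      call mpi_comm_size(MPI_COMM_WORLD, ncpus, ierr)
      call mpi_comm_rank(MPI_COMM_WORLD, ncpu , ierr)
    end subroutine init_mpi

    subroutine deinit_mpi(ierr)
      integer, intent(out) :: ierr
      call mpi_finalize(ierr)
    end subroutine deinit_mpi
  end module mpitools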
We use the field 'neigh' instead of the field 'pneigh'. The field
'neigh' is of type 'blockref', which simplifies referencing the neighbor
blocks when using MPI. Each reference contains two fields: cpu and id.
In the MPI version we cannot use the pointers stored in pneigh, but
using blockref structures we can refer to blocks belonging to other
processors.
The new type 'blockref' is a reference to a block through two integer
numbers: the first points to the cpu on which that block resides, and
the second is the block identifier.
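A sketch of the type definition, with the two fields named as described above:

  ! reference to a block by process rank and block identifier
  type blockref
    integer :: cpu   ! rank of the process owning the block
    integer :: id    ! identifier of the block on that process
  end type blockref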
Now we can define an arbitrary shape inside the domain for a given
problem, which is not updated during the evolution. This allows for
using sources of any kind in problem studies, such as the colliding
winds in binary stars.
Now the subroutine init_problem calls the initialization of the selected
problem, determined by the config parameter 'problem'. In this way we
can choose the problem without recompiling the code.
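Such a run-time dispatch can be sketched with a simple select case on the 'problem' parameter (the problem names and the initializer subroutines below are only examples):

  ! sketch: choose the problem setup at run time from the config parameter
  subroutine init_problem(problem)
    implicit none
    character(len=*), intent(in) :: problem

    select case(trim(problem))
    case('blast')
      call init_blast()      ! example problem-specific initializer
    case('binary')
      call init_binary()
    case default
      write(*,*) 'init_problem: unknown problem: ', trim(problem)
    end select
  end subroutine init_problem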
Now we can control the levels of the refinement/derefinement criterion
from the config file. The bounds of the domain are stored in the HDF5
files from now on.
The generation of dx, dy, dz for each level didn't include the base
grid, i.e. they were 2 times too large. This is now fixed. There are
also plenty of minor corrections in different places.
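With the base grid included, the cell sizes per level follow from the domain extent and the effective resolution, for example (a sketch; variable names such as in, jn, kn for the base-grid dimensions are illustrative):

  ! sketch: cell sizes per level, counting the base grid as level 1
  do l = 1, maxlev
    dx(l) = (xmax - xmin) / (in * 2**(l-1))
    dy(l) = (ymax - ymin) / (jn * 2**(l-1))
    dz(l) = (zmax - zmin) / (kn * 2**(l-1))
  end do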
It looks like refining and derefining work more or less, at least
without interrupting the execution. Nevertheless, there are still some
artifacts, like a loss of symmetry after some time or inefficient
derefining of the mesh. This could be caused by the solver, however.
The refinement criterion is computed using pressure now.
Now all blocks update their boundary values from the neighboring blocks.
Blocks of the same level copy their values, but blocks of different
levels shrink or expand the boundary values from the neighbor,
depending on the level change. In order to complete this I've
implemented the subroutines expand and interpolate in the interpolation
module.
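As an illustration, the simplest possible 'expand' would just inject each coarse cell value into the corresponding 2x2 group of fine cells (a sketch only; the routine in the interpolation module performs proper interpolation and its interface may differ):

  ! sketch: piecewise-constant expansion of a 2D coarse array onto a 2x finer one
  subroutine expand(nc, mc, uc, uf)
    implicit none
    integer,                            intent(in)  :: nc, mc
    real(kind=8), dimension(nc,mc),     intent(in)  :: uc
    real(kind=8), dimension(2*nc,2*mc), intent(out) :: uf
    integer :: i, j

    do j = 1, mc
      do i = 1, nc
        uf(2*i-1:2*i, 2*j-1:2*j) = uc(i,j)
      end do
    end do
  end subroutine expand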
By default the boundary conditions are set to periodic. So far only this
type of condition is supported, but other types should be easy to
implement.
The subroutine 'boundary' sweeps over all leaf blocks. For each block it
sweeps over its neighbors and updates the boundaries. This is still an
initial version; it supports only neighboring blocks of the same level
of refinement.
A function to calculate the maximum speed in a block has been added. It
is used to determine the maximum speed globally, which in turn is used
to estimate the next time step.
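For pure hydrodynamics this speed is typically the maximum of |v| + cs over the block; a sketch assuming primitive variables dens, velx, vely, velz, pres (names illustrative) and the adiabatic index gamma:

  ! sketch: maximum characteristic speed in a block (hydro, primitive variables)
  function maxspeed(n, dens, velx, vely, velz, pres, gamma) result(cmax)
    implicit none
    integer,                    intent(in) :: n
    real(kind=8), dimension(n), intent(in) :: dens, velx, vely, velz, pres
    real(kind=8),               intent(in) :: gamma
    real(kind=8)                           :: cmax

    cmax = maxval(sqrt(velx**2 + vely**2 + velz**2) + sqrt(gamma * pres / dens))
  end function maxspeed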
A new parameter to control the time interval between data writes has
been added, together with a condition in the main loop that checks
whether data should be written.
This new module contains subroutines used for different kinds of
interpolation. So far only one subroutine is implemented, used for the
reconstruction of the left and right states from the cell-centered
values. This interpolation has the TVD property.
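A common TVD reconstruction of this kind uses a limited slope, e.g. with the minmod limiter (a sketch under that assumption; the limiter and the interface convention actually used in the module may differ):

  ! sketch: TVD reconstruction of interface states with the minmod limiter
  subroutine reconstruct(n, u, ul, ur)
    implicit none
    integer,                    intent(in)  :: n
    real(kind=8), dimension(n), intent(in)  :: u        ! cell-centered values
    real(kind=8), dimension(n), intent(out) :: ul, ur   ! states at the left/right cell faces
    real(kind=8)                            :: du
    integer                                 :: i

    ul(:) = u(:)
    ur(:) = u(:)
    do i = 2, n-1
      du    = minmod(u(i) - u(i-1), u(i+1) - u(i))
      ul(i) = u(i) - 0.5d0 * du
      ur(i) = u(i) + 0.5d0 * du
    end do
  contains
    function minmod(a, b) result(c)
      real(kind=8), intent(in) :: a, b
      real(kind=8)             :: c
      c = 0.5d0 * (sign(1.0d0, a) + sign(1.0d0, b)) * min(abs(a), abs(b))
    end function minmod
  end subroutine reconstruct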
Two subroutines, cons2prim and prim2cons, to convert between
conservative and primitive variables have been added.
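For adiabatic hydrodynamics the conversion is straightforward; a single-cell sketch using the gamma-derived factors gammam1 and gammam1i described in the next entry (the argument names and ordering are illustrative):

  ! sketch: conservative <-> primitive conversion for adiabatic hydro (single cell)
  subroutine cons2prim(dn, mx, my, mz, en, gammam1, vx, vy, vz, pr)
    implicit none
    real(kind=8), intent(in)  :: dn, mx, my, mz, en, gammam1
    real(kind=8), intent(out) :: vx, vy, vz, pr

    vx = mx / dn
    vy = my / dn
    vz = mz / dn
    pr = gammam1 * (en - 0.5d0 * dn * (vx*vx + vy*vy + vz*vz))
  end subroutine cons2prim

  subroutine prim2cons(dn, vx, vy, vz, pr, gammam1i, mx, my, mz, en)
    implicit none
    real(kind=8), intent(in)  :: dn, vx, vy, vz, pr, gammam1i
    real(kind=8), intent(out) :: mx, my, mz, en

    mx = dn * vx
    my = dn * vy
    mz = dn * vz
    en = gammam1i * pr + 0.5d0 * dn * (vx*vx + vy*vy + vz*vz)
  end subroutine prim2cons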
New parameters, gammam1 and gammam1i, have been added to the
configuration file. Both are derived from gamma: the first is gamma-1,
the second 1/(gamma-1).
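In code the relation is simply:

  gammam1  = gamma - 1.0d0
  gammam1i = 1.0d0 / gammam1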