MESH STRUCTURE
- add subroutines allocate_metablock() and allocate_datablock()
allocating new meta and data blocks in memory
- add subroutines append_metablock() and append_datablock() appending
the allocated block to the meta and data block lists
- create root meta and data blocks in domain_default()
- create two new pointers last_meta and last_data pointing to the last
  blocks in the meta and data block lists (see the sketch below)
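A minimal sketch of how the allocation and appending could look; the type
is reduced to its list pointers here, and the component names (%next,
%prev) are assumptions rather than the actual code. allocate_datablock()
and append_datablock() would follow the same pattern.

  module blocks_sketch
    implicit none

    ! reduced to the list pointers only, for the purpose of this sketch
    type block_meta
      type(block_meta), pointer :: next => null(), prev => null()
    end type block_meta

    type(block_meta), pointer :: list_meta => null()  ! head of the meta list
    type(block_meta), pointer :: last_meta => null()  ! tail of the meta list

  contains

    ! allocate a new meta block in memory
    subroutine allocate_metablock(pblock)
      type(block_meta), pointer :: pblock
      allocate(pblock)
    end subroutine allocate_metablock

    ! append an allocated meta block at the end of the meta block list
    subroutine append_metablock(pblock)
      type(block_meta), pointer :: pblock
      if (.not. associated(list_meta)) then
        list_meta => pblock            ! first block becomes the list head
      else
        last_meta%next => pblock       ! hook the block after the current tail
        pblock%prev    => last_meta
      end if
      last_meta => pblock              ! last_meta always points to the tail
    end subroutine append_metablock

  end module blocks_sketch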
MESH STRUCTURE
- initialize meta and data block lists in init_blocks()
- deallocate and nullify meta and data block lists in clear_blocks()
- new subroutine deallocate_metablock() for meta block deallocation
- new subroutine deallocate_datablock() for data block deallocation
  (see the sketch below)
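Continuing the sketch above (same assumed type and list pointers), the
clearing and deallocation routines could look roughly like this:

    ! walk the meta block list, free every block, then nullify the heads
    subroutine clear_blocks()
      type(block_meta), pointer :: pblock, pnext
      pblock => list_meta
      do while (associated(pblock))
        pnext => pblock%next
        call deallocate_metablock(pblock)
        pblock => pnext
      end do
      nullify(list_meta, last_meta)
    end subroutine clear_blocks

    ! free one meta block and nullify the passed pointer
    subroutine deallocate_metablock(pblock)
      type(block_meta), pointer :: pblock
      if (associated(pblock)) then
        deallocate(pblock)
        nullify(pblock)
      end if
    end subroutine deallocate_metablock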
MESH STRUCTURE
- two types of structures are created:
block_meta - structure storing metadata of each block
block_data - structure storing geometry and variable information
- two lists corresponding to stored objects of block_meta and
block_data:
list_meta - the list of objects of block_meta type is shared and
synchronized between processors
list_data - the list of objects of block_data type is private for
      each processor
- corresponding pointers to both structures are created too (see the
  sketch below)
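A hedged sketch of what the two structures and the list heads might look
like; all components shown here are illustrative assumptions rather than
the actual field lists:

  ! data part: geometry and variables, kept only on the owning processor
  type block_data
    real, dimension(:,:,:), allocatable :: u             ! variable arrays
    real                      :: xmin, xmax, ymin, ymax  ! block geometry
    type(block_data), pointer :: next => null(), prev => null()
  end type block_data

  ! metadata part: shared and synchronized between all processors
  type block_meta
    integer                   :: id, cpu, level
    type(block_meta), pointer :: next => null(), prev => null()
    type(block_data), pointer :: data => null()  ! set only on the owner
  end type block_meta

  ! list heads: list_meta is kept consistent on every processor,
  ! list_data holds only the blocks private to this processor
  type(block_meta), pointer :: list_meta => null()
  type(block_data), pointer :: list_data => null()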
REFINEMENT
- move the bounds for the criterion calculation by one cell out of the
  domain, so we catch the refinement before the gradients enter the
  block
MAINTENANCE
- remove junk code from the update_mesh subroutine
The initial block mesh refinement seems to be optimized now. However,
more work in the MPI version is required, e.g. refine blocks on all CPUs
until the number of blocks exceeds some threshold dependent on the number
of CPUs; from that point on, refine independently on each CPU.
The initial work on the mesh update has been done too, but the processes
do not exchange block information.
This next approach to select blocks for refinement/derefinement with MPI
seems to be working. It is simple and efficient; however, there might
still be some room for improvement.
It seems to be pretty complex. So far I collect the blocks selected for
refinement which have neighbors lying on other processors. Using this I
set the neighbors from other processors for refinement if required. The
remaining tasks are the neighbor update after refinement and the MPI
version of derefinement.
The boundary conditions finally work when using MPI with an arbitrary
number of processors. This is, however, a dirty hack. It has to be done
in a better way later, since for now it may be very tricky to generalize
the code to 3D.
The next time step should be reduced to the minimum value over all
processes. This has been added in subroutine 'evolution' now.
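A minimal sketch of such a reduction, assuming the standard MPI Fortran
bindings and a default-real time step; the subroutine name is
illustrative:

  ! reduce the local time step to its global minimum over all processes
  subroutine reduce_timestep(dt)
    use mpi
    implicit none
    real, intent(inout) :: dt      ! in: local dt, out: global minimum
    real    :: dtglob
    integer :: ierr

    call mpi_allreduce(dt, dtglob, 1, MPI_REAL, MPI_MIN, MPI_COMM_WORLD, ierr)
    dt = dtglob
  end subroutine reduce_timestep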
The block structure contains a new field %pos which specifies the
position of the child block in its parent.
In this version we first determine the number of blocks to exchange for
each process, then update these counts across all processes. Using this
information we send the IDs and levels of the blocks and their data to
the neighboring processes.
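A rough sketch of the first step of this exchange, assuming the standard
MPI bindings; whether the actual code uses a collective or point-to-point
messages is not known, so this is only one possible realization:

  ! every process learns how many blocks it will receive from every other
  ! process; IDs, levels and the block data follow in separate messages
  subroutine exchange_block_counts(nprocs, nsend, nrecv)
    use mpi
    implicit none
    integer, intent(in)  :: nprocs
    integer, intent(in)  :: nsend(nprocs)   ! blocks sent to each process
    integer, intent(out) :: nrecv(nprocs)   ! blocks received from each process
    integer :: ierr

    call mpi_alltoall(nsend, 1, MPI_INTEGER, nrecv, 1, MPI_INTEGER,          &
                      MPI_COMM_WORLD, ierr)
  end subroutine exchange_block_counts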
In this version we send each block only once. This block is used to
update the boundaries of all its neighbors. We use three communications
per exchange: the number of blocks to send, the IDs and levels of the
blocks which are sent, and the %u fields of all sent blocks packed into
one big buffer 'rbuf'.
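A sketch of the three messages for one destination process; the tags,
the buffer layout and the helper name are assumptions:

  ! send: (1) the block count, (2) the IDs and levels, (3) the packed %u data
  subroutine send_blocks(dst, nblocks, idlev, rbuf, blocksize)
    use mpi
    implicit none
    integer, intent(in) :: dst, nblocks, blocksize
    integer, intent(in) :: idlev(2, nblocks)        ! block IDs and levels
    real,    intent(in) :: rbuf(blocksize, nblocks) ! %u of each block, packed
    integer :: ierr

    call mpi_send(nblocks, 1, MPI_INTEGER, dst, 1, MPI_COMM_WORLD, ierr)
    call mpi_send(idlev, 2*nblocks, MPI_INTEGER, dst, 2, MPI_COMM_WORLD, ierr)
    call mpi_send(rbuf, blocksize*nblocks, MPI_REAL, dst, 3,                 &
                  MPI_COMM_WORLD, ierr)
  end subroutine send_blocks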
Now, in the boundary subroutine we collect information about the blocks
whose neighbors lie on a different processor. Using this information we
send the neighboring blocks and update the block boundaries. This is the
initial implementation, not very well optimized and containing some bugs.
There are several conditions for the boundaries. If the neighbor block
is on the same processor, do as usual. If it lies on a different
processor, prepare it for exchange. After exchanging blocks we need to
perform a similar update as if the neighbor lay on the same processor,
i.e. we need to perform copying, restriction, or prolongation.
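A structural sketch of this decision; the routine, its arguments and the
assumption of a 2:1 level balance are illustrative only:

  ! decide how a block's boundary is updated from one neighbor
  subroutine update_from_neighbor(mycpu, ncpu, dlev)
    implicit none
    integer, intent(in) :: mycpu, ncpu  ! this processor and the neighbor's
    integer, intent(in) :: dlev         ! neighbor level minus block level

    if (ncpu /= mycpu) then
      ! remote neighbor: mark it for exchange; after the exchange the same
      ! copy/restriction/prolongation as in the local case is applied
      return
    end if

    select case (dlev)             ! assumes at most one level of difference
    case (0)
      ! same level: copy the neighbor's cells into the ghost zones
    case (1)
      ! finer neighbor: restrict (shrink) its cells into the ghost zones
    case (-1)
      ! coarser neighbor: prolongate (expand) its cells into the ghost zones
    end select
  end subroutine update_from_neighbor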
The blocks of the initial mesh are now divided between all processes. The
procedure also updates the %cpu field of all neighbors according to the
number of the process they are assigned to.
The subroutine 'deallocate_block' takes care of the plist pointer now.
It also allows for the flexible removal of any block, even from the
middle of the list, by updating the %next and %prev fields of the
previous and next blocks, respectively.
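A sketch of the unlinking step, assuming a module-level 'plist' head
pointer and a block type with %next and %prev components:

  ! unlink a block from anywhere in the list, then free it
  subroutine deallocate_block(pblock)
    implicit none
    type(block), pointer :: pblock

    if (associated(pblock%prev)) pblock%prev%next => pblock%next
    if (associated(pblock%next)) pblock%next%prev => pblock%prev
    if (associated(plist, pblock)) plist => pblock%next  ! head was removed

    deallocate(pblock)
  end subroutine deallocate_block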
In addition, we now store the %cpu field of each block in the HDF5 files
too.
The domain initialization is performed in problem.F90 now. This will
allow the user to initialize their own domain according to the defined
problem. Since the initialization is problem-dependent, we don't need the
subroutine allocate_blocks anymore.
A new 'mpitools' module has been added. This module contains subroutines
to initialize, deinitialize and handle MPI communication. This initial
version can now initialize and deinitialize parallelization. It does not
support full parallelization yet.
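A minimal sketch of what such a module might provide; the variable names
(ncpus, ncpu) and routine names are assumptions:

  module mpitools
    use mpi
    implicit none
    integer, save :: ncpus = 1   ! number of processes
    integer, save :: ncpu  = 0   ! rank of this process

  contains

    ! start MPI and store the process count and rank
    subroutine init_mpi()
      integer :: ierr
      call mpi_init(ierr)
      call mpi_comm_size(MPI_COMM_WORLD, ncpus, ierr)
      call mpi_comm_rank(MPI_COMM_WORLD, ncpu, ierr)
    end subroutine init_mpi

    ! shut MPI down
    subroutine finalize_mpi()
      integer :: ierr
      call mpi_finalize(ierr)
    end subroutine finalize_mpi

  end module mpitools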
We use the field 'neigh' instead of the field 'pneigh'. The field
'neigh' is of type 'blockref', which simplifies referencing the neighbor
blocks when using MPI. Each reference contains two fields: cpu and id.
In the MPI version we cannot use the pointers stored in pneigh, but
using blockref structures we can refer to blocks belonging to other
processors.
The new type 'blockref' is a reference to a block by two integer
numbers: the first points to the cpu the block resides on, and the
second one is the block identifier.
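A sketch of this reference type; the default values and the shape of the
'neigh' array are illustrative:

  ! reference to a block by process number and identifier
  type blockref
    integer :: cpu = -1    ! process the referenced block resides on
    integer :: id  = -1    ! identifier of that block on its process
  end type blockref

  ! each block can then store plain references instead of pointers, e.g.
  ! type(blockref) :: neigh(2, 2)   ! one reference per direction and side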
Now, we can define any shape for a given problem inside the domain,
which is not updated during the evolution. This allows for using the
sources of any kind in problem studies, such as the colliding winds in
binary stars.
Now the subroutine init_problem calls the initialization of the selected
problem determined by the config parameter 'problem'. In this way we can
choose the problem without recompiling the code.
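A sketch of such a dispatch; the problem names and the commented-out
initializers are hypothetical placeholders, not the actual ones:

  ! choose the problem-specific setup from the config parameter 'problem'
  subroutine init_problem(problem)
    implicit none
    character(len=*), intent(in) :: problem

    select case (trim(problem))
    case ('blast')
      ! call init_blast()    - hypothetical problem-specific initializer
    case ('binary')
      ! call init_binary()   - e.g. the colliding winds setup
    case default
      write (*,*) "init_problem: unknown problem '", trim(problem), "'"
    end select
  end subroutine init_problem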
Now we can control the levels of refinement/derefinement criterion from
the config file. The bounds of the domain are stored in the HDF5 files
from now on.
The generation of dx, dy, dz for each level didn't include the base
grid, i.e. they were 2 times larger. This is now fixed. There are also
plenty of minor corrections in different places.
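For reference, a hedged sketch of the corrected spacing, counting the
base grid as level 1 and halving the cell size once per level; the
variable names and the exact convention are assumptions:

  ! cell size in x for every refinement level
  subroutine generate_spacing(maxlev, nbase, xmin, xmax, adx)
    implicit none
    integer, intent(in)  :: maxlev, nbase    ! number of levels, base cells in x
    real,    intent(in)  :: xmin, xmax
    real,    intent(out) :: adx(maxlev)
    integer :: l

    do l = 1, maxlev
      adx(l) = (xmax - xmin) / real(nbase * 2**(l - 1))
    end do
  end subroutine generate_spacing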
It looks like the refining and derefining work more or less, at least
without interrupting the execution. Nevertheless, there are still some
artifacts, like the loss of symmetry after some time or inefficient
derefining of the mesh. This could be caused by the solver, however.
The refinement criterion is computed using pressure now.
Now all blocks update their boundary values from the neighboring blocks.
Blocks of the same level copy their values, but blocks of different
levels shrink or expand the boundary values from the neighbor,
depending on the level change. In order to complete this I've
implemented the subroutines expand and interpolate in the interpolation
module.
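A simple sketch of what 'expand' does, using plain injection for
brevity; the actual routine may interpolate to higher order:

  ! expand a coarse 2D array onto a grid twice as fine: each coarse cell
  ! fills its four children (first-order prolongation)
  subroutine expand(nc, mc, uc, uf)
    implicit none
    integer, intent(in)  :: nc, mc           ! coarse dimensions
    real,    intent(in)  :: uc(nc, mc)       ! coarse data
    real,    intent(out) :: uf(2*nc, 2*mc)   ! expanded data
    integer :: i, j

    do j = 1, mc
      do i = 1, nc
        uf(2*i-1:2*i, 2*j-1:2*j) = uc(i, j)
      end do
    end do
  end subroutine expand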
By default the boundary conditions are set to periodic. So far only this
type of condition is supported, but the other types should be easy to
implement.
The subroutine 'boundary' sweeps over all leaf blocks. For each block it
sweeps over its neighbors and performs the boundary update. This is
still an initial version; it supports only neighboring blocks of the
same refinement level.
The function to calculate the maximum speed in the block has been added.
This function is used to determine the maximum speed globally, which is
then used to estimate the next time step.
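A hedged sketch of such a function for a simple 2D hydrodynamical state;
the variable layout, gamma and the time-step formula are assumptions:

  ! maximum signal speed |v| + c_s over one block
  function maxspeed(dn, vx, vy, pr, gamma) result(cmax)
    implicit none
    real, dimension(:,:), intent(in) :: dn, vx, vy, pr  ! density, velocity, pressure
    real, intent(in)                 :: gamma
    real                             :: cmax

    cmax = maxval(sqrt(vx**2 + vy**2) + sqrt(gamma * pr / dn))
  end function maxspeed

  ! the global maximum (after an MPI reduction) then bounds the time step:
  !   dt = cfl * dx_min / cmax_global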