MESH STRUCTURE
- subroutine mesh_update() performs refinement/derefinement of blocks
using the new structures of meta and data blocks; it is implemented to
be MPI friendly as well
- update subroutine derefine_block() to use new structures of meta and
data blocks
- update subroutines restrict_block() and prolong_block() to use new
meta and data structures
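The idea behind the restriction and prolongation of block data can be
illustrated with a generic sketch (this is not the actual
implementation, which operates on the new block_data structures; the 2D
arrays, 2:1 refinement, and zeroth-order prolongation are assumptions):

  ! a minimal sketch of restriction/prolongation between refinement levels
  subroutine restrict_patch(uf, uc)
    implicit none
    real, dimension(:,:), intent(in)  :: uf   ! fine data, twice the size of uc
    real, dimension(:,:), intent(out) :: uc   ! coarse data
    integer :: i, j
    do j = 1, size(uc, 2)
      do i = 1, size(uc, 1)
        ! average each 2x2 group of fine cells into one coarse cell
        uc(i,j) = 0.25 * sum(uf(2*i-1:2*i, 2*j-1:2*j))
      end do
    end do
  end subroutine restrict_patch

  subroutine prolong_patch(uc, uf)
    implicit none
    real, dimension(:,:), intent(in)  :: uc   ! coarse data
    real, dimension(:,:), intent(out) :: uf   ! fine data, twice the size of uc
    integer :: i, j
    do j = 1, size(uc, 2)
      do i = 1, size(uc, 1)
        ! zeroth-order injection: copy each coarse value to its 2x2 children
        uf(2*i-1:2*i, 2*j-1:2*j) = uc(i,j)
      end do
    end do
  end subroutine prolong_patch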
BOUNDARY CONDITIONS
- using a list of info blocks, the implementation of boundary conditions
for neighbor blocks lying on different processors is now complete; it
is simpler and clearer than the previous implementation
BOUNDARY CONDITIONS
- blocks belonging to the same process can exchange boundaries now;
blocks with neighbors from other processes still need to be updated
BLOCK STRUCTURE
- a new structure block_info has been added which will be used to
define lists of blocks which have neighbors belonging to other
processes; these lists will later be used to exchange block data
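A minimal sketch of what an element of such an info list could look
like (all field names below are illustrative assumptions, not
necessarily the ones used in the code):

  type block_info
    type(block_meta), pointer :: block   ! local block with a remote neighbor
    integer                   :: cpu     ! rank of the process owning the neighbor
    integer                   :: id      ! identifier of the neighbor block
    type(block_info), pointer :: next    ! next element of the list
  end type block_info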
BLOCK STRUCTURE: MPI
- divide the initial block structure between all processors and remove
non-local data blocks
DATA I/O
- dump the total number of blocks and the number of data blocks
BOUNDARY CONDITIONS
- initial work on the boundary conditions for the new block structures
MESH STRUCTURE
- with the new meta and data block structures the refinement of a block
works correctly now
- increase and decrease the number of blocks when they are allocated
and deallocated, respectively
MESH STRUCTURE
- add subroutines metablock_setleaf(), metablock_unsetleaf() to set and
unset the leaf flag of a meta block
- add subroutine metablock_setconfig() to set the config of a block
- add subroutine metablock_setlevel() to set the refinement level of a
block
MESH STRUCTURE
- add subroutine associate_blocks() to associate a pair of meta and
data blocks
- add subroutine datablock_setbounds() to set the geometry bounds for a
given block
MESH STRUCTURE
- add subroutines allocate_metablock() and allocate_datablock() which
allocate new meta and data blocks in memory
- add subroutines append_metablock() and append_datablock() which append
the allocated block to the meta and data block lists
- create root meta and data blocks in domain_default()
- create two new pointers, last_meta and last_data, pointing to the
last blocks in the meta and data block lists
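Appending presumably reduces to a standard tail insertion into the
linked list; a sketch under that assumption (the exact interface of
allocate_metablock() is assumed):

  subroutine append_metablock(pmeta)
    implicit none
    type(block_meta), pointer :: pmeta

    call allocate_metablock(pmeta)          ! allocate a fresh meta block

    if (.not. associated(list_meta)) then   ! empty list: block becomes the head
      list_meta => pmeta
    else                                    ! otherwise link it after the tail
      last_meta%next => pmeta
      pmeta%prev     => last_meta
    end if
    last_meta => pmeta                      ! the new block is the new tail
  end subroutine append_metablock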
MESH STRUCTURE
- initialize meta and data block lists in init_blocks()
- deallocate and nullify meta and data block lists in clear_blocks()
- new subroutine deallocate_metablock() for meta block deallocation
- new subroutine deallocate_datablock() for data block deallocation
MESH STRUCTURE
- two types of structures are created:
block_meta - structure storing metadata of each block
block_data - structure storing geometry and variable information
- two lists corresponding to stored objects of block_meta and
block_data:
list_meta - the list of objects of block_meta type is shared and
synchronized between processors
list_data - the list of objects of block_data type is private to each
processor
- corresponding pointers to both structures are created too
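A rough sketch of how these structures and list pointers might be
declared (only fields mentioned elsewhere in this log are included; the
module name, kinds, dimensions, and any additional fields are
assumptions):

  module blocks   ! module name assumed
    implicit none

    type block_meta
      integer                   :: id, cpu     ! identifier and owning process
      integer                   :: level       ! refinement level
      integer                   :: pos         ! position in the parent block
      logical                   :: leaf        ! leaf flag
      type(block_meta), pointer :: next, prev  ! links in list_meta
      type(block_data), pointer :: data        ! associated data block (local only)
    end type block_meta

    type block_data
      type(block_meta), pointer :: meta        ! back-reference to the meta block
      real                      :: xmin, xmax, ymin, ymax  ! geometry bounds (2D sketch)
      real, dimension(:,:,:), pointer :: u     ! array of variables
      type(block_data), pointer :: next, prev  ! links in list_data
    end type block_data

    ! list heads and tails
    type(block_meta), pointer :: list_meta, last_meta  ! shared between processors
    type(block_data), pointer :: list_data, last_data  ! private to each processor

  end module blocks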
REFINEMENT
- move the bounds for the criterion calculation by one cell out of the
domain, so we catch the refinement before the gradients enter the
block
MAINTENANCE
- remove junk code from the update_mesh subroutine
The initial block mesh refinement seems to be optimized now. However,
more work on the MPI version is required, e.g. refine blocks on all CPUs
until the number of blocks exceeds some threshold dependent on the
number of CPUs, and from that point refine independently on each CPU.
The initial work on the mesh update has been done too, but the processes
do not exchange block information yet.
This next approach to selecting blocks for refinement/derefinement with
MPI seems to be working. Moreover, it is simple and efficient; however,
there might still be some room for improvement.
It seems to be pretty complex. So far I collect the blocks selected for
refinement which have neighbors lying on other processors. Using this
list I mark those neighbors from other processors for refinement if
required. The remaining tasks are the neighbor update after refinement
and the MPI version of derefinement.
The boundary conditions finally work when using MPI with an arbitrary
number of processors. This is, however, a dirty hack. It has to be done
in a better way later, since now it may be very tricky to generalize the
code to 3D.
The next time step should be reduced to the minimum value over all
processes. This has been added in subroutine 'evolution' now.
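Reducing the time step over all processes presumably comes down to a
single MPI_ALLREDUCE with the MPI_MIN operator; a sketch (the
subroutine name and the real kind are assumptions):

  subroutine reduce_timestep(dt)
    use mpi
    implicit none
    real(kind=8), intent(inout) :: dt
    real(kind=8)                :: dt_local
    integer                     :: ierr

    dt_local = dt
    ! take the smallest time step over all processes before advancing
    call MPI_ALLREDUCE(dt_local, dt, 1, MPI_DOUBLE_PRECISION, MPI_MIN, &
                       MPI_COMM_WORLD, ierr)
  end subroutine reduce_timestep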
The block structure contains a new field %pos which specifies the
position of the child block in its parent.
In this version we first determine the number of blocks to exchange for
each process, then we update this information across all processes.
Using this information we send the IDs and levels of the blocks and
their data to the neighboring processes.
In this version we send each block only once. This block is used to
update the boundaries of all its neighbors. We use three communication
steps between each pair of processes: the number of blocks to send, the
IDs and levels of the blocks which are sent, and the %u fields of all
sent blocks packed as one big buffer 'rbuf'.
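An outline of such a three-stage exchange with one neighboring process
(the packing of IDs and levels into 'sidl', the per-block data size
'blksize', and the message tags are assumptions; only the name 'rbuf'
comes from the description above):

  subroutine exchange_with_peer(peer, blksize, nsend, sidl, sbuf, &
                                nrecv, ridl, rbuf)
    use mpi
    implicit none
    integer, intent(in)                    :: peer, blksize, nsend
    integer, intent(in)                    :: sidl(:)  ! packed (id, level) pairs
    real(kind=8), intent(in)               :: sbuf(:)  ! packed %u data to send
    integer, intent(out)                   :: nrecv
    integer, allocatable, intent(out)      :: ridl(:)
    real(kind=8), allocatable, intent(out) :: rbuf(:)
    integer :: status(MPI_STATUS_SIZE), ierr

    ! stage 1: exchange the number of blocks each side is going to send
    call MPI_SENDRECV(nsend, 1, MPI_INTEGER, peer, 1, &
                      nrecv, 1, MPI_INTEGER, peer, 1, &
                      MPI_COMM_WORLD, status, ierr)

    ! stage 2: exchange the IDs and levels of those blocks
    allocate(ridl(2*nrecv))
    call MPI_SENDRECV(sidl, 2*nsend, MPI_INTEGER, peer, 2, &
                      ridl, 2*nrecv, MPI_INTEGER, peer, 2, &
                      MPI_COMM_WORLD, status, ierr)

    ! stage 3: exchange the %u fields of all blocks packed as one buffer
    allocate(rbuf(blksize*nrecv))
    call MPI_SENDRECV(sbuf, blksize*nsend, MPI_DOUBLE_PRECISION, peer, 3, &
                      rbuf, blksize*nrecv, MPI_DOUBLE_PRECISION, peer, 3, &
                      MPI_COMM_WORLD, status, ierr)
  end subroutine exchange_with_peer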
Now, in the boundary subroutine we collect information about the blocks
whose neighbors lie on a different processor. Using this information we
send the neighboring blocks and update the block boundaries. This is the
initial implementation, not very well optimized and containing some
bugs.
There are several conditions for the boundaries. If the neighbor block
is on the same processor, proceed as usual. If it lies on a different
processor, prepare it for exchange. After exchanging blocks we need to
perform a similar update as if the neighbor lay on the same processor,
i.e. we need to perform copying, restriction, or prolongation.
The blocks of the initial mesh are now divided between all processes.
The procedure also updates the %cpu field of all neighbors according to
the number of the process they are assigned to.
The subroutine 'deallocate_block' takes care of the plist pointer now.
It also allows for the flexible removal of any block, even from the
middle of the list, by updating the %next and %prev fields of the
previous and next blocks, respectively.
In addition, we now store the %cpu field of each block in the HDF5 files
too.
The domain initialization is performed in problem.F90 now. This will
allow a user to initialize their own domain according to the defined
problem. Since the initialization is problem-dependent, we don't need
the subroutine allocate_blocks anymore.
A new 'mpitools' module has been added. This module contains subroutines
to initialize, deinitialize and handle MPI communication. This initial
version can now initialize and deinitialize parallelization. It does not
support full parallelization yet.
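A minimal version of such a module might look like the following (the
subroutine and variable names, init_mpi, clear_mpi, ncpu and ncpus, are
assumptions):

  module mpitools
    use mpi
    implicit none
    integer, save :: ncpus = 1   ! number of processes
    integer, save :: ncpu  = 0   ! rank of this process

  contains

    subroutine init_mpi(ierr)
      implicit none
      integer, intent(out) :: ierr
      call MPI_INIT(ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, ncpus, ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, ncpu,  ierr)
    end subroutine init_mpi

    subroutine clear_mpi(ierr)
      implicit none
      integer, intent(out) :: ierr
      call MPI_FINALIZE(ierr)
    end subroutine clear_mpi

  end module mpitools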
We use the field 'neigh' instead of the field 'pneigh'. The field
'neigh' is of type 'blockref', which simplifies referencing the
neighbor blocks when using MPI. Each reference contains two fields: cpu
and id. In the MPI version we cannot use the pointers stored in pneigh,
but using blockref structures we can refer to blocks belonging to other
processors.
The new type 'blockref' is a reference to a block by two integer
numbers: the first points to the cpu on which the block resides, and
the second is the block identifier.
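In Fortran terms this is just a small derived type:

  type blockref
    integer :: cpu   ! rank of the process the block resides on
    integer :: id    ! identifier of the block on that process
  end type blockref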
Now, we can define any shape for a given problem inside the domain,
which is not updated during the evolution. This allows for using
sources of any kind in problem studies, such as the colliding winds in
binary stars.
Now the subroutine init_problem calls the initialization of the
selected problem determined by the config parameter 'problem'. In this
way we can choose the problem without recompiling the code.
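The selection presumably boils down to a string comparison on the
'problem' parameter; a sketch with hypothetical problem names and
initializer subroutines:

  subroutine init_problem(problem)
    implicit none
    character(len=*), intent(in) :: problem   ! config parameter 'problem'

    select case(trim(problem))
    case('blast')          ! hypothetical problem names
      call init_blast()
    case('binary')
      call init_binary()
    case default
      write(*,*) 'init_problem: unknown problem: ', trim(problem)
    end select
  end subroutine init_problem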
Now we can control the refinement/derefinement criterion levels from
the config file. The bounds of the domain are stored in the HDF5 files
from now on.