MESH REFINEMENT, INITIAL GEOMETRY
- if there is only one block and it is set not to be refined, refine it
  anyway, to catch situations in which the initial resolution is too low
  for the problem initialization
MESH STRUCTURE, REFINEMENT
- subroutine prolong_block() has been completely rewritten; it now
  supports 2D and 3D boxes, and the calculation of bounds for the
  expanded array is automated
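A minimal sketch of what prolongation does, assuming simple injection in 2D (the actual prolong_block() also handles 3D boxes and computes the expanded-array bounds automatically; the function name and layout below are illustrative only):

```c
#include <assert.h>

/* Hypothetical sketch: prolong a 2D coarse block of nx x ny cells into
   a refined block of 2*nx x 2*ny cells by injection, i.e. each coarse
   cell is copied into its four fine children.  This only illustrates
   the index arithmetic the automated bounds calculation must produce. */
static void prolong2d(const double *coarse, double *fine, int nx, int ny)
{
    for (int j = 0; j < ny; j++)
        for (int i = 0; i < nx; i++) {
            double v = coarse[j * nx + i];
            for (int dj = 0; dj < 2; dj++)
                for (int di = 0; di < 2; di++)
                    fine[(2 * j + dj) * (2 * nx) + (2 * i + di)] = v;
        }
}
```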
MESH STRUCTURE, DEREFINEMENT
- subroutine restrict_block() has been completely rewritten; it now
  supports 2D and 3D boxes, and the calculation of bounds for the
  source and destination arrays is automated
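The corresponding restriction step can be sketched as averaging each group of 2x2 fine cells into one parent cell (again a 2D illustration only; the real restrict_block() supports 3D and automated bounds, and its exact stencil is not shown in this log):

```c
#include <assert.h>

/* Hypothetical sketch of restriction: every coarse (destination) cell
   receives the average of its 2x2 fine (source) children.  nx, ny are
   the coarse dimensions; the fine block is 2*nx x 2*ny. */
static void restrict2d(const double *fine, double *coarse, int nx, int ny)
{
    for (int j = 0; j < ny; j++)
        for (int i = 0; i < nx; i++) {
            double s = 0.0;
            for (int dj = 0; dj < 2; dj++)
                for (int di = 0; di < 2; di++)
                    s += fine[(2 * j + dj) * (2 * nx) + (2 * i + di)];
            coarse[j * nx + i] = 0.25 * s;
        }
}
```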
FIXES
- the neigh field of the meta block structure must use the ndims,
  nsides, and nfaces variables
BLOCK STRUCTURE
- use the names pmeta and pdata for pointers to meta and data blocks,
  respectively; this makes the code easier to read
HOSTS
- turn on DEBUG
MESH STRUCTURE, MPI
- implement auto balancing, which resends blocks to the neighboring
  processors in order to keep the number of data blocks the same on
  each processor
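The balancing target itself can be sketched as an even distribution with the remainder spread over the lowest ranks (a minimal illustration; the function name is hypothetical and the code's actual tie-breaking rule is not stated in this log):

```c
#include <assert.h>

/* Hypothetical sketch of the auto-balancing target: distribute nblocks
   data blocks as evenly as possible over nprocs processes.  Each rank
   gets nblocks/nprocs blocks, and the first nblocks%nprocs ranks get
   one extra; rebalancing then resends blocks until every process
   holds its target count. */
static int target_count(int nblocks, int nprocs, int rank)
{
    return nblocks / nprocs + (rank < nblocks % nprocs ? 1 : 0);
}
```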
MESH STRUCTURE, MPI
- correct the initial division of the geometry, so that each process is
  attached to the same number of blocks; blocks at the top level are
  allowed to be attached to different processors even if they are
  children of the same block
MESH STRUCTURE, MPI
- if the child blocks selected for derefinement lie on different
  processors, move them all to the processor of the parent block
BLOCK STRUCTURE, MPI
- in the best division calculation use the number of data blocks only,
  but if the blocks are at the top level and have the same parent,
  pack them together on the same process
MESH STRUCTURE
- since blocks which are not leaves are not used to calculate the field
  evolution, we do not need to keep their data blocks allocated; these
  changes allocate the parent data block only for blocks which are
  derefined, and deallocate the data block of a block selected for
  refinement; in the case of many blocks and many refinement levels
  these modifications save a substantial amount of memory
MESH STRUCTURE
- move the bounds of the block (xmin, xmax, ...) from the data to the
  meta structure; this way we can deallocate data blocks without losing
  the information about their bounds
PROBLEMS
- implement the blast problem in a way similar to the one implemented
  in the GODUNOV code
- generate an initial domain of 2x3 blocks at the lowest level; this
  should be generalized to any block dimensions
MESH STRUCTURE
- add a new variable rdims(1:3) specifying the dimensions of the
  initial block structure at the lowest level
- improve the calculation of the effective resolution and of the
  spatial increments
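One plausible form of the effective-resolution calculation, given rdims and a fixed number of cells per block per direction (the level-counting convention below is an assumption, not taken from the code):

```c
#include <assert.h>

/* Hypothetical sketch: with rdims[d] blocks at the lowest level,
   ncells cells per block per direction, and maxlev refinement levels,
   the effective resolution in direction d could be
       rdims[d] * ncells * 2^(maxlev - 1),
   and the spatial increment the domain size divided by it.  Whether
   the code counts the base grid as level 1 is an assumption here. */
static int effective_res(int rdim, int ncells, int maxlev)
{
    return rdim * ncells * (1 << (maxlev - 1));
}
```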
MESH STRUCTURE
- subroutine mesh_update() performs refinement/derefinement of blocks
  using the new structures of meta and data blocks; it is implemented
  to be MPI friendly as well
- update subroutine derefine_block() to use new structures of meta and
data blocks
- update subroutines restrict_block() and prolong_block() to use new
meta and data structures
BOUNDARY CONDITIONS
- using a list of info blocks, the implementation of boundary
  conditions for neighbor blocks lying on different processors has been
  successfully completed; it is simpler and clearer than the previous
  implementation
BOUNDARY CONDITIONS
- blocks belonging to the same process can now exchange boundaries;
  blocks with neighbors from other processes still need to be updated
BLOCK STRUCTURE
- a new structure, block_info, has been added, which will be used to
  define lists of blocks which have neighbors belonging to other
  processes; these lists will later be used to exchange block data
BLOCK STRUCTURE, MPI
- divide initial block structure between all processors and remove
non-local data blocks
DATA I/O
- dump the total number of blocks and the number of data blocks
BOUNDARY CONDITIONS
- initial work on the boundary conditions for the new block structure
MESH STRUCTURE
- with the new meta and data block structures the refinement of a block
  now works correctly
- increase and decrease the number of blocks when blocks are allocated
  and deallocated, respectively
MESH STRUCTURE
- add subroutines metablock_setleaf(), metablock_unsetleaf() to set and
unset the leaf flag of a meta block
- add subroutine metablock_setconfig() to set the config of a block
- add subroutine metablock_setlevel() to set the refinement level of a
block
MESH STRUCTURE
- add subroutine associate_blocks() to associate a pair of meta and
data blocks
- add subroutine datablock_setbounds() to set the geometry bounds for a
given block
MESH STRUCTURE
- add subroutines allocate_metablock() and allocate_datablock(),
  allocating new meta and data blocks in memory
- add subroutines append_metablock() and append_datablock(), appending
  the allocated block to the meta and data block lists
- create root meta and data blocks in domain_default()
- create two new pointers, last_meta and last_data, pointing to the
  last blocks in the meta and data block lists
MESH STRUCTURE
- initialize meta and data block lists in init_blocks()
- deallocate and nullify meta and data block lists in clear_blocks()
- new subroutine deallocate_metablock() for meta block deallocation
- new subroutine deallocate_datablock() for data block deallocation
MESH STRUCTURE
- two types of structures are created:
    block_meta - structure storing the metadata of each block
    block_data - structure storing geometry and variable information
- two lists corresponding to the stored objects of block_meta and
  block_data:
    list_meta - the list of objects of block_meta type; it is shared
                and synchronized between processors
    list_data - the list of objects of block_data type; it is private
                to each process
- corresponding pointers to both structures are created too
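A rough C rendering of the two structures and their lists, to fix ideas (the original is Fortran; field names beyond those mentioned in this log, such as the back pointer set by associate_blocks() and the %u data field, are illustrative):

```c
#include <stddef.h>
#include <assert.h>

struct block_data;

/* list_meta is identical on every process; list_data holds only the
   local blocks, so a remote block's data pointer stays NULL. */
struct block_meta {
    int                leaf;   /* set/unset via metablock_setleaf()    */
    int                level;  /* refinement level                     */
    struct block_data *data;   /* NULL if the block is not local       */
    struct block_meta *next;   /* next element of list_meta            */
};

struct block_data {
    struct block_meta *meta;   /* back pointer (associate_blocks())    */
    double            *u;      /* conserved variables (the %u field)   */
    struct block_data *next;   /* next element of list_data            */
};

/* Append a freshly allocated meta block at the tail of the list, the
   way append_metablock() would use the last_meta pointer. */
static struct block_meta *append_meta(struct block_meta **head,
                                      struct block_meta **last,
                                      struct block_meta *nb)
{
    nb->next = NULL;
    if (*last)
        (*last)->next = nb;
    else
        *head = nb;
    *last = nb;
    return nb;
}
```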
REFINEMENT
- move the bounds for the criterion calculation one cell out of the
  domain, so that we catch the refinement before its gradients enter
  the block
MAINTENANCE
- remove junk code from the update_mesh subroutine
The initial block mesh refinement seems to be optimized now. However,
more work on the MPI version is required, e.g. refining blocks on all
CPUs until the number of blocks exceeds some number dependent on the
number of CPUs, and from that point refining independently on each CPU.
The initial work on the mesh update has been done too, but the
processes do not exchange block information yet.
This next approach to selecting blocks for refinement/derefinement with
MPI seems to be working. Moreover, it is simple and efficient; however,
there might still be some room for improvement.
It seems to be pretty complex. So far I collect blocks selected for
refinement which have neighbors lying on other processors. Using this
information, I mark the neighbors from other processors for refinement
if required. The remaining tasks are the neighbor update after
refinement and the MPI version of derefinement.
The boundary conditions finally work when using MPI with an arbitrary
number of processors. This is, however, a dirty hack and has to be done
in a better way later, since it may now be very tricky to generalize
the code to 3D.
The next time step should be reduced to the minimum value over all
processes. This has now been added in subroutine 'evolution'.
The block structure contains a new field %pos which specifies the
position of the child block within its parent.
In this version we first determine the number of blocks to exchange
for each process, and then update this count across all processes.
Using this information, we send the IDs and levels of the blocks, and
their data, to the neighboring processes.
In this version we send each block only once. This block is used to
update the boundaries of all its neighbors. We use three communications
between each pair of processes: the number of blocks to send, the IDs
and levels of the blocks which are sent, and the %u fields of all sent
blocks packed as one big buffer 'rbuf'.
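The packing step of the third communication can be sketched as follows (the MPI call that ships 'rbuf' is omitted; the function name and flat layout are assumptions, since this log only states that the %u fields are packed into one big buffer):

```c
#include <string.h>
#include <assert.h>

/* Hypothetical sketch: copy the %u fields of all blocks destined for
   one neighbor contiguously into a single send buffer 'rbuf', so the
   data can be shipped with one communication.  ncells is the number
   of values stored per block. */
static void pack_blocks(double *rbuf, double *const blocks[],
                        int nblocks, int ncells)
{
    for (int b = 0; b < nblocks; b++)
        memcpy(rbuf + (size_t)b * ncells, blocks[b],
               (size_t)ncells * sizeof(double));
}
```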