The initial block mesh refinement seems to be optimized now. However,
more work is required in the MPI version, e.g. refining blocks on all
CPUs until the number of blocks exceeds some threshold dependent on the
number of CPUs, and from that point on refining independently on each
CPU.
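A possible shape of that strategy (all names here are hypothetical,
since this part is not implemented yet):

    ! hedged sketch of the planned two-phase strategy; nthreshold and
    ! the two helper subroutines are hypothetical names
    if (nblocks <= nthreshold * ncpus) then
      call refine_globally()   ! all CPUs refine the same, replicated mesh
    else
      call refine_locally()    ! each CPU refines only the blocks it owns
    end if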
The initial work on the mesh update has been done too, but the processes
do not exchange block information.
The next approach to selecting blocks for refinement/derefinement with
MPI seems to be working. It is simple and efficient; however, there may
still be some room for improvement.
The problem is pretty complex, though. So far I collect the blocks
selected for refinement which have neighbors lying on other processors,
and use this list to mark those remote neighbors for refinement if
required (see the sketch below). The remaining pieces are the neighbor
update after refinement and the MPI version of derefinement.
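One simple way to make the selection globally consistent is to merge
the per-process refinement flags; a hedged sketch, assuming globally
numbered blocks and a flag array 'refine' indexed by block id (both
assumptions, not necessarily what the code does):

    ! merge local refinement flags over all processes, so a block
    ! flagged on any CPU is seen by the CPU that owns it; 'nid' (the
    ! global number of block ids) and 'refine' are assumed names
    integer, dimension(nid) :: refine, refine_all
    integer :: ierr

    call MPI_Allreduce(refine, refine_all, nid, MPI_INTEGER, MPI_MAX,   &
                       MPI_COMM_WORLD, ierr)
    refine(:) = refine_all(:)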
The boundary conditions finally work when using MPI with an arbitrary
number of processors. This is, however, a dirty hack. It has to be done
in a better way later, since in its current form it may be very tricky
to generalize the code to 3D.
The next time step must be reduced to the minimum value over all
processes. This has now been added in the subroutine 'evolution'.
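A minimal sketch of the reduction, assuming 'use mpi' is in effect and
the local step is kept in a double precision variable dt:

    ! reduce the local time step to its global minimum over all processes
    real(kind=8) :: dt, dt_all
    integer      :: ierr

    call MPI_Allreduce(dt, dt_all, 1, MPI_DOUBLE_PRECISION, MPI_MIN,    &
                       MPI_COMM_WORLD, ierr)
    dt = dt_all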
The block structure contains a new field %pos which specifies the
position of a child block within its parent.
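For illustration, in 2D this can simply number the four children of a
block; everything except %pos in the snippet below is elided or
assumed:

    type block
      integer :: pos   ! position of this child in its parent, e.g.
                       ! 1..4 for the four quadrants in 2D
      ! ... the remaining fields are omitted here
    end type block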
The blocks of the initial mesh are now divided between all processes.
The procedure also updates the %cpu field of all neighbors according to
the rank of the process they reside on.
The subroutine 'deallocate_block' takes care of the plist pointer now.
It also allows for the flexible removal of any block, even from the
middle of the list, by updating the %next and %prev fields of the
previous and next blocks, respectively.
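The relinking itself is the usual doubly-linked-list removal; a minimal
sketch, where only %next, %prev and plist come from the text above:

    ! unlink pblock from the doubly linked list before deallocating it
    if (associated(pblock%prev)) then
      pblock%prev%next => pblock%next
    else
      plist => pblock%next              ! removing the head of the list
    end if
    if (associated(pblock%next)) pblock%next%prev => pblock%prev
    deallocate(pblock)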
In addition, we now store the %cpu field of each block in the HDF5 files
too.
The domain initialization is performed in problem.F90 now. This will
allow a user to initialize their own domain according to the defined
problem. Since the initialization is problem-dependent, we don't need
the subroutine allocate_blocks anymore.
A new 'mpitools' module has been added. This module contains
subroutines to initialize, finalize and handle MPI communication. The
initial version can only initialize and finalize the parallelization;
it does not support full parallelization yet.
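A minimal sketch of what such a module might look like; the variable
and subroutine names here are illustrative, not necessarily those used
in the code:

    module mpitools

      use mpi
      implicit none

      integer, save :: ncpus = 1   ! number of processes
      integer, save :: ncpu  = 0   ! rank of the current process

    contains

      subroutine init_mpi(ierr)
        integer, intent(out) :: ierr
        call MPI_Init(ierr)
        call MPI_Comm_size(MPI_COMM_WORLD, ncpus, ierr)
        call MPI_Comm_rank(MPI_COMM_WORLD, ncpu,  ierr)
      end subroutine init_mpi

      subroutine finalize_mpi(ierr)
        integer, intent(out) :: ierr
        call MPI_Finalize(ierr)
      end subroutine finalize_mpi

    end module mpitools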
We use the field 'neigh' instead of the field 'pneigh'. The field
'neigh' is of type 'blockref', which simplifies referencing neighbor
blocks when using MPI. Each reference contains two fields: cpu and id.
In the MPI version we cannot use the pointers stored in pneigh, but
using blockref structures we can refer to blocks belonging to other
processors.
The new type 'blockref' is a reference to a block through two integer
numbers: the first points to the cpu the block is running on, and the
second one is the block identifier.
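In Fortran this is just a pair of integers:

    type blockref
      integer :: cpu   ! rank of the process the block is running on
      integer :: id    ! identifier of the block on that process
    end type blockref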
The generation of dx, dy, dz for each level didn't include the base
grid, i.e. the values were twice as large as they should be. This is
fixed now. There are also plenty of minor corrections in different
places.
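Assuming levels are counted from 1 at the base grid, the corrected cell
sizes should halve with each level; a sketch with assumed variable
names:

    ! cell size at level l, counting the base grid as level 1;
    ! xmin, xmax, nx and maxlev are assumed names
    do l = 1, maxlev
      dx(l) = (xmax - xmin) / (nx * 2**(l - 1))
    end do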
It looks like the refining and derefining work more or less, at least
without interrupting the execution. Nevertheless, there are still some
artifacts, like a loss of symmetry after some time, or inefficient
derefinement of the mesh. This could be caused by the solver, however.
The refinement criterion is now computed from the pressure.
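The exact formula is not spelled out here; purely as an illustration,
an indicator of the following kind could be used:

    ! illustrative indicator only, not necessarily the implemented one:
    ! refine when the relative pressure jump between neighboring cells
    ! exceeds a threshold crefmax; pr(im,jm) and crefmax are assumed
    crit   = maxval(abs(pr(2:im,:) - pr(1:im-1,:))                      &
                    / min(pr(2:im,:), pr(1:im-1,:)))
    refine = crit > crefmax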
The subroutine 'boundary' sweeps over all leaf blocks. For each block
it sweeps over its neighbors and updates the boundaries. This is still
an initial version; it only supports neighboring blocks at the same
refinement level.
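Schematically, with hypothetical names for the loop bound and the
update routine:

    ! sketch of the sweep over leaf blocks and their neighbors;
    ! nsides and update_boundary are hypothetical names
    pblock => plist
    do while (associated(pblock))
      if (pblock%leaf) then
        do iside = 1, nsides
          call update_boundary(pblock, iside)  ! same-level neighbors only
        end do
      end if
      pblock => pblock%next
    end do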
A function calculating the maximum speed in a block has been added. It
is used to determine the maximum speed globally, which in turn is used
to estimate the next time step.
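A minimal sketch of such a function for pure hydrodynamics, where the
fastest signal speed in a cell is |v| + c_s; the variable names and the
adiabatic index value are assumptions:

    ! maximum signal speed |v| + c_s over a 2D block; dn, vx, vy, pr
    ! hold density, velocities and pressure (assumed names)
    function maxspeed(dn, vx, vy, pr) result(cmax)
      implicit none
      real, dimension(:,:), intent(in) :: dn, vx, vy, pr
      real, parameter :: gamma = 5.0 / 3.0   ! adiabatic index (assumed)
      real :: cmax
      cmax = maxval(sqrt(vx**2 + vy**2) + sqrt(gamma * pr / dn))
    end function maxspeed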
We compute the dFdx contribution along each direction and update the
total dU. Apart from that, I've added the generation of coordinate
variables, like dx, dxi, etc., in the mesh module. The next step is to
implement the HLL solver.
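Schematically, each one-dimensional sweep adds
-(F(i+1/2) - F(i-1/2))/dx to dU; a sketch of the x-sweep with assumed
array conventions:

    ! illustrative x-sweep: f(:,i) holds the flux through the interface
    ! between cells i and i+1, and dxi = 1/dx; all names are assumed
    do i = 2, im - 1
      du(:,i) = du(:,i) - dxi * (f(:,i) - f(:,i-1))
    end do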
I replaced the pointer 'pfirst', which pointed to the first element of
the block list, with 'plist'. I also improved the formatting of the
information printed during the initial mesh generation.
I've added the 'Z' configuration of the refined block and completed the
'N', 'C', 'D', and 'U' configurations. The pointers to the neighbors
are now implemented correctly. Finally, I've implemented correct
initial geometry generation: selecting the blocks to refine, then their
neighbors if they are at a lower level, and finally performing the
actual refinement.
A subroutine to refine a selected block has been implemented. It
creates 4 new blocks and inserts them before the parent block. It fills
in all block fields, like the parent, neighbors, leaf flags, bounds,
next and previous pointers, etc.
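A rough shape of that routine; the list helper and any field names not
mentioned above are hypothetical:

    ! rough sketch of refining one block into four children
    subroutine refine_block(pparent)
      type(block), pointer :: pparent, pchild
      integer :: p
      do p = 1, 4
        allocate(pchild)
        pchild%parent => pparent            ! fields as described above
        pchild%leaf   =  .true.
        pchild%level  =  pparent%level + 1
        pchild%pos    =  p
        ! set the bounds to the p-th quadrant of the parent and link
        ! the neighbor references here
        call insert_before(pchild, pparent) ! hypothetical list helper
      end do
      pparent%leaf = .false.
    end subroutine refine_block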
Implemented the generation of initial blocks in the N-configuration,
with proper attaching to the list, initialization of pointers,
neighbors and bounds, as well as the initial problem setup. Two new
files have been added: mesh.F90, handling the adaptive mesh structure,
and problem.F90, handling the problem initialization.