BLOCK STRUCTURE
- add a new field 'pos' to the meta block structure storing the position
of the current block within its parent block; this might be very useful
in determining which boundary region should be updated;
- add a new subroutine metablock_set_position() which sets the position
of the child in the parent block;
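A minimal sketch of how this might look (only the field name 'pos' and the
subroutine name come from the entries above; the module name, the other
components, and the dimensionality of 'pos' are assumptions):

  ! sketch only: 'pos' and metablock_set_position() follow the entries
  ! above, everything else (module name, other components) is assumed
  module blocks_pos_sketch

    implicit none

    type block_meta
      integer :: id     = -1   ! block identifier (assumed component)
      integer :: level  =  1   ! refinement level (assumed component)
      integer :: pos(3) =  0   ! position of this block within its parent
    end type block_meta

  contains

    subroutine metablock_set_position(pmeta, px, py, pz)

      ! set the position of the child block in its parent block

      type(block_meta), pointer :: pmeta
      integer, intent(in)       :: px, py, pz

      pmeta%pos(1) = px
      pmeta%pos(2) = py
      pmeta%pos(3) = pz

    end subroutine metablock_set_position

  end module blocks_pos_sketch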
VARIABLES
- create a new module 'variables' which stores references to variable
indices; different objects related to variables will be stored in this
module;
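A minimal sketch of such a module (only the module name 'variables' comes
from the entry above; the index names and their values are assumptions):

  module variables

    implicit none

    ! number of variables (assumed value)
    integer, parameter :: nvars = 5

    ! indices of the variables stored in the block arrays (assumed names)
    integer, parameter :: idn = 1   ! density
    integer, parameter :: imx = 2   ! X momentum / velocity
    integer, parameter :: imy = 3   ! Y momentum / velocity
    integer, parameter :: imz = 4   ! Z momentum / velocity
    integer, parameter :: ien = 5   ! total energy / pressure

  end module variables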
BLOCK STRUCTURE
- the global variable nblocks tells how many metablocks are allocated,
thus its name is changed to mblocks;
- remove unnecessary checking of the list allocation;
BLOCK STRUCTURE
- add a new field 'coord' to store block coordinates in the units of
the effective resolution;
- add a subroutine metablock_set_coord() to set the block coordinates;
IO
- the subroutine write_data() is now a wrapper which calls the subroutine
writing data in the selected file format; the file format is chosen at
compile time (see the sketch after this list);
- the subroutine write_data_h5() is a new subroutine taking care of the
initialization and storage of data in the HDF5 format; depending on the
file type it calls subroutines to create specific groups and store the
right data;
- new subroutines write_attributes_h5(), write_coordinates_h5(),
write_variables_h5(), write_variables_full_h5(), write_metablocks_h5(),
write_datablocks_h5(), and a number of other supporting subroutines to
store all data in the proper format for job restart, visualization, and
debugging;
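A sketch of how the wrapper might dispatch to the format-specific writer at
compile time (the preprocessor symbol HDF5 and the argument list are
assumptions; only write_data() and write_data_h5() come from the entries
above):

  subroutine write_data(ftype, nfile)

    implicit none

    character(len=1), intent(in) :: ftype   ! file type, e.g. 'p' or 'r'
    integer         , intent(in) :: nfile   ! consecutive file number

#ifdef HDF5
    ! the HDF5 writer creates the required groups and stores the data sets
    call write_data_h5(ftype, nfile)
#else
    write (*,*) 'write_data(): no file format was enabled at compilation!'
#endif

  end subroutine write_data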
MAKE
- add option to enable/disable HDF5 file compression;
BLOCK STRUCTURE
- add a new array to the data block structure to store the electromotive
force components; these components are located at the centers of cell
edges, so that the CT update of the staggered magnetic field components
will be easier;
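A sketch of the edge-centered storage (the component name 'emf', the index
layout, and the dimensions are assumptions):

  module blocks_emf_sketch

    implicit none

    integer, parameter :: im = 16, jm = 16, km = 16   ! assumed cell counts

    type block_data
      ! electromotive force components located at the centers of cell
      ! edges; keeping them in the block makes the CT update of the
      ! staggered magnetic field components easier
      real, dimension(:,:,:,:), allocatable :: emf
    end type block_data

  contains

    subroutine allocate_emf(pdata)

      type(block_data), pointer :: pdata

      ! first index: the EMF component (1 = X, 2 = Y, 3 = Z)
      allocate(pdata%emf(3, im, jm, km))
      pdata%emf(:,:,:,:) = 0.0

    end subroutine allocate_emf

  end module blocks_emf_sketch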
EVOLUTION
- add a new subroutine advance() which performs several substeps in order
to advance the solution in time by one step; the substeps are the update
of the numerical fluxes, the flux boundary update, the time advance of
the solution, the update of the mesh structure, the boundary update of
the conserved variables, and finally the estimation of the new time step
(see the sketch after this list);
- add a new subroutine update_flux() to update the numerical fluxes
stored in the data blocks;
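A minimal sketch of the one-step update sequence (only advance() and
update_flux() come from the entries above; the other subroutine names and
the argument list are assumptions):

  subroutine advance(dt)

    implicit none

    real, intent(in) :: dt

    call update_flux()          ! update the numerical fluxes in data blocks
    call boundary_flux()        ! synchronize fluxes between blocks (assumed name)
    call update_solution(dt)    ! advance the solution in time (assumed name)
    call update_mesh()          ! refine/derefine the mesh structure (assumed name)
    call boundary_variables()   ! update boundaries of conserved variables (assumed name)
    call new_time_step()        ! estimate the new time step (assumed name)

  end subroutine advance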
SCHEME
- add a new subroutine numerical_flux() to calculate fluxes at the proper
locations;
- add a new logical argument to the HLL and HLLC solvers in order to
specify whether the flux should be returned as a numerical flux or its
derivative;
BLOCK STRUCTURE
- add a multidimensional array to store the numerical fluxes in the
block structure; it seems that this is necessary for the boundary
update, since the fluxes should be synchronized between blocks at
different levels;
SCHEME
- implement Flux-CT scheme for the staggered magnetic field
integration;
BLOCK STRUCTURE
- use more space-efficient storage of the variables, which means storing
only the staggered components of the magnetic field; the cell-centered
components are calculated only when necessary;
EVOLUTION
- remove loops in the field updates; operations are now performed on
whole arrays;
BOUNDARY CONDITIONS
- remove loops in bnd_copy(); operations are now performed on the whole
array;
INTERPOLATION
- subroutine magtocen() has been rewritten to avoid problems with the
array allocation; the array of all variables is now passed as an
argument; the subroutine uses the indices of the face-centered and
cell-centered magnetic field components internally;
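A minimal sketch of the interpolation itself: each cell-centered component
is the arithmetic average of the two adjacent face-centered values (the
index names, their values, and the face-location convention are
assumptions; the indices themselves are introduced in the VARIABLES entry
below):

  subroutine magtocen(u)

    implicit none

    ! array of all variables: (variable index, i, j, k)
    real, dimension(:,:,:,:), intent(inout) :: u

    ! face-centered and cell-centered field indices (assumed values)
    integer, parameter :: ibx = 6, iby = 7, ibz = 8
    integer, parameter :: icx = 9, icy = 10, icz = 11

    integer :: im, jm, km

    im = size(u, 2)
    jm = size(u, 3)
    km = size(u, 4)

    ! assuming the face value with index i lies at the left face of cell i,
    ! the cell-centered value is the average of the faces i and i+1
    u(icx,1:im-1,:,:) = 0.5 * (u(ibx,1:im-1,:,:) + u(ibx,2:im,:,:))
    u(icy,:,1:jm-1,:) = 0.5 * (u(iby,:,1:jm-1,:) + u(iby,:,2:jm,:))
    u(icz,:,:,1:km-1) = 0.5 * (u(ibz,:,:,1:km-1) + u(ibz,:,:,2:km))

  end subroutine magtocen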
MAKE
- add a flag defining the Flux-CT scheme;
PROBLEM
- use predefined array variables instead of allocatable ones;
VARIABLES
- add indices for the magnetic field components, both face-centered and
cell-centered
SOLVER, MHD
- add support for the magnetohydrodynamic (MHD) equations to the
subroutines cons2prim() and prim2cons()
- add the MHD flux and the fastest speed calculation in the subroutine
fluxspeed()
- include the magnetosonic speed in the calculation of the maximum speed
in the system, required for the estimation of the new time step (see the
sketch after this list)
- extend the HLL solver in the subroutine hll() to support MHD
- calculate the magnetic field update according to a CT scheme in the
subroutine update()
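For reference, a sketch of the fast magnetosonic speed entering the maximum
speed estimate (the argument list and the unit convention, with the 4*pi
factor absorbed into the magnetic field, are assumptions):

  function fastspeed_x(dn, pr, bx, by, bz, gamma) result(cf)

    implicit none

    real, intent(in) :: dn, pr, bx, by, bz, gamma
    real             :: cf
    real             :: cs2, ca2, cax2

    cs2  = gamma * pr / dn                ! sound speed squared
    ca2  = (bx*bx + by*by + bz*bz) / dn   ! Alfven speed squared
    cax2 = bx*bx / dn                     ! Alfven speed squared along X

    ! fast magnetosonic speed along X:
    !   cf^2 = [ cs^2 + ca^2 + sqrt((cs^2 + ca^2)^2 - 4 cs^2 cax^2) ] / 2
    cf = sqrt(0.5 * (cs2 + ca2 + sqrt((cs2 + ca2)**2 - 4.0 * cs2 * cax2)))

  end function fastspeed_x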
INTERPOLATION
- add subroutine magtocen() to interpolate the cell-centered magnetic
field
EVOLUTION
- add evolution of the magnetic field components in the evolve_euler()
and evolve_rk2() time integration subroutines
- also call the subroutine magtocen() in the right places
BOUNDARY CONDITIONS
- support the magnetic field boundary update only in the case of blocks
with the same level so far; later we need to include proper restriction
and prolongation for the magnetic field to keep its divergence equal to
zero
PROBLEMS, BLAST
- extend the blast problem to include the initial magnetic field
IO, HDF5
- write the magnetic field components to the HDF5 file
IO, HDF5
- rewrite subroutine write_data() using more efficient storage, especially
with compression; we store only the domain part of the blocks and their
primitive variables, which is selected by setting ftype='p'; storing
restart files with all parameters will be done later
BLOCKS
- remove the array c(:,:,:) from data_block which stored the values used
for refinement/derefinement estimation
FIXES
- the field neigh of the meta block structure must use the ndims, nsides,
and nfaces variables
BLOCK STRUCTURE
- use the pmeta and pdata names for pointers to meta and data blocks,
respectively; this makes the code easier to read
HOSTS
- turn on DEBUG
MESH STRUCTURES
- since blocks which are not leaves are not used to calculate the field
evolution, we don't need to keep their data blocks allocated; these
changes allocate the parent data block only for blocks which are
derefined, and deallocate the data block of a block selected for
refinement; in the case of many blocks and many refinement levels these
modifications save a substantial amount of memory
MESH STRUCTURE
- move the bounds of the block (xmin, xmax, ...) from the data to the meta
structure; this way we can deallocate data blocks without losing the
information about their bounds
MESH STRUCTURE
- subroutine mesh_update() performs the refinement/derefinement of blocks
using the new structures of meta and data blocks; it is implemented to
be MPI friendly as well
- update subroutine derefine_block() to use new structures of meta and
data blocks
- update subroutines restrict_block() and prolong_block() to use new
meta and data structures
BOUNDARY CONDITIONS
- using a list of info blocks, the implementation of boundary conditions
for neighbor blocks lying on different processes is successfully
completed; it is simpler and clearer than the previous implementation
BOUNDARY CONDITIONS
- blocks belonging to the same process can now exchange boundaries;
blocks with neighbors from other processes still need to be updated
BLOCK STRUCTURE
- a new structure block_info has been added which will be used to define
lists of blocks which have neighbors belonging to other processes; these
lists will later be used to exchange block data
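A minimal sketch of the new structure (only the type name block_info comes
from the entry above; the component names are assumptions):

  module block_info_sketch

    implicit none

    type block_info
      integer :: id        = -1   ! identifier of the local block
      integer :: neigh_cpu = -1   ! process owning the neighbor block
      integer :: neigh_id  = -1   ! identifier of the neighbor block
      integer :: side      =  0   ! which boundary region is exchanged
      type(block_info), pointer :: prev => null()   ! previous info block
      type(block_info), pointer :: next => null()   ! next info block
    end type block_info

  end module block_info_sketch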
MESH STRUCTURE
- with the new meta and data block structures the refinement of a block
works correctly now
- increase and decrease the number of blocks when they are allocated and
deallocated, respectively
MESH STRUCTURE
- add subroutines metablock_setleaf(), metablock_unsetleaf() to set and
unset the leaf flag of a meta block
- add subroutine metablock_setconfig() to set the config of a block
- add subroutine metablock_setlevel() to set the refinement level of a
block
MESH STRUCTURE
- add subroutine associate_blocks() to associate a pair of meta and
data blocks
- add subroutine datablock_setbounds() to set the geometry bounds for a
given block
MESH STRUCTURE
- add subroutines allocate_metablock() and allocate_datablock()
allocating new meta and data blocks in memory
- add subroutines append_metablock() and append_datablock() appending
the allocated block to the meta and data block lists
- create root meta and data blocks in domain_default()
- create two new pointers last_meta and last_data pointing to the last
blocks in the meta and data block lists
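A minimal sketch of appending a newly allocated meta block to the list
(the module name and the prev/next component names are assumptions;
list_meta and last_meta come from the entries above and below):

  subroutine append_metablock(pmeta)

    use blocks, only : block_meta, list_meta, last_meta   ! assumed module name

    implicit none

    type(block_meta), pointer :: pmeta

    if (.not. associated(list_meta)) then
      ! the list is empty, so the new block becomes its head
      list_meta => pmeta
    else
      ! link the new block after the current tail of the list
      last_meta%next => pmeta
      pmeta%prev     => last_meta
    end if

    ! the new block is always the new tail of the list
    last_meta => pmeta

  end subroutine append_metablock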
MESH STRUCTURE
- initialize meta and data block lists in init_blocks()
- deallocate and nullify meta and data block lists in clear_blocks()
- new subroutine deallocate_metablock() for meta block deallocation
- new subroutine deallocate_datablock() for data block deallocation
MESH STRUCTURE
- two types of structures are created:
block_meta - structure storing metadata of each block
block_data - structure storing geometry and variable information
- two lists corresponding to stored objects of block_meta and
block_data:
list_meta - the list of objects of block_meta type is shared and
synchronized between processors
list_data - the list of objects of block_data type is private to
each process
- corresponding pointers to both structures are created too
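A minimal sketch of these structures and their list pointers (the type and
list names come from the entry above; the component names are assumptions):

  module blocks_sketch

    implicit none

    type block_meta
      integer :: id = -1, cpu = 0, level = 1        ! metadata known to all processes
      logical :: leaf = .false.                     ! is this block a leaf?
      type(block_data), pointer :: data => null()   ! associated data block (if local)
      type(block_meta), pointer :: prev => null(), next => null()
    end type block_meta

    type block_data
      type(block_meta), pointer :: meta => null()   ! the associated meta block
      real, dimension(:,:,:,:), allocatable :: u    ! variables (assumed component)
      type(block_data), pointer :: prev => null(), next => null()
    end type block_data

    ! list_meta is shared and synchronized between processes,
    ! list_data is private to each process
    type(block_meta), pointer :: list_meta => null()
    type(block_data), pointer :: list_data => null()

  end module blocks_sketch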
The initial block mesh refinement seems to be optimized now. However,
more work in the MPI version is required, e.g. refining blocks on all
CPUs until the number of blocks exceeds some number dependent on the
number of CPUs, and from that point refining independently on each CPU.
The initial work on the mesh update has been done too, but the processes
do not exchange block information.
The boundary conditions finally work when using MPI with an arbitrary
number of processors. This is, however, a dirty hack which has to be done
in a better way later, since it may be very tricky to generalize the
current code to 3D.
The next time step should be reduced to the minimum value over all
processes. This has been added to the subroutine 'evolution' now.
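A minimal sketch of this reduction (the subroutine name and the variable
kinds are assumptions):

  subroutine reduce_timestep(dt)

    use mpi   ! assuming the MPI Fortran module is available

    implicit none

    real(kind=8), intent(inout) :: dt
    real(kind=8)                :: dtlocal
    integer                     :: ierr

    dtlocal = dt

    ! take the minimum of the new time step over all processes
    call mpi_allreduce(dtlocal, dt, 1, MPI_DOUBLE_PRECISION, MPI_MIN, &
                       MPI_COMM_WORLD, ierr)

  end subroutine reduce_timestep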
The block structure contains a new field %pos which specifies the
position of the child block in its parent.
The blocks of the initial mesh are now divided between all processes. The
procedure also updates the %cpu field of all neighbors according to the
number of the process they are running on.
The subroutine 'deallocate_block' takes care of the plist pointer now. It
also allows for the flexible removal of any block, even from the middle of
the list, by updating the %next and %prev fields of the previous and next
blocks, respectively.
In addition, we now store the %cpu field of each block in the HDF5 files
too.
The domain initialization is performed in problem.F90 now. This will
allow a user to initialize their own domain according to the defined
problem. Since the initialization is problem-dependent, we don't need the
subroutine allocate_blocks anymore.
We use the field 'neigh' instead of the field 'pneigh'. The field 'neigh'
is of the 'blockref' type, which simplifies referencing the neighbor
blocks when using MPI. Each reference contains two fields: cpu and id. In
the MPI version we cannot use the pointers stored in pneigh, but using
blockref structures we can refer to blocks belonging to other processors.
The new type 'blockref' is a reference to a block by two integer numbers:
the first points to the cpu on which the block resides, and the second one
is the block identifier.
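A minimal sketch of the type declaration (only the type name blockref and
its two fields, cpu and id, come from the description above):

  module blockref_sketch

    implicit none

    type blockref
      integer :: cpu = -1   ! rank of the process on which the block resides
      integer :: id  = -1   ! identifier of the block on that process
    end type blockref

  end module blockref_sketch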
Now we can control the levels of the refinement/derefinement criterion
from the config file. The bounds of the domain are stored in the HDF5
files from now on.
The generation of dx, dy, dz for each level didn't include the base grid,
i.e. they were 2 times too large. Now it is fixed. Also plenty of minor
corrections in different places.
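A minimal sketch of the corrected cell-size generation (the subroutine and
variable names are assumptions):

  subroutine generate_dx(xmin, xmax, ncells, maxlev, adx)

    implicit none

    real   , intent(in)  :: xmin, xmax    ! domain bounds in one direction
    integer, intent(in)  :: ncells        ! base grid resolution
    integer, intent(in)  :: maxlev        ! maximum refinement level
    real   , intent(out) :: adx(maxlev)   ! cell sizes for all levels

    integer :: l

    do l = 1, maxlev
      ! the base grid (level 1) must be included, so the effective
      ! resolution at level l is ncells * 2**(l - 1); previously the level
      ! factor was too small and the cell sizes came out 2 times too large
      adx(l) = (xmax - xmin) / (ncells * 2**(l - 1))
    end do

  end subroutine generate_dx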