- add a new subroutine set_datablock_dims() to set the dimensions of the
allocatable arrays in data blocks; this subroutine is called from
init_mesh(), which is the right place to initialize the BLOCK module;
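A minimal sketch of how this could look (the module and variable names
below are illustrative, not the actual ones):

    module block_dims
      implicit none
      ! dimensions used when allocating the arrays of a data block
      integer, save :: im = 1, jm = 1, km = 1, nv = 1
    contains
      ! called once from init_mesh() to set the sizes of the allocatable
      ! arrays of all data blocks created afterwards
      subroutine set_datablock_dims(ni, nj, nk, nvars)
        integer, intent(in) :: ni, nj, nk, nvars
        im = ni
        jm = nj
        km = nk
        nv = nvars
      end subroutine set_datablock_dims
    end module block_dims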
- remove dependency of blocks.o on variables.o;
- boundaries.o depends on timers.o;
- the CONSERVATIVE flag determines whether the scheme must be fully
conservative; this works only when the adaptive mesh is used; in that
case, instead of updating the variables on each block independently, we
first calculate a high-order integration of the fluxes, then we
synchronize them between blocks at different levels, and finally we
perform a one-step update of each block using the synchronized fluxes;
this might also be a first step toward implementing Galerkin methods;
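As an illustration of the final one-step update, a minimal 1D sketch
for a single variable (names are hypothetical; the synchronized face
fluxes are assumed to be given):

    subroutine update_conservative(n, dt, dx, f, u)
      implicit none
      integer, intent(in)    :: n
      real   , intent(in)    :: dt, dx
      real   , intent(in)    :: f(n+1)  ! synchronized fluxes at cell faces
      real   , intent(inout) :: u(n)    ! conserved variable at cell centers
      integer :: i
      ! conservative update: du/dt = -(F(i+1/2) - F(i-1/2)) / dx
      do i = 1, n
        u(i) = u(i) - dt / dx * (f(i+1) - f(i))
      end do
    end subroutine update_conservative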
- a new file 'mesh.log' is created with the following columns: step,
time, the number of leaves, the number of meta blocks, the coverage
efficiency, which is the number of leaves divided by the number of top
level blocks covering the whole domain, the AMR efficiency, which
shows the advantage of using the adaptive mesh (with boundaries taken
into account), and the block distribution over levels and processors;
- the AMR efficiency is the number of leaves multiplied by the number of
cells in one block (with boundaries included) and divided by the number
of cells at the effective resolution (with boundaries included); if this
parameter is smaller than 1.0 we should expect faster calculations
thanks to the adaptive mesh, while if it is larger than 1.0 using the
adaptive mesh only slows the calculations down;
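A rough 1D illustration of how this parameter could be computed (the
names and the exact treatment of the boundary cells are assumptions):

    ! ratio of the number of cells actually updated (all leaves, with
    ! boundary cells) to the number of cells of a uniform grid at the
    ! effective resolution (with boundary cells)
    real function amr_efficiency(nleaves, ncells, nghost, effres)
      implicit none
      integer, intent(in) :: nleaves  ! number of leaf blocks
      integer, intent(in) :: ncells   ! cells per block without boundaries
      integer, intent(in) :: nghost   ! boundary cells on each block side
      integer, intent(in) :: effres   ! effective resolution of the domain
      amr_efficiency = real(nleaves * (ncells + 2 * nghost)) &
                     / real(effres + 2 * nghost)
    end function amr_efficiency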
- the number of seeds and the seed values must be stored in the restart
file in order to restart a job properly and guarantee that the sequence
of generated random numbers is unchanged after the restart;
- add a new module INTEGRALS which handles initialization and
calculation of the conserved variables and energies;
- add make dependencies to makefile;
- call the initialization, storage and termination subroutines from the
driver;
- call subroutine evolve_forcing() before the update of all blocks;
this subroutine evolves the forcing source terms by an interval dt in
Fourier space; during the update the forcing Fourier coefficients are
then transformed to real space for each block separately;
- implement subroutine evolve_forcing(), which evolves the driving
components in Fourier space over one hydrodynamic timestep; the
integrated forcing Fourier components are stored in the module array
ftab; this complex array will be used to calculate the forcing in real
space for each block;
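A sketch of the per-block transform to real space, assuming ftab(1:3,:)
holds the integrated complex amplitudes at wavevectors k(1:3,:); the
routine name and interface are illustrative:

    ! accumulate the real-space forcing at point x(1:3) as the real part
    ! of the sum over modes of ftab(:,m) * exp(i k(:,m) . x)
    subroutine force_at_point(nmodes, k, ftab, x, f)
      implicit none
      integer, intent(in)  :: nmodes
      real   , intent(in)  :: k(3, nmodes), x(3)
      complex, intent(in)  :: ftab(3, nmodes)
      real   , intent(out) :: f(3)
      integer :: m
      real    :: ph
      f(:) = 0.0
      do m = 1, nmodes
        ph   = k(1,m) * x(1) + k(2,m) * x(2) + k(3,m) * x(3)
        f(:) = f(:) + real(ftab(:,m)) * cos(ph) - aimag(ftab(:,m)) * sin(ph)
      end do
    end subroutine force_at_point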
- add a new module FORCING to handle forcing source terms, e.g. for
turbulence driving;
- implement initial versions of init_forcing() and clear_forcing()
subroutines;
- add compilation flag FORCING and use it during compilation process in
makefile;
- add a new module RANDOM which handles the initialization and generation
of different types of random number distributions;
- include the module in the compilation process;
- initialize the random generator from driver;
- implement the monotonicity preserving (MP) family of reconstructions;
the implementation covers the 5th, 7th, and 9th order spatial MP
reconstructions;
- implement two new functions used by the MP reconstructions, minmod()
and median();
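These two helpers have the standard definitions used by MP schemes; a
minimal sketch (the module name is illustrative):

    module mp_functions
      implicit none
    contains
      ! returns zero if the arguments have opposite signs, otherwise the
      ! argument of smaller magnitude
      real function minmod(a, b)
        real, intent(in) :: a, b
        minmod = 0.5 * (sign(1.0, a) + sign(1.0, b)) * min(abs(a), abs(b))
      end function minmod
      ! median of three values expressed through minmod
      real function median(a, b, c)
        real, intent(in) :: a, b, c
        median = a + minmod(b - a, c - a)
      end function median
    end module mp_functions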
- pass the spatial increment to the reconstruction subroutine since
some interpolation methods require it; in addition, move obtaining
the spatial increment and its inverse to the subroutine update();
VARIABLES
- create a new module 'variables' which stores references to variable
indices; we are going to store different objects related to variables
in this module;
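A minimal sketch of what such a module could contain (the index values
below are illustrative only):

    module variables
      implicit none
      ! indices of variables in the arrays of conserved quantities
      integer, parameter :: idn = 1                    ! density
      integer, parameter :: imx = 2, imy = 3, imz = 4  ! momentum components
      integer, parameter :: ien = 5                    ! total energy
      integer, parameter :: nvars = 5                  ! number of variables
    end module variables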
IO
- the subroutine write_data() is now a wrapper subroutine which calls
the subroutine writing data in the supported file format; the file
format is chosen at compile time (see the sketch after this list);
- the subroutine write_data_h5() is a new subroutine taking care of the
initialization and storage of data in the HDF5 format; depending on the
file type it calls subroutines to create specific groups and store
the right data;
- new subroutines write_attributes_h5(), write_coordinates_h5(),
write_variables_h(), write_variables_full_h5(),
write_metablocks_h5(), write_datablocks_h5(), and a number of other
supporting subroutines to store all data in the proper format for job
restart, visualization, and debugging;
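A sketch of the wrapper dispatching at compile time (the preprocessor
flag name and the argument are assumptions):

    ! the wrapper only dispatches to the writer for the file format
    ! selected during compilation
    subroutine write_data(nfile)
      implicit none
      integer, intent(in) :: nfile   ! number of the snapshot to write
    #ifdef HDF5
      call write_data_h5(nfile)
    #endif /* HDF5 */
    end subroutine write_data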
MAKE
- add option to enable/disable HDF5 file compression;
SCHEME
- implement Flux-CT scheme for the staggered magnetic field
integration;
BLOCK STRUCTURE
- use more space-efficient storage of the variables, which means
storing only the staggered components of the magnetic field; the
cell-centered components are calculated only when necessary;
EVOLUTION
- remove loops in the field updates; operations are performed on whole
arrays;
BOUNDARY CONDITIONS
- remove loops in bnd_copy; operations are now performed on the whole
array;
INTERPOLATION
- subroutine magtocen() has been rewritten to avoid problems with
array allocation; the array of all variables is now passed as an
argument, and the subroutine internally uses the indices of the
face-centered and cell-centered magnetic field components;
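A sketch of this interpolation for one component along one direction
(the indexing convention and names are illustrative):

    ! average the face-centered Bx stored at index ibx onto the cell
    ! centers stored at index ibcx; u is the array of all variables for
    ! one block, u(variable, cell); faces i and i+1 bound cell i here
    subroutine magtocen_x(nv, n, ibx, ibcx, u)
      implicit none
      integer, intent(in)    :: nv, n, ibx, ibcx
      real   , intent(inout) :: u(nv, n)
      integer :: i
      do i = 1, n - 1
        u(ibcx, i) = 0.5 * (u(ibx, i) + u(ibx, i + 1))
      end do
    end subroutine magtocen_x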
MAKE
- add flag defining Flux-CT scheme;
PROBLEM
- use predefined array variables instead of allocated ones;
VARIABLES
- add indices for the magnetic field components, both face-centered and
cell-centered
SOLVER, MHD
- add support for the magnetohydrodynamic (MHD) equations to the
subroutines cons2prim(), prim2cons()
- add MHD flux and the fastest speed calculation in the subroutine
fluxspeed()
- include the fast magnetosonic speed in the calculation of the maximum
speed in the system, required for the estimation of the new time step
(see the sketch after this list)
- extend the HLL solver in subroutine hll() to support MHD
- calculate the magnetic field update according to a CT scheme in the
subroutine update()
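A sketch of the fast magnetosonic speed entering that estimate (the
interface is illustrative; the formula is the standard one):

    ! fast magnetosonic speed along x from the primitive variables
    real function cfast_x(dn, pr, bx, by, bz, gamma)
      implicit none
      real, intent(in) :: dn, pr, bx, by, bz, gamma
      real :: a2, ca2, cax2
      a2      = gamma * pr / dn                      ! sound speed squared
      ca2     = (bx * bx + by * by + bz * bz) / dn   ! Alfven speed squared
      cax2    = bx * bx / dn                         ! Alfven speed along x, squared
      cfast_x = sqrt(0.5 * (a2 + ca2 + sqrt((a2 + ca2)**2 - 4.0 * a2 * cax2)))
    end function cfast_x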
INTERPOLATION
- add subroutine magtocen() to interpolate the cell-centered magnetic field
EVOLUTION
- add evolution of the magnetic field components in the evolve_euler()
and evolve_rk2() time integration subroutines
- also call the subroutine magtocen() in the right places
BOUNDARY CONDITIONS
- support for the magnetic field boundary update only in the case of
blocks at the same level so far; later we need to include proper
restriction and prolongation for the magnetic field to keep its
divergence equal to zero
PROBLEMS, BLAST
- extend the blast problem to include the initial magnetic field
IO, HDF5
- write the magnetic field components to the HDF5 files
PROBLEMS
- implement the blast problem in a similar way to the one implemented
in the GODUNOV code
- generate an initial domain of 2x3 blocks at the lowest level; this
should be generalized to arbitrary dimensions of the block layout
MESH STRUCTURE
- add a new variable rdims(1:3) specifying the dimensions of the initial
block structure at the lowest level
- improve the calculation of the effective resolution and the spatial
increments
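A sketch of the effective resolution implied by this structure (assuming
ncells cells per block per direction and maxlev refinement levels; the
names are illustrative):

    ! effective resolution in each direction: rdims top-level blocks per
    ! direction, ncells cells per block, refined up to maxlev levels
    subroutine effective_resolution(rdims, ncells, maxlev, effres)
      implicit none
      integer, intent(in)  :: rdims(3), ncells, maxlev
      integer, intent(out) :: effres(3)
      effres(:) = rdims(:) * ncells * 2**(maxlev - 1)
    end subroutine effective_resolution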
The initial block mesh refinement seems to be optimized now. However,
more work on the MPI version is required, e.g. refining blocks on all
CPUs until the number of blocks exceeds some threshold dependent on the
number of CPUs, and from that point on refining independently on each
CPU.
The initial work on the mesh update has been done too, but the processes
do not exchange block information.
The boundary conditions finally work when using MPI with an arbitrary
number of processors. This is, however, a dirty hack which has to be
done in a better way later, since as it stands it may be very tricky to
generalize the code to 3D.
The next time step should be reduced to the minimum value over all
processes. This has now been added in the subroutine 'evolution'.
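A sketch of such a reduction using standard MPI (the variable names are
illustrative):

    ! reduce the locally estimated time step to its minimum over all
    ! processes
    subroutine reduce_timestep(dt)
      use mpi
      implicit none
      real, intent(inout) :: dt
      real    :: dtlocal
      integer :: ierr
      dtlocal = dt
      call MPI_Allreduce(dtlocal, dt, 1, MPI_REAL, MPI_MIN, &
                         MPI_COMM_WORLD, ierr)
    end subroutine reduce_timestep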
The block structure contains a new field %pos which specifies the
position of the child block in its parent.
There are several conditions for the boundaries. If the neighbor block
is on the same processor, proceed as usual. If it lies on a different
processor, prepare it for exchange. After exchanging blocks we need to
perform a similar update as if the neighbor lay on the same processor,
i.e. we need to perform copying, restriction, or prolongation.
The blocks of the initial mesh are now divided between all processes.
The procedure also updates the %cpu field of all neighbors according to
the number of the process they are running on.
The subroutine 'deallocate_block' takes care of the plist pointer now.
It also allows for the flexible removal of any block, even from the
middle of the list, by updating the %next and %prev fields of the
previous and next blocks, respectively.
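A sketch of the pointer bookkeeping involved, with a reduced,
illustrative block type (the actual type and subroutine names differ):

    module block_list
      implicit none
      type :: block_meta
        type(block_meta), pointer :: prev => null(), next => null()
      end type block_meta
    contains
      ! unlink a block from anywhere in the doubly linked list by updating
      ! the %next and %prev fields of its neighbors; plist is the list head
      subroutine unlink_block(pblock, plist)
        type(block_meta), pointer :: pblock, plist
        if (associated(pblock%prev)) pblock%prev%next => pblock%next
        if (associated(pblock%next)) pblock%next%prev => pblock%prev
        if (associated(plist, pblock)) plist => pblock%next
      end subroutine unlink_block
    end module block_list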
In addition, we now store the %cpu field of each block in the HDF5 files
too.
A new 'mpitools' module has been added. This module contains subroutines
to initialize, deinitialize and handle MPI communication. This initial
version can now initialize and deinitialize parallelization. It does not
support full parallelization yet.
Now, we can define any shape for a given problem inside the domain,
which is not updated during the evolution. This allows for using
sources of any kind in the problem studies, such as the colliding winds
in binary stars.
It looks like the refining and derefining work more or less, at least
without interrupting the execution. Nevertheless, there are still some
artifacts, like the lack of symmetry after some time or inefficient
derefining of the mesh. This could be caused by the solver, however.
The refinement criterion is computed using pressure now.
Now all blocks update their boundary values from the neighboring
blocks. Blocks of the same level copy their values, but blocks of
different levels shrink or expand the boundary values from the
neighbor, depending on the level change. In order to complete this,
I've implemented the subroutines expand and interpolate in the
interpolation module.
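A 1D sketch of the two operations (the actual subroutines may use a
higher-order interpolation; the zeroth-order prolongation below is only
illustrative):

    ! restriction: average pairs of fine cells into one coarse cell
    subroutine shrink(nc, fine, coarse)
      implicit none
      integer, intent(in)  :: nc
      real   , intent(in)  :: fine(2 * nc)
      real   , intent(out) :: coarse(nc)
      integer :: i
      do i = 1, nc
        coarse(i) = 0.5 * (fine(2 * i - 1) + fine(2 * i))
      end do
    end subroutine shrink

    ! prolongation: copy each coarse value into two fine cells
    subroutine expand(nc, coarse, fine)
      implicit none
      integer, intent(in)  :: nc
      real   , intent(in)  :: coarse(nc)
      real   , intent(out) :: fine(2 * nc)
      integer :: i
      do i = 1, nc
        fine(2 * i - 1) = coarse(i)
        fine(2 * i)     = coarse(i)
      end do
    end subroutine expand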
By default the boundary conditions are set to periodic. So far only
this type of condition is supported, but other types should be easy to
implement.