- pass the spatial increment to the reconstruction subroutine since
some interpolation methods require it; in addition, move obtaining
the spatial interval and its inverse to the subroutine update();
VARIABLES
- create a new module 'variables' which stores references to variable
indices; different objects related to the variables will be stored in
this module (a minimal sketch follows below);
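The sketch below shows what such an index module can look like; the
index names here are assumptions, not necessarily the ones used in the
code:

  module variables

    implicit none

    ! indices of the conservative/primitive variable components
    ! (hypothetical names; the actual set depends on the equations solved)
    integer, parameter :: idn = 1    ! density
    integer, parameter :: imx = 2    ! x-momentum / x-velocity
    integer, parameter :: imy = 3    ! y-momentum / y-velocity
    integer, parameter :: imz = 4    ! z-momentum / z-velocity
    integer, parameter :: ien = 5    ! energy / pressure

    integer, parameter :: nvars = 5  ! total number of stored variables

  end module variables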
IO
- the subroutine write_data() is now a wrapper subroutine which calls
the subroutine writing in the selected file format; the file format is
chosen at compile time;
- the subroutine write_data_h5() is a new subroutine taking care of
the initialization and data storage in the HDF5 format; depending on
the file type it calls subroutines to create specific groups and store
the right data;
- new subroutines write_attributes_h5(), write_coordinates_h5(),
write_variables_h5(), write_variables_full_h5(),
write_metablocks_h5(), write_datablocks_h5(), and a number of other
supporting subroutines to store all data in the proper format for job
restart, visualization, and debugging (a minimal sketch of the HDF5
writing path follows this list);
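As referenced above, a minimal sketch of such an HDF5 writing routine;
the subroutine, file, and dataset names are placeholders, and the real
write_*_h5() routines also create groups and write attributes:

  subroutine write_array_h5(fname, dname, a)

    use hdf5

    implicit none

    character(len=*), intent(in) :: fname, dname
    real(kind=8), dimension(:,:,:), intent(in) :: a

    integer(hid_t)                 :: fid, sid, did
    integer(hsize_t), dimension(3) :: dims
    integer                        :: err

    dims = shape(a)

    call h5open_f(err)                                   ! initialize the HDF5 library
    call h5fcreate_f(fname, H5F_ACC_TRUNC_F, fid, err)   ! create (truncate) the file
    call h5screate_simple_f(3, dims, sid, err)           ! dataspace matching the array
    call h5dcreate_f(fid, dname, H5T_NATIVE_DOUBLE, sid, did, err)
    call h5dwrite_f(did, H5T_NATIVE_DOUBLE, a, dims, err)
    call h5dclose_f(did, err)
    call h5sclose_f(sid, err)
    call h5fclose_f(fid, err)
    call h5close_f(err)

  end subroutine write_array_h5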
MAKE
- add option to enable/disable HDF5 file compression;
SCHEME
- implement Flux-CT scheme for the staggered magnetic field
integration;
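A rough 2D sketch of the Flux-CT idea: the magnetic-field fluxes
returned by the Riemann solver are averaged to corner electromotive
forces, which then update the staggered components. The array names,
index offsets, and sign conventions below are assumptions, not the
actual code:

  subroutine advance_ct(fx, fy, bx, by, dt, dx, dy)

    ! Flux-CT sketch (2D): fx holds the By-flux at x-faces (= -Ez there),
    ! fy holds the Bx-flux at y-faces (= +Ez there); index alignment is
    ! schematic only

    implicit none

    real(kind=8), dimension(:,:), intent(in)    :: fx, fy
    real(kind=8), dimension(:,:), intent(inout) :: bx, by
    real(kind=8), intent(in)                    :: dt, dx, dy

    real(kind=8), dimension(size(bx,1),size(bx,2)) :: ez
    integer :: i, j

    ! average the face fluxes to the corner electromotive force Ez
    do j = 1, size(bx,2) - 1
      do i = 1, size(bx,1) - 1
        ez(i,j) = 0.25d0 * ( - fx(i,j) - fx(i,j+1) + fy(i,j) + fy(i+1,j) )
      end do
    end do

    ! update the staggered components from the curl of the corner EMF
    do j = 2, size(bx,2) - 1
      do i = 2, size(bx,1) - 1
        bx(i,j) = bx(i,j) - dt / dy * ( ez(i,j) - ez(i,j-1) )
        by(i,j) = by(i,j) + dt / dx * ( ez(i,j) - ez(i-1,j) )
      end do
    end do

  end subroutine advance_ct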
BLOCK STRUCTURE
- use more space-efficient storage of the variables, i.e. store only
the staggered components of the magnetic field; the cell-centered
components are calculated only when necessary;
EVOLUTION
- remove loops in the field updates; the operations are now performed
on whole arrays;
BOUNDARY CONDITIONS
- remove loops in bnd_copy(); the operations are now performed on the
whole array;
INTERPOLATION
- the subroutine magtocen() has been rewritten to avoid problems with
the array allocation; the array of all variables is now passed as an
argument; the subroutine uses the indices of the face-centered and
cell-centered magnetic field components internally (a sketch follows
below);
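A minimal sketch of the face-to-cell averaging performed by magtocen();
the index names and their positions are assumptions, and in the code
they would come from the 'variables' module:

  subroutine magtocen_sketch(u)

    implicit none

    real(kind=8), dimension(:,:,:,:), intent(inout) :: u

    ! assumed positions of the face-centered (ibx, iby) and cell-centered
    ! (icx, icy) magnetic field components
    integer, parameter :: ibx = 6, iby = 7, icx = 8, icy = 9

    integer :: im, jm

    im = size(u, 2)
    jm = size(u, 3)

    ! cell-centered components as simple averages of the two bounding faces
    u(icx,1:im-1,:,:) = 0.5d0 * ( u(ibx,1:im-1,:,:) + u(ibx,2:im,:,:) )
    u(icy,:,1:jm-1,:) = 0.5d0 * ( u(iby,:,1:jm-1,:) + u(iby,:,2:jm,:) )

  end subroutine magtocen_sketch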
MAKE
- add flag defining Flux-CT scheme;
PROBLEM
- use predefined array variables instead of allocatable ones;
VARIABLES
- add indices for the magnetic field components, both face- and
cell-centered;
SOLVER, MHD
- add support for the magnetohydrodynamic (MHD) equations to the
subroutines cons2prim(), prim2cons()
- add MHD flux and the fastest speed calculation in the subroutine
fluxspeed()
- include the magnetosonic speed in the calculation of the maximum
speed in the system, required for the estimation of the new time step
(see the sketch after this list)
- extend the HLL solver in subroutine hll() to support MHD
- calculate the magnetic field update according to a CT scheme in the
subroutine update()
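A sketch of the fast magnetosonic speed used for the maximum-speed
estimate; the formula is the standard one, while the function name and
the hard-coded adiabatic index are assumptions:

  function cfast(dn, pr, bx, by, bz) result(cf)

    ! fast magnetosonic speed along the x-direction

    implicit none

    real(kind=8), intent(in) :: dn, pr, bx, by, bz
    real(kind=8)             :: cf

    real(kind=8), parameter :: gamma = 5.0d0 / 3.0d0   ! placeholder adiabatic index

    real(kind=8) :: a2, ca2, cax2

    a2   = gamma * pr / dn                   ! sound speed squared
    ca2  = ( bx*bx + by*by + bz*bz ) / dn    ! total Alfven speed squared
    cax2 = bx*bx / dn                        ! Alfven speed squared along x

    cf = sqrt( 0.5d0 * ( a2 + ca2                                          &
               + sqrt( max( 0.0d0, (a2 + ca2)**2 - 4.0d0 * a2 * cax2 ) ) ) )

  end function cfast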
INTERPOLATION
- add subroutine magtocen() to interpolate cell centered magnetic field
EVOLUTION
- add evolution of the magnetic field components in the evolve_euler()
and evolve_rk2() time integration subroutines
- also call the subroutine magtocen() in the right places
BOUNDARY CONDITIONS
- support for the magnetic field boundary update is implemented only in
the case of blocks with the same level so far; later we need to include
proper restriction and prolongation for the magnetic field to keep its
divergence equal to zero
PROBLEMS, BLAST
- extend the blast problem to include the initial magnetic field
IO, HDF5
- write the magnetic field components to the HDF5 file
PROBLEMS
- implement the blast problem in a similar way to the one implemented
in the GODUNOV code
- generate the initial domain of 2x3 blocks at the lowest level; this
should be generalized to an arbitrary block layout
MESH STRUCTURE
- add a new variable rdims(1:3) specifying the dimensions of the
initial block structure at the lowest level
- improve the calculation of the effective resolution and the spatial
increments (a short sketch follows below)
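A short sketch of how the effective resolution and the spatial
increments can be obtained from rdims; ncells, maxlev, xmin, and xmax
are assumed names and the example values are arbitrary:

  program effective_resolution

    implicit none

    integer, dimension(3) :: rdims  = (/ 2, 3, 1 /)   ! initial block layout (example)
    integer               :: ncells = 8               ! cells per block per direction (assumed)
    integer               :: maxlev = 4               ! maximum refinement level (assumed)
    integer               :: ieff
    real(kind=8)          :: xmin = 0.0d0, xmax = 1.0d0, dx, dxi

    ieff = rdims(1) * ncells * 2**(maxlev - 1)   ! effective resolution along x
    dx   = (xmax - xmin) / ieff                  ! spatial increment at the deepest level
    dxi  = 1.0d0 / dx                            ! its inverse, used by the solver

    print *, 'effective resolution =', ieff, '  dx =', dx

  end program effective_resolution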
The initial block mesh refinement seems to be optimized now. However,
more work on the MPI version is required, e.g. refine blocks on all CPUs
until the number of blocks exceeds some number dependent on the number
of CPUs; from that point on, refine independently on each CPU.
The initial work on the mesh update has been done too, but the processes
do not exchange block information yet.
The boundary conditions finally work when using MPI with an arbitrary
number of processors. This is, however, a dirty hack and has to be done
in a better way later, since in its current form it may be very tricky
to generalize the code to 3D.
The next time step should be reduced to the minimum value over all
processes. This has now been added in the subroutine 'evolution'.
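A minimal sketch of that reduction, assuming double precision for 'dt':

  subroutine reduce_timestep(dt)

    use mpi

    implicit none

    real(kind=8), intent(inout) :: dt
    real(kind=8)                :: dtmin
    integer                     :: ierr

    ! take the minimum of the locally estimated time steps over all processes
    call MPI_Allreduce(dt, dtmin, 1, MPI_DOUBLE_PRECISION, MPI_MIN,        &
                       MPI_COMM_WORLD, ierr)

    dt = dtmin

  end subroutine reduce_timestep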
The block structure contains a new field %pos which specifies the
position of the child block in its parent.
There are several conditions for the boundaries. If the neighbor block
is on the same processor, proceed as usual. If it lies on a different
processor, prepare it for exchange. After exchanging blocks we need to
perform a similar update as if the neighbor lay on the same processor,
i.e. we need to perform copying, restriction, or prolongation.
The blocks of the initial mesh are now divided between all processes.
The procedure also updates the %cpu field of all neighbors according to
the number of the process they are assigned to.
The subroutine 'deallocate_block' takes care of the plist pointer now.
It also allows for the flexible removal of any block, even from the
middle of the list, by updating the %next and %prev fields of the
previous and next blocks, respectively.
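A sketch of the unlinking step described above; 'blocks' and
'block_data' are assumed module and type names, only the %next/%prev
fields follow the description, and the update of the plist head pointer
is omitted here:

  subroutine remove_block_sketch(pb)

    use blocks, only : block_data   ! assumed module and type names

    implicit none

    type(block_data), pointer :: pb

    ! unlink the block from the doubly-linked list before freeing it
    if (associated(pb%prev)) pb%prev%next => pb%next
    if (associated(pb%next)) pb%next%prev => pb%prev

    deallocate(pb)
    nullify(pb)

  end subroutine remove_block_sketch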
In addition, we now store the %cpu field of each block in the HDF5 files
too.
A new 'mpitools' module has been added. This module contains subroutines
to initialize, deinitialize and handle MPI communication. This initial
version can now initialize and deinitialize parallelization. It does not
support full parallelization yet.
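A minimal sketch of such an initialization/finalization pair; the
module and variable names are assumptions:

  module mpitools_sketch

    use mpi

    implicit none

    integer, save :: ncpus = 1, ncpu = 0   ! number of processes and this process' rank

  contains

    subroutine init_mpi(ierr)
      integer, intent(out) :: ierr
      call MPI_Init(ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, ncpus, ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, ncpu, ierr)
    end subroutine init_mpi

    subroutine finalize_mpi(ierr)
      integer, intent(out) :: ierr
      call MPI_Finalize(ierr)
    end subroutine finalize_mpi

  end module mpitools_sketch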
Now, we can define any shape for a given problem inside the domain
which is not updated during the evolution. This allows for using
sources of any kind in problem studies, such as the colliding winds in
binary stars.
It looks like the refining and derefining work more or less, at least
without interrupting the execution. Nevertheless, there are still some
artifacts, like the lack of symmetry after some time or inefficient
derefinement of the mesh. This could be caused by the solver, however.
The refinement criterion is computed using pressure now.
Now all blocks update their boundary values from the neighboring
blocks. Blocks of the same level copy their values, but blocks of
different levels shrink or expand the boundary values from the
neighbor, depending on the level change. In order to complete this I've
implemented the subroutines expand and interpolate in the interpolation
module.
By default the boundary conditions are set to periodic. So far only
this type of condition is supported, but the other types should be easy
to implement.
The subroutine 'boundary' sweeps over all leaf blocks. For each block
it sweeps over its neighbors and updates the boundaries. This is still
an initial version; it supports only neighboring blocks at the same
refinement level.
This new module contains subroutines used for different kinds of
interpolation. So far only one subroutine is implemented, used for the
reconstruction of the left and right states from the cell-centered
values. This interpolation has the TVD property.
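A sketch of one possible TVD reconstruction of this kind, using the
minmod limiter; the actual subroutine and limiter in the interpolation
module may differ:

  subroutine reconstruct_sketch(q, ql, qr)

    ! minmod-limited linear reconstruction of the left/right states at
    ! the interface i+1/2 from the cell-centered values q

    implicit none

    real(kind=8), dimension(:), intent(in)  :: q        ! cell-centered values
    real(kind=8), dimension(:), intent(out) :: ql, qr   ! states at the interfaces i+1/2

    real(kind=8) :: dql, dqr, dq
    integer      :: i, n

    n = size(q)

    ql(:) = q(:)
    qr(:) = q(:)

    do i = 2, n - 1
      dql = q(i)   - q(i-1)
      dqr = q(i+1) - q(i)
      dq  = 0.5d0 * (sign(1.0d0, dql) + sign(1.0d0, dqr)) * min(abs(dql), abs(dqr))

      ql(i)   = q(i) + 0.5d0 * dq   ! left state at the interface i+1/2
      qr(i-1) = q(i) - 0.5d0 * dq   ! right state at the interface i-1/2
    end do

  end subroutine reconstruct_sketch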
We compute the dFdx contribution along each direction and update the
total dU.
Apart from that, I've added the generation of coordinate variables,
like dx, dxi, etc., in the mesh module. The next step is to implement
the HLL solver.
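A schematic of the dU update from the flux differences along one
direction; the array layout and names are assumptions:

  subroutine update_du_sketch(f, dxi, du)

    ! accumulate the x-direction flux-difference contribution to dU;
    ! f(:,i) is assumed to be the numerical flux at the interface i+1/2

    implicit none

    real(kind=8), dimension(:,:), intent(in)    :: f
    real(kind=8), intent(in)                    :: dxi
    real(kind=8), dimension(:,:), intent(inout) :: du

    integer :: i

    do i = 2, size(du, 2) - 1
      du(:,i) = du(:,i) - dxi * ( f(:,i) - f(:,i-1) )
    end do

  end subroutine update_du_sketch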
This new module contains subroutines to calculate state vector update
using an approximate Riemann solver. The initial version contains only
a draft subroutine calculating the state vector increment dU.
In addition, I've done some fixes to the problem initialization, and I
defined new variables igrids, jgrids, kgrids, which specify the
dimensions of the block.
Now the source file can use some of the values defined in make files,
like NDIMS. The make files, i.e. makefile, make.default, and host files
are simpler and should be easier to manage.
A new module for the time integration has been added. This module
contains a set of subroutines to perform one time-integration step for
each leaf block using the 2nd-order Runge-Kutta method. More methods
can be added later. Time 't', timestep 'dt', and iteration 'n' have
been moved to this module as well.
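A sketch of one such 2nd-order Runge-Kutta (Heun) step for a single
block; 'fluxdu' stands for whatever routine returns the increment dU
and is an assumed name:

  subroutine evolve_rk2_sketch(u, dt, fluxdu)

    implicit none

    real(kind=8), dimension(:,:,:,:), intent(inout) :: u
    real(kind=8), intent(in)                        :: dt

    interface
      function fluxdu(u) result(du)
        real(kind=8), dimension(:,:,:,:), intent(in) :: u
        real(kind=8), dimension(size(u,1),size(u,2),size(u,3),size(u,4)) :: du
      end function fluxdu
    end interface

    real(kind=8), dimension(size(u,1),size(u,2),size(u,3),size(u,4)) :: u1

    u1 = u + dt * fluxdu(u)                    ! predictor step
    u  = 0.5d0 * ( u + u1 + dt * fluxdu(u1) )  ! corrector (Heun average)

  end subroutine evolve_rk2_sketch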
A new module called 'timer' has been added. The purpose of this module
is to handle the execution timers of the program or its subtasks, like
initialization, data writing, evolution, etc.
The final summary of the times spent on different tasks has been added
as well.
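A minimal sketch of a single timer based on the system_clock intrinsic;
the actual 'timer' module keeps several named timers:

  module timer_sketch

    implicit none

    integer, save :: tstart = 0, trate = 1

  contains

    subroutine start_timer()
      call system_clock(tstart, trate)   ! remember the starting count and rate
    end subroutine start_timer

    function stop_timer() result(seconds)
      real(kind=8) :: seconds
      integer      :: tstop
      call system_clock(tstop)
      seconds = real(tstop - tstart, kind=8) / real(trate, kind=8)
    end function stop_timer

  end module timer_sketch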
I've implemented storing some of the block attributes in HDF5 files. I
shall complete this by writing down neighbors and children. Then I shall
implement storing data arrays.
Initial implementation of the subroutine 'write_data' to store all
data. This subroutine will initially support only the HDF5 file format.
The block structure of the file has not been decided yet.
Implemented the generation of the initial blocks in N-configuration
with proper attaching to the list, pointer, neighbor, and bound
initialization, as well as the initial problem setup. Two new files
have been added: mesh.F90 - handling the adaptive mesh structure, and
problem.F90 - handling the problem initialization.
The parameters iblocks, jblocks, and kblocks specify the number of
initial blocks along each direction. This creates the base domain for
further refinement.
The Config module takes care of the configuration file 'config.in'. It
reads the file line by line, parsing each line and extracting the name
and value of each parameter. Then it substitutes the parameter with the
new value.
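A sketch of the line-parsing step; the actual Config module also skips
comments and stores and substitutes the parameters:

  subroutine parse_line_sketch(line, name, value)

    ! split one 'config.in' line of the form "name = value" into its parts

    implicit none

    character(len=*), intent(in)  :: line
    character(len=*), intent(out) :: name, value

    integer :: ieq

    name  = ''
    value = ''

    ieq = index(line, '=')
    if (ieq > 0) then
      name  = adjustl(line(:ieq-1))
      value = adjustl(line(ieq+1:))
    end if

  end subroutine parse_line_sketch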
Error module handles error, warning and informative messages. So far it
only prints information on the screen.
There are some improvements to comment formatting as well.
The block structure has been implemented. Blocks contain pointers to
the next and previous blocks. This easily allows for creating block
lists. Two functions are implemented, one for creating the initial
structure of blocks with lists, and the second for destroying all
blocks from the list (a sketch follows below).
The file 'blocks.F90' has been added to the makefile too.
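A minimal sketch of the block list described above; the module, type,
and field names other than %next/%prev are assumptions:

  module blocks_sketch

    implicit none

    type block_data
      type(block_data), pointer :: next => null(), prev => null()
      real(kind=8), dimension(:,:,:,:), pointer :: u => null()   ! block data (assumed)
    end type block_data

    type(block_data), pointer :: plist => null()   ! head of the block list

  contains

    subroutine append_block(pb)
      type(block_data), pointer :: pb
      allocate(pb)                                 ! create a new block
      pb%next => plist                             ! link it at the head of the list
      if (associated(plist)) plist%prev => pb
      plist => pb
    end subroutine append_block

    subroutine destroy_blocks()
      type(block_data), pointer :: pb
      do while (associated(plist))                 ! walk the list and free every block
        pb    => plist
        plist => plist%next
        if (associated(pb%u)) deallocate(pb%u)
        deallocate(pb)
      end do
    end subroutine destroy_blocks

  end module blocks_sketch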