- the new estimator uses the remaining physical time, the time step,
  and the execution time of the previous step to estimate the remaining
  execution time; this estimate varies much more during the initial
  steps and for short runs, but it may better estimate the remaining
  time for well-developed models, where the time step and the number of
  blocks do not change much (see the sketch below);
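
  A minimal sketch of the estimate, with illustrative names (dtexec is
  the wall-clock time of the previous step):

      ! the number of remaining steps (remaining physical time over
      ! the current time step) times the previous step execution time
      real function remaining_walltime(t, tmax, dt, dtexec)
        implicit none
        real, intent(in) :: t       ! current physical time
        real, intent(in) :: tmax    ! final physical time
        real, intent(in) :: dt      ! current time step
        real, intent(in) :: dtexec  ! wall-clock time of the last step
        remaining_walltime = max(tmax - t, 0.0) / dt * dtexec
      end function remaining_walltime
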
- the problem manifested itself in such a way that after a restart the
  number of leafs was set to zero; this happened because, while
  recreating the meta block structure, we set nleafs directly instead
  of using metablock_set_leaf(); after switching to this subroutine we
  no longer need to set nleafs at all, and the restart continues
  properly (see the sketch below);
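
  A sketch of the fix; the derived type and the body of
  metablock_set_leaf() are only illustrative, the point is that the
  subroutine keeps the counter consistent itself:

      module block_sketch
        implicit none
        type :: block_meta
          logical :: leaf = .false.
        end type block_meta
        integer, save :: nleafs = 0
      contains
        ! marking a block as a leaf through the subroutine updates the
        ! module counter; setting pmeta%leaf directly (the old code
        ! path) left nleafs untouched during a restart
        subroutine metablock_set_leaf(pmeta)
          type(block_meta), intent(inout) :: pmeta
          if (.not. pmeta%leaf) then
            pmeta%leaf = .true.
            nleafs     = nleafs + 1
          end if
        end subroutine metablock_set_leaf
      end module block_sketch
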
- reorganize the order of subroutines and separate them into groups for
  meta and data blocks, refinement, and private functions and
  subroutines;
- change the copyright header; update it to the new email address;
- make all module pointers, variables, and subroutines private, and
  expose only a subset of them as public;
- improve the description of block structures;
- add several functions returning module variables, such as the number
  of meta and data blocks, the number of leafs, the last id, etc. (see
  the sketch below);
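
  A sketch of such accessors, assuming simple private counters in the
  BLOCK module (names and bodies are illustrative):

      module block_counters
        implicit none
        private
        integer, save :: mblocks = 0, dblocks = 0
        integer, save :: nleafs  = 0, last_id = 0
        public :: get_mblocks, get_dblocks, get_nleafs, get_last_id
      contains
        integer function get_mblocks()   ! the number of meta blocks
          get_mblocks = mblocks
        end function get_mblocks
        integer function get_dblocks()   ! the number of data blocks
          get_dblocks = dblocks
        end function get_dblocks
        integer function get_nleafs()    ! the number of leafs
          get_nleafs = nleafs
        end function get_nleafs
        integer function get_last_id()   ! the last id given to a block
          get_last_id = last_id
        end function get_last_id
      end module block_counters
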
- update several subroutines from other modules which required some of
the BLOCK module variables;
- the level resolution array 'res' has been moved to the mesh module,
  where it belongs; due to this it must be passed as an argument to the
  subroutines in the BLOCK and PROBLEM modules which use it;
- the level resolution array stores block resolutions in each
  direction, so it is now a two-dimensional array, with the dimensions
  corresponding to the number of levels and the number of dimensions
  (see the sketch below);
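
  A sketch of the new shape, assuming a factor-of-two refinement and
  illustrative names (toplev levels, base resolution n):

      program res_sketch
        implicit none
        integer, parameter :: ndims = 2, toplev = 4
        integer, dimension(ndims) :: n = (/ 8, 8 /)  ! base resolution
        integer, dimension(:,:), allocatable :: res
        integer :: l
        ! res(l,d) holds the resolution at level l in direction d;
        ! a factor-of-two refinement doubles it with each level
        allocate(res(toplev, ndims))
        do l = 1, toplev
          res(l,:) = n(:) * 2**(l - 1)
        end do
        print *, res(toplev,:)  ! the effective top level resolution
        deallocate(res)
      end program res_sketch
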
- update subroutine write_coordinates_h5() to store the level
resolution array properly;
- add a new subroutine set_datablock_dims() to set the dimensions of
  the allocatable arrays in data blocks; this subroutine is called from
  init_mesh(), which is the right place to initialize the BLOCK module
  (see the sketch below);
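
  A sketch of the subroutine, assuming the dimensions are stored as
  private module variables used later when allocating the data block
  arrays (the argument list is illustrative):

      module datablock_dims
        implicit none
        private
        ! dimensions used when allocating arrays in new data blocks
        integer, save :: nvars = 1, ni = 1, nj = 1, nk = 1
        public :: set_datablock_dims
      contains
        ! called once from init_mesh(), before any data block exists
        subroutine set_datablock_dims(nv, nx, ny, nz)
          integer, intent(in) :: nv, nx, ny, nz
          nvars = nv
          ni    = nx
          nj    = ny
          nk    = nz
        end subroutine set_datablock_dims
      end module datablock_dims
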
- remove dependency of blocks.o on variables.o;
- boundaries.o depends on timers.o;
- now the file counters for data and restart files are separated and
moved to the IO module;
- the number of arguments to the IO subroutines is reduced to the
necessary minimum;
- the maximum speed in the system, 'cmax', is used not only to
  calculate the time step, but also to propagate and diffuse the
  divergence of B in the MHD case; therefore we need to store it in the
  attributes of the restart files, and then restore it in order to keep
  the problem evolution consistent;
- if we use forcing with the adiabatic equation of state, the total
  energy of each cell must be updated by the kinetic energy injected
  into that cell; otherwise energy conservation is violated (see the
  sketch below);
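
  A sketch of the fix for a single cell, assuming the forcing changes
  the momentum from m to m + dm at fixed density rho; the injected
  kinetic energy then follows from the momenta alone:

      ! apply a forcing increment dm to the momentum of one cell and
      ! add the injected kinetic energy to the total energy, so that
      !   de = (|m + dm|**2 - |m|**2) / (2 * rho)
      subroutine apply_forcing(rho, dm, m, e)
        implicit none
        real, intent(in)                  :: rho  ! cell density
        real, dimension(3), intent(in)    :: dm   ! momentum increment
        real, dimension(3), intent(inout) :: m    ! cell momentum
        real, intent(inout)               :: e    ! total energy
        e = e + (sum((m + dm)**2) - sum(m**2)) / (2.0 * rho)
        m = m + dm
      end subroutine apply_forcing
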
- the initial version of MPI in boundary_correct_fluxes() works well,
  but it needs some rewriting, since the amount of data sent between
  processors is definitely too large;
- implement subroutine flux_rk2(), which calculates the fluxes at time
  t and at t + dt and averages them; the calculated flux is then passed
  to the variable update;
- implement subroutine advance_solution() which advances an array of
conserved variables using fluxes provided as an argument;
- use advance_solution() in update_solution() (see the sketch below);
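
  A one-dimensional sketch of both subroutines; the fluxes() routine
  below is a trivial upwind stand-in for the actual interface flux
  calculation, and all names except flux_rk2() and advance_solution()
  are illustrative:

      module rk2_sketch
        implicit none
        real, parameter :: a = 1.0  ! advection speed (assumption)
      contains
        ! upwind fluxes for linear advection with a > 0, standing in
        ! for the real solver; f(i) lies between cells i and i + 1
        subroutine fluxes(n, u, f)
          integer, intent(in)  :: n
          real,    intent(in)  :: u(n)
          real,    intent(out) :: f(0:n)
          integer :: i
          f(0) = a * u(1)  ! simple boundary treatment
          do i = 1, n
            f(i) = a * u(i)
          end do
        end subroutine fluxes
        ! advance the conserved variables using the provided fluxes
        subroutine advance_solution(n, dx, dt, f, u)
          integer, intent(in)    :: n
          real,    intent(in)    :: dx, dt, f(0:n)
          real,    intent(inout) :: u(n)
          integer :: i
          do i = 1, n
            u(i) = u(i) - dt / dx * (f(i) - f(i-1))
          end do
        end subroutine advance_solution
        ! calculate the fluxes at t and t + dt and average them; the
        ! result is then passed to the variable update
        subroutine flux_rk2(n, dx, dt, u, f)
          integer, intent(in)  :: n
          real,    intent(in)  :: dx, dt, u(n)
          real,    intent(out) :: f(0:n)
          real :: u1(n), f1(0:n), f2(0:n)
          call fluxes(n, u, f1)                     ! fluxes at t
          u1 = u
          call advance_solution(n, dx, dt, f1, u1)  ! predictor step
          call fluxes(n, u1, f2)                    ! fluxes at t + dt
          f = 0.5 * (f1 + f2)                       ! averaged flux
        end subroutine flux_rk2
      end module rk2_sketch
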
- subroutine boundary_correct_fluxes() sweeps over all leafs, and if it
  finds that a neighbor lies at a higher level, it corrects the
  numerical flux of the current block in that direction; this
  subroutine does not support MPI yet!;
- subroutine correct_flux() performs the actual correction of the
  numerical flux; only the part belonging to the domain is updated,
  since the ghost zones are updated anyway in the next step during the
  variable update;
- call boundary_correct_fluxes() from evolve() (see the sketch below);
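
  A sketch of the averaging rule for a single face in two dimensions,
  where a factor-of-two refinement covers each coarse face cell with
  two fine face cells (the actual correct_flux() works on whole block
  faces):

      ! replace the coarse-side numerical flux on a face shared with
      ! a finer neighbor by the average of the fine-side fluxes
      subroutine correct_flux(nc, fc, ff)
        implicit none
        integer, intent(in)    :: nc         ! coarse cells on the face
        real,    intent(inout) :: fc(nc)     ! coarse-side fluxes
        real,    intent(in)    :: ff(2*nc)   ! fine-side fluxes
        integer :: i
        do i = 1, nc
          fc(i) = 0.5 * (ff(2*i-1) + ff(2*i))
        end do
      end subroutine correct_flux
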
- after calculating the integrals of fluxes with a given order, we
  perform a one-step update of the conserved variables stored in the
  data blocks using the corresponding numerical fluxes stored in the
  same data block; the subroutine which performs this update is called
  update_solution();
- if the flag CONSERVATIVE is enabled, we need to return the numerical
  fluxes instead of their derivatives; the directional derivatives are
  calculated after the synchronization of the flux;
- the purpose of the new allocatable array in the block structure is to
  store the numerical fluxes, which need to be synchronized and then
  used to update the conserved variables in a conservative way;
- the CONSERVATIVE flag determines whether the scheme must be fully
  conservative; this matters only when the adaptive mesh is used; in
  that case, instead of updating the variables in each block
  independently, we first calculate a high order integration of the
  fluxes, then synchronize them between blocks at different levels, and
  finally perform a one-step update for each block using the corrected
  fluxes; this might also be a first step toward implementing Galerkin
  methods (see the outline below);
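
  An outline of the resulting order of operations in evolve();
  update_fluxes() is a hypothetical name for the stage which integrates
  and stores the fluxes, the other two subroutines are described above:

      ! with CONSERVATIVE enabled: integrate the fluxes to the chosen
      ! order, synchronize them across fine/coarse interfaces, then
      ! perform a single conservative update of every data block
      subroutine evolve_conservative()
        implicit none
        call update_fluxes()            ! hypothetical flux stage
        call boundary_correct_fluxes()  ! fix fluxes at level jumps
        call update_solution()          ! one-step conservative update
      end subroutine evolve_conservative
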
- the program now handles some of the termination signals properly,
  which means that if the program receives a signal to terminate its
  execution, it will finish the current loop, write the restart files,
  and then exit cleanly (see the sketch below);
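
  A sketch of the mechanism, using C interoperability to install a
  handler which only raises a flag (the signal numbers assume Linux;
  the real code may bind the signals differently):

      module signal_sketch
        use iso_c_binding
        implicit none
        logical, save :: terminate = .false.
        integer(c_int), parameter :: SIGINT = 2, SIGTERM = 15  ! Linux
        interface
          function c_signal(sig, handler) bind(c, name='signal')
            import :: c_int, c_funptr
            integer(c_int), value :: sig
            type(c_funptr), value :: handler
            type(c_funptr)        :: c_signal
          end function c_signal
        end interface
      contains
        ! the handler only raises a flag; the main loop checks it
        ! after each step, writes the restart files, and exits
        subroutine handler(sig) bind(c)
          integer(c_int), value :: sig
          terminate = .true.
        end subroutine handler
        subroutine install_handlers()
          type(c_funptr) :: old
          old = c_signal(SIGINT , c_funloc(handler))
          old = c_signal(SIGTERM, c_funloc(handler))
        end subroutine install_handlers
      end module signal_sketch
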
- a new file 'mesh.log' is created with the following columns: the
  step, the time, the number of leafs, the number of meta blocks, the
  coverage efficiency, which is the number of leafs divided by the
  number of top level blocks covering the whole domain, the AMR
  efficiency, which shows the advantage of using the adaptive mesh
  (with boundaries taken into account), and the block distribution over
  levels and processors;
- the AMR efficiency is the number of leafs multiplied by the number of
  cells in one block (with boundaries included) and divided by the
  effective resolution with boundaries included; if this parameter is
  smaller than 1.0, we should expect faster calculations thanks to the
  adaptive mesh; if it is larger than 1.0, the adaptive mesh only slows
  the calculations down (see the sketch below);
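
  A sketch of the diagnostic, assuming uniform blocks with nc cells per
  direction and ng ghost zones on each side (names are illustrative):

      ! the AMR efficiency: cells actually updated (leafs, boundaries
      ! included) over the cells of the equivalent uniform run; below
      ! 1.0 the adaptive mesh pays off, above 1.0 it slows the run
      real function amr_efficiency(nleafs, nc, ng, nd, rtop)
        implicit none
        integer, intent(in) :: nleafs    ! the number of leafs
        integer, intent(in) :: nc        ! cells per block direction
        integer, intent(in) :: ng        ! ghost zones on each side
        integer, intent(in) :: nd        ! the number of dimensions
        integer, intent(in) :: rtop(nd)  ! effective resolution
        amr_efficiency = real(nleafs) * real(nc + 2 * ng)**nd          &
                       / product(real(rtop + 2 * ng))
      end function amr_efficiency
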
- it seems that when we compare the side derivatives of the
  interpolated states to zero, there are some problems with symmetry;
  using a small value 'eps' instead of 0.0 solves this problem (see the
  sketch below);
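
  A sketch of the guarded comparison; the harmonic-mean limiter shown
  here is only an assumed example of a slope built from the side
  derivatives:

      ! comparing the product of the side derivatives against a small
      ! positive eps instead of 0.0 avoids breaking the symmetry when
      ! the product is a roundoff-level number of either sign
      real function limited_slope(dul, dur)
        implicit none
        real, intent(in) :: dul, dur  ! left/right side derivatives
        real, parameter  :: eps = 1.0e-16
        if (dul * dur > eps) then
          limited_slope = 2.0 * dul * dur / (dul + dur)  ! harmonic mean
        else
          limited_slope = 0.0
        end if
      end function limited_slope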