BOUNDARY CONDITIONS
- update boundary conditions starting from the block with the maximum
  refinement level and stepping down; this fixes the update of the
boundary conditions in the corners;
INTERPOLATION
- fix divergence free prolongation of the staggered magnetic field;
BOUNDARY CONDITIONS
- rewrite subroutine bnd_prol() and use expand_mag() in it;
- fix some index calculations in bnd_rest();
INTERPOLATION
- implement new subroutine expand_mag() performing divergence-free
prolongation of the staggered magnetic field;
- implement divergence-free correction in interpolation of cell
centered magnetic field in subroutine magtocen();
MESH
- use new subroutine expand_mag() in block prolongation;
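The idea behind the divergence-free prolongation can be illustrated with a minimal 2D sketch: one parent cell is split into 2x2 children, the outer child faces inherit the parent face values, and the interior faces are chosen so that every child cell keeps zero divergence. This is only a first-order, piecewise-constant illustration of the property expand_mag() maintains, not the actual subroutine (which presumably uses higher-order divergence-free interpolation); the function and variable names below are hypothetical.

```python
# 2D sketch of divergence-free prolongation of staggered magnetic field
# components for a single parent cell split into 2x2 children (dx = dy).
# Outer child faces copy the parent face values; setting both interior
# faces to the mean of the opposite parent faces keeps each child cell
# divergence free whenever the parent cell is.

def divfree_prolong_2d(bxl, bxr, byl, byr):
    """Return (left, right, bottom, top) face values of the four children.

    bxl, bxr: parent x-face values (left, right)
    byl, byr: parent y-face values (bottom, top)
    Assumes the parent cell is divergence free:
        (bxr - bxl) + (byr - byl) == 0   (with dx = dy)
    """
    bxm = 0.5 * (bxl + bxr)   # interior vertical face
    bym = 0.5 * (byl + byr)   # interior horizontal face
    return {
        "bottom-left":  (bxl, bxm, byl, bym),
        "bottom-right": (bxm, bxr, byl, bym),
        "top-left":     (bxl, bxm, bym, byr),
        "top-right":    (bxm, bxr, bym, byr),
    }

def cell_div(faces):
    """Discrete divergence of one child cell, up to a 1/(dx/2) factor."""
    l, r, b, t = faces
    return (r - l) + (t - b)

children = divfree_prolong_2d(1.0, 3.0, 5.0, 3.0)   # parent div = 0
assert all(abs(cell_div(f)) < 1e-12 for f in children.values())
```

Each child divergence works out to half the parent divergence, so the prolongation preserves a zero-divergence field exactly.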
BOUNDARY CONDITIONS
- use TVD interpolation for prolongation of the boundary conditions
CONFIGURATION
- set a lower limit on the number of ghost and domain cells
HOST FILES
- host files should not be included in revision control
INTERPOLATION
- define all arrays as REAL, not REAL(KIND=8) since the precision of
calculations is determined at the compilation stage
- replace the j0 and j1 indices with the more descriptive il and ir
MESH
- update mesh before calculating new time step
BOUNDARY CONDITIONS
- rewrite bnd_copy() so that the indices are prepared first and the
  destination array is filled with a single call; include the lower
index change for staggered magnetic field components;
- in the subroutine bnd_rest() implement correct restriction of the
staggered magnetic field components; take into account the change
of the lower index;
- in the subroutine bnd_prol() implement correct prolongation of the
staggered magnetic field components; take into account the change of
  the lower index;
- by default use 2 ghost cells and the restriction and expansion to
prevent errors resulting from interpolation;
CONFIGURATION
- enforce a lower limit on the number of ghost zones: 6 when MHD=Y and
  4 otherwise
INTERPOLATION
- remove unnecessary interpolation methods
- limit 'c' interpolation to the second order only
MESH
- implement divergence-free prolongation and restriction for the
staggered magnetic field components in the block refinement and
derefinement
SCHEME
- implement Flux-CT scheme for the staggered magnetic field
integration;
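The property the staggered update relies on can be demonstrated with a short sketch: when the face-centered field is advanced from edge-centered electromotive forces, the discrete divergence of every cell is unchanged to machine precision. This is only an illustration of that constrained-transport property, not the code's Flux-CT implementation (there the edge EMF is obtained by averaging the upwind fluxes); here the EMF array is arbitrary, and the grid and names are hypothetical.

```python
# 2D periodic grid: bx[i][j] lives on the x-face at (i-1/2, j),
# by[i][j] on the y-face at (i, j-1/2), and ez[i][j] on the cell
# corner (edge) at (i-1/2, j-1/2).
import random

N, dt, dx, dy = 4, 0.1, 1.0, 1.0

def div_b(bx, by):
    """Discrete divergence in every cell of the periodic grid."""
    return [[(bx[(i + 1) % N][j] - bx[i][j]) / dx +
             (by[i][(j + 1) % N] - by[i][j]) / dy
             for j in range(N)] for i in range(N)]

def ct_update(bx, by, ez):
    """One CT step: d(bx)/dt = -dEz/dy, d(by)/dt = +dEz/dx."""
    nbx = [[bx[i][j] - dt / dy * (ez[i][(j + 1) % N] - ez[i][j])
            for j in range(N)] for i in range(N)]
    nby = [[by[i][j] + dt / dx * (ez[(i + 1) % N][j] - ez[i][j])
            for j in range(N)] for i in range(N)]
    return nbx, nby

random.seed(1)
bx = [[random.random() for _ in range(N)] for _ in range(N)]
by = [[random.random() for _ in range(N)] for _ in range(N)]
ez = [[random.random() for _ in range(N)] for _ in range(N)]

d0 = div_b(bx, by)
bx, by = ct_update(bx, by, ez)
d1 = div_b(bx, by)
# the EMF contributions around each cell cancel in pairs
assert all(abs(d1[i][j] - d0[i][j]) < 1e-12
           for i in range(N) for j in range(N))
```

The four EMF values surrounding a cell enter its divergence twice with opposite signs, so any edge-centered EMF, however computed, leaves div(B) frozen at its initial value.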
BLOCK STRUCTURE
- use more space-efficient storage of the variables, which means
  storing only the staggered components of the magnetic field; the
  cell-centered components are calculated only when necessary;
EVOLUTION
- remove loops in the field updates; operations are performed on the
arrays;
BOUNDARY CONDITIONS
- remove loops in bnd_copy(); operations are now performed on the
  whole array;
INTERPOLATION
- subroutine magtocen() has been rewritten to avoid problems with
  array allocation; the array of all variables is now passed as an
  argument, and the subroutine internally uses indices of the
  face-centered and cell-centered magnetic field components;
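The core of the face-to-center interpolation is straightforward: along each direction the cell-centered component is the arithmetic mean of the two adjacent face values. A minimal 1D sketch (with a hypothetical index convention, not the actual magtocen() code):

```python
# Sketch: cell-centered magnetic field from staggered face values.
# Assumed convention: b_face[i] sits on the face between cells i-1
# and i, so cell i is bounded by faces i and i+1.

def face_to_center(b_face):
    """Average n+1 staggered face values to n cell centers (1D)."""
    return [0.5 * (b_face[i] + b_face[i + 1])
            for i in range(len(b_face) - 1)]

assert face_to_center([2.0, 2.0, 2.0]) == [2.0, 2.0]  # uniform preserved
assert face_to_center([0.0, 2.0, 4.0]) == [1.0, 3.0]  # linear is exact
```

This averaging is second-order accurate, which is consistent with a scheme whose divergence-free correction acts on the face values.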
MAKE
- add a flag defining the Flux-CT scheme;
PROBLEM
- use predefined array variables instead of allocatable ones;
Rewritten boundaries allow for proper handling of boundaries between
blocks at different refinement levels. Prolongation and restriction of
the boundaries are now improved.
Rewritten interpolation for prolongation and restriction.
References to the variable indices are now assigned more consistently.
VARIABLES
- add indices for the magnetic field components, both face- and
  cell-centered
SOLVER, MHD
- add support for the magnetohydrodynamic (MHD) equations to the
subroutines cons2prim(), prim2cons()
- add MHD flux and the fastest speed calculation in the subroutine
fluxspeed()
- include magnetosonic speed in the calculation of the maximum speed in
the system required for estimation of the new time step
- extend the HLL solver in subroutine hll() to support MHD
- calculate the magnetic field update according to a CT scheme in the
subroutine update()
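The two MHD ingredients mentioned above follow standard formulas: the fast magnetosonic speed combines the sound and Alfvén speeds, and the HLL flux blends the left/right fluxes using the extremal wave speeds. The sketch below uses these textbook expressions with illustrative names; it is not a transcription of fluxspeed() or hll(), and the code's exact variable layout may differ.

```python
import math

def fast_speed(rho, p, bx, by, bz, gamma=5.0 / 3.0):
    """Fast magnetosonic speed along x for ideal MHD (standard formula)."""
    a2   = gamma * p / rho                        # sound speed squared
    ca2  = (bx * bx + by * by + bz * bz) / rho    # Alfven speed squared
    cax2 = bx * bx / rho                          # Alfven speed squared along x
    return math.sqrt(0.5 * (a2 + ca2 +
                math.sqrt(max((a2 + ca2) ** 2 - 4.0 * a2 * cax2, 0.0))))

def hll_flux(ul, ur, fl, fr, sl, sr):
    """HLL flux for one conserved variable from left/right states,
    fluxes, and the slowest/fastest signal speeds sl <= sr."""
    if sl >= 0.0:
        return fl
    if sr <= 0.0:
        return fr
    return (sr * fl - sl * fr + sl * sr * (ur - ul)) / (sr - sl)

# With equal left/right states the HLL flux reduces to the common flux.
assert hll_flux(1.0, 1.0, 0.5, 0.5, -1.0, 1.0) == 0.5
# With B = 0 the fast speed reduces to the hydrodynamic sound speed.
assert abs(fast_speed(1.0, 0.6, 0.0, 0.0, 0.0)
           - math.sqrt(5.0 / 3.0 * 0.6)) < 1e-12
```

Because the fast speed bounds every MHD signal speed, using it in the maximum-speed search is what makes the new time-step estimate safe for MHD.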
INTERPOLATION
- add subroutine magtocen() to interpolate cell-centered magnetic field
EVOLUTION
- add evolution of the magnetic field components in the evolve_euler()
and evolve_rk2() time integration subroutines
- also call the subroutine magtocen() in the right places
BOUNDARY CONDITIONS
- so far the magnetic field boundary update is supported only between
  blocks at the same level; later we need to include proper
  restriction and prolongation for the magnetic field to keep its
  divergence equal to zero
PROBLEMS, BLAST
- extend the blast problem to include the initial magnetic field
IO, HDF5
- write the magnetic field components to the HDF5 file
DATA I/O, HDF5
- arrays of indices, levels, and the block bounds cannot be stored as
attributes since they can easily exceed the allowed space for
  attributes; instead, they have to be stored as datasets, which
  overcomes this limit
- store the lowest level block dimensions as well
IO, HDF5
- rewrite subroutine write_data() using more efficient storage,
especially with compression; we are storing only the domain part of
blocks and their primitive variables, which is selected by setting
ftype='p'; storing restart files with all parameters will be done
later
BLOCKS
- remove array c(:,:,:) from data_block which stored the values for
refinement/derefinement estimations
MESH REFINEMENT, INITIAL GEOMETRY
- if there is only one block and it is set not to be refined, refine it
anyway to catch situations when the initial resolution is too small
  for the problem initialization
MESH STRUCTURE, REFINEMENT
- subroutine prolong_block() has been completely rewritten; now it
supports 2D and 3D boxes, and the calculation of bounds for the
expanded array is automated now
MESH STRUCTURE, DEREFINEMENT
- subroutine restrict_block() has been completely rewritten; now it
supports 2D and 3D boxes, and the calculation of bounds for
source and destination arrays is automated
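For a cell-centered variable, the restriction performed during derefinement amounts to averaging each non-overlapping 2x2 (or 2x2x2) patch of child cells into one parent cell. A minimal 2D sketch of that operation (illustrative only; the actual restrict_block() also handles the automated bound calculation and the staggered components):

```python
def restrict_2d(child):
    """Average each non-overlapping 2x2 patch of the child array into
    one parent cell (cell-centered restriction; even dimensions)."""
    nx, ny = len(child), len(child[0])
    return [[0.25 * (child[2 * i][2 * j]     + child[2 * i + 1][2 * j] +
                     child[2 * i][2 * j + 1] + child[2 * i + 1][2 * j + 1])
             for j in range(ny // 2)] for i in range(nx // 2)]

assert restrict_2d([[1, 3], [5, 7]]) == [[4.0]]   # mean of the 2x2 patch
```

Averaging is conservative: the sum over the parent cells equals the sum over the children up to the volume factor, which is the property a restriction operator on conserved variables must keep.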
FIXES
- the neigh field of the meta block structure must use the ndims,
  nsides, and nfaces variables
BLOCK STRUCTURE
- use pmeta and pdata names for pointers to meta and data blocks,
respectively; this helps reading the code
HOSTS
- turn on DEBUG
MESH STRUCTURE, MPI
- implement auto balancing, which resends the blocks to the neighboring
processors in order to keep the same number of data blocks on each
processor
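The balancing idea can be sketched in one dimension along the rank ordering: each processor computes its surplus over the ideal share and passes it to the next rank, so that after the sweep every processor holds (nearly) the same number of data blocks. This is only a schematic of the counting logic, not the code's MPI implementation; names and the rank-ordering assumption are illustrative.

```python
def balance(counts):
    """Return moves[i], the net number of blocks rank i sends to rank
    i+1 (negative means blocks flow back), equalizing the counts in a
    single sweep along the rank ordering."""
    n = len(counts)
    total = sum(counts)
    # ideal share per rank; the remainder goes to the lowest ranks
    target = [total // n + (1 if i < total % n else 0) for i in range(n)]
    moves, carry = [], 0
    for i in range(n - 1):
        carry += counts[i] - target[i]   # cumulative surplus so far
        moves.append(carry)
    return moves

counts = [6, 1, 2, 3]
moves = balance(counts)
new = counts[:]
for i, m in enumerate(moves):
    new[i] -= m
    new[i + 1] += m
assert new == [3, 3, 3, 3]
```

Moving blocks only between neighboring ranks keeps the transfers local, which matches the changelog's description of resending blocks to neighboring processors.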
MESH STRUCTURE, MPI
- correct the initial division of the geometry, so each process is
attached to the same number of blocks; blocks at the top level are
allowed to be attached to different processors even if they are
children of the same block
MESH STRUCTURE, MPI
- if the child blocks are selected to be derefined and lie on different
processors, move them all to the processor of the parent block
BLOCK STRUCTURE, MPI
- in the best division calculation use the number of data blocks only,
  but if the blocks are at the top level and have the same parent,
  pack them together in the same process
MESH STRUCTURES
- since blocks which are not leaves are not used to calculate the
  field evolution, we don't need to keep their data blocks allocated;
  these changes allocate the parent data block only for blocks which
  are derefined, and deallocate the data block of a block selected for
  refinement; in the case of many blocks and many refinement levels
  these modifications save a substantial amount of memory
MESH STRUCTURE
- move the bounds of the block (xmin, xmax, ...) from data to the meta
structure; in this case we can deallocate data blocks without losing
  the information about their bounds
PROBLEMS
- implement the blast problem in a similar way to the one implemented
in GODUNOV code
- generate initial domain of 2x3 blocks at the lowest level; it should
be generalized to any dimension of blocks
MESH STRUCTURE
- add new variable rdims(1:3) specifying the dimensions of the initial
  block structure at the lowest level
- improve calculation of the effective resolution, and spatial
increments
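Under the assumed convention that level 1 is the base level, the effective resolution in each direction is the base block count times the cells per block, doubled for every additional refinement level, and the finest spatial increment follows by dividing the domain size by it. A small sketch of that arithmetic (the convention and names are assumptions, not taken from the code):

```python
def effective_resolution(rdims, ncells, maxlev):
    """Effective cells per direction: base blocks x cells per block,
    doubled for each level above the base (level 1 assumed base)."""
    return [r * ncells * 2 ** (maxlev - 1) for r in rdims]

def spatial_increments(xlen, ylen, eff):
    """Cell size at the finest level in each direction."""
    return [xlen / eff[0], ylen / eff[1]]

assert effective_resolution([2, 3], 8, 3) == [64, 96]
assert spatial_increments(1.0, 1.5, [64, 96]) == [0.015625, 0.015625]
```

With rdims = (2, 3) the two increments come out equal, i.e. square cells, which is the usual reason for matching rdims to the domain aspect ratio.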
MESH STRUCTURE
- subroutine mesh_update() performs refinement/derefinement of blocks
  using new structures of meta and data blocks; it is implemented to
  be MPI-friendly as well
- update subroutine derefine_block() to use new structures of meta and
data blocks
- update subroutines restrict_block() and prolong_block() to use new
meta and data structures