BLOCK STRUCTURE
- add new array to the data block structure to store the electromotive
  force components; these components are located at the centers of cell
  edges, which makes the CT update of the staggered magnetic field
  components easier;
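
A minimal sketch of how such an edge-centered EMF array might be stored
in the block data type follows; the names (block_data, im, jm, km) are
assumptions for illustration, not the actual declarations.

  ! hypothetical sketch of the data block type with the new EMF array
  module blocks_sketch

    implicit none

    integer, parameter :: im = 16, jm = 16, km = 16  ! cells per block (example)

    type block_data
      ! ... conserved variables, staggered magnetic field components, ...
      ! EMF components located at the centers of cell edges, one
      ! component per direction, used by the CT update of the staggered
      ! magnetic field
      real, dimension(3, im, jm, km) :: emf
    end type block_data

  end module blocks_sketch
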
EVOLUTION
- add new subroutine advance() which performs several substeps in order
  to advance the solution in time by one step; the substeps are the
  update of the numerical fluxes, the flux boundaries, the advance of
  the solution in time, the update of the mesh structure and of the
  boundaries of conserved variables, and finally the estimation of the
  new time step (see the sketch after this list);
- add new subroutine update_flux() to update the numerical fluxes
stored in the data blocks;
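
As a rough illustration of the order of the substeps inside advance(),
a sketch follows; apart from update_flux() the subroutine names are
placeholders for the steps listed above, not the actual routine names.

  subroutine advance()

    implicit none

    call update_flux()          ! update the numerical fluxes in the data blocks
    call boundary_flux()        ! update the flux boundaries between blocks
    call update_solution()      ! advance the solution in time
    call update_mesh()          ! refine/derefine the mesh structure
    call boundary_variables()   ! update boundaries of the conserved variables
    call new_time_step()        ! estimate the new time step

  end subroutine advance
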
SCHEME
- add new subroutine numerical_flux() to calculate fluxes at the proper
locations;
- add new logical argument to HLL and HLLC in order to specify whether
  the flux should be returned as a numerical flux or as its derivative;
EVOLUTION
- add new output argument returning the updated fluxes for each block;
SCHEME
- update the fluxes from the one-dimensional fluxes returned by the
  Riemann solver;
BOUNDARY CONDITIONS
- use TVD interpolation for prolongation of the boundary conditions
CONFIGURATION
- put a lower limit on the number of ghost and domain cells
HOST FILES
- host files should not be included in revision control
INTERPOLATION
- define all arrays as REAL, not REAL(KIND=8), since the precision of
  the calculations is determined at the compilation stage (see the
  sketch after this list)
- replace the j0 and j1 indices with the new, more obvious il and ir
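
As an illustration of the first point, arrays are declared with the
default REAL kind and the working precision is selected at compilation,
e.g. with -fdefault-real-8 (gfortran) or -r8 (ifort); the subroutine
below is only a made-up example using the il and ir indices.

  ! compile with e.g. -fdefault-real-8 to promote the default REAL kind
  subroutine interpolate_sketch(il, ir, u, ui)

    implicit none

    integer              , intent(in)  :: il, ir  ! left/right cell indices
    real   , dimension(:), intent(in)  :: u       ! interpolated profile
    real                 , intent(out) :: ui      ! interpolated value

    ! plain average of the two neighbouring cells, for illustration only
    ui = 0.5 * (u(il) + u(ir))

  end subroutine interpolate_sketch
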
MESH
- update mesh before calculating new time step
SCHEME
- implement Flux-CT scheme for the staggered magnetic field
integration;
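
A two-dimensional sketch of the Flux-CT idea follows: the edge-centered
EMF is taken as the average of the four face fluxes of the transverse
magnetic field components surrounding the edge, and the staggered field
is then advanced with the curl of that EMF, which keeps div(B) constant
to machine precision. The array names, index ranges and sign
conventions below are assumptions, not copied from the code.

  subroutine flux_ct_2d(fx_by, fy_bx, dt, dx, dy, bx, by)

    implicit none

    real, dimension(:,:), intent(in)    :: fx_by   ! x-flux of By at x-faces
    real, dimension(:,:), intent(in)    :: fy_bx   ! y-flux of Bx at y-faces
    real                , intent(in)    :: dt, dx, dy
    real, dimension(:,:), intent(inout) :: bx, by  ! staggered field components

    real, dimension(size(bx,1),size(bx,2)) :: ez
    integer :: i, j, ni, nj

    ni = size(bx, 1)
    nj = size(bx, 2)

    ! edge-centered EMF as the average of the four neighbouring face fluxes
    do j = 2, nj
      do i = 2, ni
        ez(i,j) = 0.25 * (fy_bx(i-1,j) + fy_bx(i,j)                          &
                        - fx_by(i,j-1) - fx_by(i,j))
      end do
    end do

    ! CT update: the staggered field is advanced with the curl of the EMF
    do j = 2, nj - 1
      do i = 2, ni - 1
        bx(i,j) = bx(i,j) - dt * (ez(i,j+1) - ez(i,j)) / dy
        by(i,j) = by(i,j) + dt * (ez(i+1,j) - ez(i,j)) / dx
      end do
    end do

  end subroutine flux_ct_2d
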
BLOCK STRUCTURE
- use more space-efficient storage of the variables, i.e. store only
  the staggered components of the magnetic field; the cell-centered
  components are calculated only when necessary;
EVOLUTION
- remove loops in the field updates; operations are now performed on
  whole arrays;
BOUNDARY CONDITIONS
- remove loops in bnd_copy; operations are now performed on the whole
  array;
INTERPOLATION
- subroutine magtocen() has been rewritten to avoid problems with array
  allocation; the array of all variables is now passed as an argument,
  and the subroutine uses the indices of the face-centered and
  cell-centered magnetic field components internally;
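
A minimal sketch of the rewritten interface: the whole array of
variables is passed in, and the face-centered and cell-centered field
indices are used internally; the index names and values below are
hypothetical.

  subroutine magtocen(u)

    implicit none

    ! hypothetical index parameters; in the code they come from the
    ! variable index definitions
    integer, parameter :: ibx = 6, iby = 7, ibz = 8    ! face-centered field
    integer, parameter :: icx = 9, icy = 10, icz = 11  ! cell-centered field

    real, dimension(:,:,:,:), intent(inout) :: u       ! all variables of a block

    integer :: ni, nj, nk

    ni = size(u, 2)
    nj = size(u, 3)
    nk = size(u, 4)

    ! cell-centered components as averages of the two bounding face values
    u(icx,2:ni,:,:) = 0.5 * (u(ibx,1:ni-1,:,:) + u(ibx,2:ni,:,:))
    u(icy,:,2:nj,:) = 0.5 * (u(iby,:,1:nj-1,:) + u(iby,:,2:nj,:))
    u(icz,:,:,2:nk) = 0.5 * (u(ibz,:,:,1:nk-1) + u(ibz,:,:,2:nk))

  end subroutine magtocen
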
MAKE
- add flag defining Flux-CT scheme;
PROBLEM
- use predefined array variables instead of allocatable ones;
The rewritten boundaries allow for proper handling of boundaries
between blocks at different refinement levels. The prolongation and
restriction of the boundaries are improved now.
The interpolation for prolongation and restriction has been rewritten.
References to the variable indices are now assigned more consistently.
VARIABLES
- add indices for the magnetic field components, both face- and
  cell-centered
SOLVER, MHD
- add support for the magnetohydrodynamic (MHD) equations to the
subroutines cons2prim(), prim2cons()
- add the MHD flux and the fastest speed calculation in the subroutine
  fluxspeed() (see the sketch after this list)
- include the magnetosonic speed in the calculation of the maximum
  speed in the system, which is required for the estimation of the new
  time step
- extend the HLL solver in subroutine hll() to support MHD
- calculate the magnetic field update according to a CT scheme in the
subroutine update()
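
As a sketch of the fastest-speed part, the fast magnetosonic speed
along the x direction could be computed as below; the names are
illustrative and units with the magnetic permeability absorbed into B
are assumed, so this is not the actual fluxspeed() code.

  function fast_speed_x(dn, pr, bx, by, bz, gamma) result(cf)

    implicit none

    real, intent(in) :: dn, pr, bx, by, bz, gamma
    real             :: cf

    real :: a2, ca2, cax2

    a2   = gamma * pr / dn                ! sound speed squared
    ca2  = (bx*bx + by*by + bz*bz) / dn   ! total Alfven speed squared
    cax2 = bx*bx / dn                     ! Alfven speed squared along x

    ! fast magnetosonic speed for propagation along x
    cf = sqrt(0.5 * (a2 + ca2 + sqrt((a2 + ca2)**2 - 4.0 * a2 * cax2)))

  end function fast_speed_x
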
INTERPOLATION
- add subroutine magtocen() to interpolate the cell-centered magnetic
  field
EVOLUTION
- add evolution of the magnetic field components in the evolve_euler()
and evolve_rk2() time integration subroutines
- also call the subroutine magtocen() in the right places
BOUNDARY CONDITIONS
- support the magnetic field boundary update only in the case of blocks
  at the same refinement level so far; later we need to include proper
  restriction and prolongation of the magnetic field to keep its
  divergence equal to zero
PROBLEMS, BLAST
- extend the blast problem to include the initial magnetic field
IO, HDF5
- write the magnetic field components to the HDF5 file
MESH STRUCTURE
- subroutine mesh_update() performs refinement/derefinement of blocks
  using the new structures of meta and data blocks; it is implemented
  to be MPI friendly as well
- update subroutine derefine_block() to use new structures of meta and
data blocks
- update subroutines restrict_block() and prolong_block() to use new
meta and data structures
BOUNDARY CONDITIONS
- using a list of info blocks, the implementation of boundary
  conditions for neighbor blocks lying on different processors has been
  successfully completed; it is simpler and clearer than the previous
  implementation
BLOCK STRUCTURE: MPI
- divide initial block structure between all processors and remove
non-local data blocks
DATA I/O
- dump the total number of blocks and the number of data blocks
BOUNDARY CONDITIONS
- initial work on the boundary conditions for the new structure of
  blocks
It seems to be pretty complex. So far I collect the blocks selected for
refinement which have neighbors lying on other processors. Using this
information I mark the corresponding neighbors from other processors
for refinement if required. The remaining tasks are the neighbor update
after refinement and the MPI version of derefinement.
The boundary conditions finally work when using MPI with an arbitrary
number of processors. This is, however, a dirty hack. It has to be done
in a better way later, since in its current form it may be very tricky
to generalize the code to 3D.
The next time step should be reduced to its minimum value over all
processes. This has now been added in the subroutine 'evolution'.
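
A sketch of this reduction, assuming plain MPI calls; the MPI datatype
has to match the real kind actually used in the build.

  subroutine reduce_timestep(dt)

    use mpi

    implicit none

    real   , intent(inout) :: dt
    real                   :: dt_local
    integer                :: ierr

    dt_local = dt

    ! take the minimum time step over all processes (use MPI_REAL8 or
    ! MPI_DOUBLE_PRECISION if the default real kind is promoted)
    call mpi_allreduce(dt_local, dt, 1, MPI_REAL, MPI_MIN,                   &
                       MPI_COMM_WORLD, ierr)

  end subroutine reduce_timestep
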
The block structure contains a new field %pos which specifies the
position of the child block in its parent.
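
A sketch of what the new field might look like in the block type;
whether %pos is a single index or one index per dimension is an
assumption here.

  module blocks_pos_sketch

    implicit none

    type block_meta_sketch
      ! ... existing fields: level, parent, children, neighbors, ...
      integer :: pos   ! position of this child block within its parent
    end type block_meta_sketch

  end module blocks_pos_sketch
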
Now, in the boundary subroutine we collect information about the blocks
whose neighbors lie on a different processor. Using this information we
send the neighboring blocks and update the block boundaries. This is an
initial implementation, not very well optimized and containing some
bugs.
Now, we can define any shape for a given problem inside the domain
which is not updated during the evolution. This allows for using
sources of any kind in problem studies, such as the colliding winds in
binary stars.
The generation of dx, dy, dz for each level didn't include the base
grid, i.e. they were 2 times larger than they should be. This is now
fixed. There are also plenty of minor corrections in different places.
It looks like refining and derefining work more or less correctly, at
least without interrupting the execution. Nevertheless, there are still
some artifacts, like the loss of symmetry after some time or
inefficient derefinement of the mesh. This could be caused by the
solver, however.
The refinement criterion is computed using pressure now.
The subroutine 'boundary' sweeps over all leaf blocks. For each block
it sweeps over its neighbors and updates the boundaries. This is still
an initial version; it supports only neighboring blocks at the same
refinement level.
The function to calculate the maximum speed in a block has been added.
This function is used to determine the maximum speed globally, which is
then used to estimate the next time step.
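
For illustration, the global maximum speed could translate into the
next time step roughly as follows; the Courant factor name and value
are assumptions.

  subroutine new_time_step(cmax, dx_min, dt)

    implicit none

    real, intent(in)  :: cmax     ! maximum speed over all leaf blocks
    real, intent(in)  :: dx_min   ! smallest cell size in the mesh
    real, intent(out) :: dt

    real, parameter :: cfl = 0.4  ! assumed Courant factor

    ! CFL condition; guard against a vanishing maximum speed
    dt = cfl * dx_min / max(cmax, tiny(cmax))

  end subroutine new_time_step
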
We compute the dFdx contribution along each direction and update the
total dU. Apart from that, I've added the generation of coordinate
variables, like dx, dxi, etc., in the mesh module. The next step is to
implement the HLL solver.
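
Schematically, the directional contributions combine into the total
increment as below; the array names are illustrative only.

  subroutine total_increment(dfdx, dfdy, dfdz, du)

    implicit none

    real, dimension(:,:,:,:), intent(in)  :: dfdx, dfdy, dfdz  ! directional terms
    real, dimension(:,:,:,:), intent(out) :: du                ! total increment

    ! dU/dt = - dF/dx - dG/dy - dH/dz
    du = - (dfdx + dfdy + dfdz)

  end subroutine total_increment
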
This new module contains subroutines to calculate state vector update
using an approximate Riemann solver. The initial version contains only
a draft subroutine calculating the state vector increment dU.
In addition, I've done some fixes to the problem initialization, and I
defined new variables igrids, jgrids, kgrids, which specify the
dimensions of the block.
One array U, which is a field in the BLOCK structure, stores all
variables. The number of variables is determined by the nvars
parameter. To access each variable we now use a variable index, like
idn, imx, imy, imz, ien, etc.
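
A sketch of the indexing convention; the parameter values are only
examples.

  module variables_sketch

    implicit none

    integer, parameter :: nvars = 5                   ! variables per cell
    integer, parameter :: idn = 1                     ! density
    integer, parameter :: imx = 2, imy = 3, imz = 4   ! momentum components
    integer, parameter :: ien = 5                     ! total energy

  end module variables_sketch

A given variable of a block is then accessed as, e.g., u(idn,i,j,k).
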
A new module for the time integration has been added. This module
contains a set of subroutines to perform one step time integration of
each leaf block using 2nd order Runge-Kutta method. More methods can be
added later. Time 't', timestep 'dt' and iteration 'n' have been moved
to this module as well.
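
A minimal sketch of such a 2nd order Runge-Kutta (Heun-type) step for
one block; increment() stands for the spatial update dU delivered by
the scheme module and here only returns zero so that the sketch is
self-contained.

  module evolution_sketch

    implicit none

  contains

    subroutine evolve_rk2(u, dt)

      real, dimension(:,:,:,:), intent(inout) :: u
      real                    , intent(in)    :: dt

      real, dimension(size(u,1),size(u,2),size(u,3),size(u,4)) :: u1, du

      call increment(u , du)          ! first stage
      u1 = u + dt * du

      call increment(u1, du)          ! second stage
      u  = 0.5 * (u + u1 + dt * du)   ! average of both stages

    end subroutine evolve_rk2

    subroutine increment(u, du)

      real, dimension(:,:,:,:), intent(in)  :: u
      real, dimension(:,:,:,:), intent(out) :: du

      du = 0.0 * u    ! placeholder for -dF/dx - dG/dy - dH/dz

    end subroutine increment

  end module evolution_sketch
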