The boundary conditions finally work when using MPI with an arbitrary
number of processors. This is, however, a dirty hack and has to be done
in a better way later, since in its current form it may be very tricky
to generalize the code to 3D.
The next time step is now reduced to the minimum value over all
processes. This has been added in subroutine 'evolution'.
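A minimal sketch of this reduction, assuming the Fortran 'mpi' module and
a per-process local time step (the subroutine and variable names here are
only illustrative):

  subroutine reduce_timestep(dt)

    use mpi
    implicit none

    real(kind=8), intent(inout) :: dt
    real(kind=8)                :: dtlocal
    integer                     :: ierr

    ! reduce the local time step to the global minimum over all processes
    dtlocal = dt
    call mpi_allreduce(dtlocal, dt, 1, MPI_DOUBLE_PRECISION, MPI_MIN,       &
                       MPI_COMM_WORLD, ierr)

  end subroutine reduce_timestep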
The block structure contains a new field %pos, which specifies the
position of the child block in its parent.
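For illustration only, the relevant part of the block structure could look
as follows (the fields other than %pos, and the 2D layout, are
assumptions):

  type block
    integer              :: level     ! refinement level of the block
    integer              :: pos       ! position of this child in its parent
                                      ! (e.g. 1..4 in 2D; exact form assumed)
    logical              :: leaf      ! is this block a leaf?
    type(block), pointer :: parent    ! pointer to the parent block
    type(block), pointer :: next      ! next block on the block list
    real(kind=8), dimension(:,:,:), allocatable :: u  ! conserved variables
  end type block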
Now, in the boundary subroutine we collect information about the blocks
whose neighbors lie on a different processor. Using this information we
send the neighboring blocks and update the block boundaries. This is an
initial implementation, not very well optimized and still containing some
bugs.
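A rough sketch of the per-pair exchange, assuming blocking MPI and that
the boundary strip has already been packed into a contiguous buffer (all
names are illustrative):

  subroutine exchange_boundary(n, sbuf, rbuf, neighbor, tag)

    use mpi
    implicit none

    integer,                    intent(in)  :: n, neighbor, tag
    real(kind=8), dimension(n), intent(in)  :: sbuf   ! packed layer to send
    real(kind=8), dimension(n), intent(out) :: rbuf   ! received ghost layer
    integer                                 :: ierr

    ! swap one boundary strip with the process owning the neighbor block
    call mpi_sendrecv(sbuf, n, MPI_DOUBLE_PRECISION, neighbor, tag,         &
                      rbuf, n, MPI_DOUBLE_PRECISION, neighbor, tag,         &
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)

  end subroutine exchange_boundary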
Now we can define an arbitrary shape inside the domain for a given
problem, which is not updated during the evolution. This allows using
sources of any kind in problem studies, such as the colliding winds in
binary stars.
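As an illustration, such a shape can be reimposed on the conserved
variables after every step; the circular region, its radius and the values
below are purely an example:

  subroutine update_shapes(nv, nx, ny, x, y, u)

    implicit none

    integer,                           intent(in)    :: nv, nx, ny
    real(kind=8), dimension(nx),       intent(in)    :: x
    real(kind=8), dimension(ny),       intent(in)    :: y
    real(kind=8), dimension(nv,nx,ny), intent(inout) :: u

    real(kind=8), parameter :: rs = 0.1d0   ! radius of the source region
    integer                 :: i, j

    ! overwrite the cells lying inside the shape so the region never evolves
    do j = 1, ny
      do i = 1, nx
        if (x(i)**2 + y(j)**2 <= rs**2) then
          u(1,i,j) = 1.0d0                  ! fixed density
          u(2,i,j) = 0.0d0                  ! fixed momenta
          u(3,i,j) = 0.0d0
        end if
      end do
    end do

  end subroutine update_shapes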
The generation of dx, dy, dz for each level did not include the base
grid, i.e. they were two times larger than they should be. This is now
fixed. There are also plenty of minor corrections in different places.
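A sketch of the corrected generation, assuming the base grid counts as
level 1 and the block resolution is given by igrids and jgrids (introduced
elsewhere in this log); xlen, ylen and maxlev are illustrative names:

  subroutine generate_coordinates(maxlev, igrids, jgrids, xlen, ylen, dx, dy)

    implicit none

    integer,                         intent(in)  :: maxlev, igrids, jgrids
    real(kind=8),                    intent(in)  :: xlen, ylen
    real(kind=8), dimension(maxlev), intent(out) :: dx, dy
    integer                                      :: l

    ! level 1 is the base grid, hence the factor 2**(l-1) and not 2**l
    do l = 1, maxlev
      dx(l) = xlen / (igrids * 2**(l - 1))
      dy(l) = ylen / (jgrids * 2**(l - 1))
    end do

  end subroutine generate_coordinates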
It looks like the refining and derefining work more or less, at least
without interrupting the execution. Nevertheless, there are still some
artifacts, like the loss of symmetry after some time or inefficient
derefining of the mesh. These could be caused by the solver, however.
The refinement criterion is computed using pressure now.
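The exact form of the criterion is not spelled out here; one possible
pressure-based indicator, shown purely as an illustration, is a normalized
pressure difference over the block:

  function refinement_criterion(nx, ny, p) result(crit)

    implicit none

    integer,                        intent(in) :: nx, ny
    real(kind=8), dimension(nx,ny), intent(in) :: p   ! gas pressure
    real(kind=8)                               :: crit, dpx, dpy
    integer                                    :: i, j

    ! maximum normalized pressure jump over the block interior
    crit = 0.0d0
    do j = 2, ny - 1
      do i = 2, nx - 1
        dpx  = abs(p(i+1,j) - p(i-1,j)) / (p(i+1,j) + 2.0d0*p(i,j) + p(i-1,j))
        dpy  = abs(p(i,j+1) - p(i,j-1)) / (p(i,j+1) + 2.0d0*p(i,j) + p(i,j-1))
        crit = max(crit, dpx, dpy)
      end do
    end do

  end function refinement_criterion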
The subroutine 'boundary' sweeps over all leaf blocks. For each block it
sweeps over its neighbors and updates the boundaries. This is still an
initial version; it supports only neighboring blocks at the same level of
refinement.
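For a same-level pair, the update reduces to copying a layer of interior
cells from the neighbor into the ghost cells of the current block; a
sketch for a single direction (the array layout and the number of ghost
cells ng are assumptions):

  subroutine copy_right_boundary(nv, nx, ny, ng, u, uneigh)

    implicit none

    integer,                           intent(in)    :: nv, nx, ny, ng
    real(kind=8), dimension(nv,nx,ny), intent(inout) :: u       ! current block
    real(kind=8), dimension(nv,nx,ny), intent(in)    :: uneigh  ! right neighbor

    ! fill the right ghost layer of the block with the first interior
    ! cells of its right-side neighbor (both blocks at the same level)
    u(:,nx-ng+1:nx,:) = uneigh(:,ng+1:2*ng,:)

  end subroutine copy_right_boundary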
A function to calculate the maximum speed in a block has been added.
This function is used to determine the maximum speed globally, which is
then used to estimate the next time step.
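A sketch of such a function for pure hydrodynamics (the adiabatic index gm
and the variable layout are assumptions):

  function maxspeed(nx, ny, gm, dn, mx, my, en) result(cmax)

    implicit none

    integer,                        intent(in) :: nx, ny
    real(kind=8),                   intent(in) :: gm
    real(kind=8), dimension(nx,ny), intent(in) :: dn, mx, my, en
    real(kind=8)                               :: cmax, v, p, c
    integer                                    :: i, j

    cmax = 0.0d0
    do j = 1, ny
      do i = 1, nx
        v    = sqrt(mx(i,j)**2 + my(i,j)**2) / dn(i,j)           ! flow speed
        p    = (gm - 1.0d0) * (en(i,j) - 0.5d0 * dn(i,j) * v*v)  ! gas pressure
        c    = sqrt(gm * p / dn(i,j))                            ! sound speed
        cmax = max(cmax, v + c)
      end do
    end do

  end function maxspeed

The global maximum of this value over all blocks (and processes) then
enters the CFL condition, roughly dt = cfl * min(dx, dy) / cmax.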
We compute the dFdx contribution along each direction and update the
total dU. Apart from that, I have added the generation of coordinate
variables, like dx, dxi, etc., in the mesh module. The next step is to
implement the HLL solver.
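A sketch of the directional contribution, assuming the interface fluxes fx
along x are already known and dxi = 1/dx (the names are illustrative):

  subroutine update_du_x(nv, nx, ny, dxi, fx, du)

    implicit none

    integer,                             intent(in)    :: nv, nx, ny
    real(kind=8),                        intent(in)    :: dxi
    real(kind=8), dimension(nv,nx+1,ny), intent(in)    :: fx   ! interface fluxes
    real(kind=8), dimension(nv,nx,ny),   intent(inout) :: du   ! total increment

    integer :: i, j

    ! accumulate -dF/dx into the total increment; the y (and later z)
    ! sweeps add their contributions to the same array
    do j = 1, ny
      do i = 1, nx
        du(:,i,j) = du(:,i,j) - dxi * (fx(:,i+1,j) - fx(:,i,j))
      end do
    end do

  end subroutine update_du_x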
This new module contains subroutines to calculate the state vector update
using an approximate Riemann solver. The initial version contains only a
draft subroutine calculating the state vector increment dU.
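In outline, the module could look like this at that stage (a sketch; only
the skeleton of the draft increment routine is shown and its interface is
an assumption):

  module scheme

    implicit none

  contains

    subroutine update(nv, nx, ny, dxi, dyi, u, du)

      integer,                           intent(in)  :: nv, nx, ny
      real(kind=8),                      intent(in)  :: dxi, dyi
      real(kind=8), dimension(nv,nx,ny), intent(in)  :: u
      real(kind=8), dimension(nv,nx,ny), intent(out) :: du

      ! the flux computation based on an approximate Riemann solver goes
      ! here; the draft only prepares the increment array
      du(:,:,:) = 0.0d0

    end subroutine update

  end module scheme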
In addition, I have made some fixes to the problem initialization and
defined new variables igrids, jgrids, kgrids, which specify the
dimensions of the block.
One array U, which is a field in the BLOCK structure, stores all
variables. The number of variables is determined by the nvars parameter.
To access each variable we now use a variable index, like idn, imx, imy,
imz, ien, etc.
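A sketch of how this indexing might be organized (the particular values of
nvars and of the indices depend on the set of equations solved):

  module variables

    implicit none

    integer, parameter :: nvars = 5        ! number of conserved variables
    integer, parameter :: idn   = 1        ! density
    integer, parameter :: imx   = 2        ! x-momentum
    integer, parameter :: imy   = 3        ! y-momentum
    integer, parameter :: imz   = 4        ! z-momentum
    integer, parameter :: ien   = 5        ! total energy

  end module variables

With this, the block field is allocated as U(nvars,igrids,jgrids,kgrids)
and accessed as, e.g., U(idn,i,j,k).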
A new module for the time integration has been added. This module
contains a set of subroutines performing a one-step time integration of
each leaf block using the 2nd order Runge-Kutta method. More methods can
be added later. Time 't', timestep 'dt' and iteration 'n' have been moved
to this module as well.
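A sketch of one such step for a single block, assuming the increment dU
comes from the update routine of the scheme module sketched above:

  subroutine advance_rk2(nv, nx, ny, dxi, dyi, dt, u)

    use scheme, only : update   ! assumed interface from the scheme module

    implicit none

    integer,                           intent(in)    :: nv, nx, ny
    real(kind=8),                      intent(in)    :: dxi, dyi, dt
    real(kind=8), dimension(nv,nx,ny), intent(inout) :: u

    real(kind=8), dimension(nv,nx,ny) :: u1, du

    ! 2nd order Runge-Kutta (Heun) step: predictor followed by corrector
    call update(nv, nx, ny, dxi, dyi, u, du)
    u1 = u + dt * du
    call update(nv, nx, ny, dxi, dyi, u1, du)
    u  = 0.5d0 * (u + u1 + dt * du)

  end subroutine advance_rk2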