128 Commits

Grzegorz Kowal
98849f3369 Prepare code for central differencing scheme for magnetic field. 2010-02-22 12:01:58 -03:00
Grzegorz Kowal
bbf998e4a0 Boundaries, interpolation, indices.
The rewritten boundaries allow for proper handling of boundaries between
blocks at different refinement levels. Prolongation and restriction of
the boundaries are improved now.

Rewritten interpolation for prolongation and restriction.

References to the variable indices are now assigned more consistently.
2010-02-11 23:30:46 -02:00
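As a rough illustration of the restriction and prolongation mentioned in the commit above, the sketch below averages a fine boundary strip down to a coarse one and injects coarse values into a fine one. The routine names, array shapes, and the piecewise-constant choice are assumptions made for the sketch, not taken from the repository.

    ! Restrict a fine boundary strip to a coarse one by averaging 2x2 cells,
    ! and prolong a coarse strip to a fine one by piecewise-constant injection.
    ! Illustrative sketch only; the array layout is assumed, not the code's.
    subroutine restrict_bnd(fine, coarse, nf, mf)
      implicit none
      integer, intent(in)  :: nf, mf              ! fine strip dimensions (even)
      real   , intent(in)  :: fine(nf, mf)
      real   , intent(out) :: coarse(nf/2, mf/2)
      integer :: i, j
      do j = 1, mf/2
        do i = 1, nf/2
          coarse(i,j) = 0.25 * (fine(2*i-1,2*j-1) + fine(2*i,2*j-1) &
                              + fine(2*i-1,2*j  ) + fine(2*i,2*j  ))
        end do
      end do
    end subroutine restrict_bnd

    subroutine prolong_bnd(coarse, fine, nc, mc)
      implicit none
      integer, intent(in)  :: nc, mc              ! coarse strip dimensions
      real   , intent(in)  :: coarse(nc, mc)
      real   , intent(out) :: fine(2*nc, 2*mc)
      integer :: i, j
      do j = 1, mc
        do i = 1, nc
          fine(2*i-1:2*i, 2*j-1:2*j) = coarse(i,j)   ! inject the parent value
        end do
      end do
    end subroutine prolong_bnd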
Grzegorz Kowal
beee9ee2d4 Initial support for the MHD equations.
VARIABLES

 - add indices for the magnetic field components, both face and cell
   centered

SOLVER, MHD

 - add support for the magnetohydrodynamic (MHD) equations to the
   subroutines cons2prim(), prim2cons()

 - add MHD flux and the fastest speed calculation in the subroutine
   fluxspeed()

 - include magnetosonic speed in the calculation of the maximum speed in
   the system required for estimation of the new time step

 - extend the HLL solver in subroutine hll() to support MHD

 - calculate the magnetic field update according to a CT scheme in the
   subroutine update()

INTERPOLATION

 - add subroutine magtocen() to interpolate cell centered magnetic field

EVOLUTION

 - add evolution of the magnetic field components in the evolve_euler()
   and evolve_rk2() time integration subroutines

 - also call the subroutine magtocen() in the right places

BOUNDARY CONDITIONS

 - support the magnetic field boundary update only for blocks at the
   same level so far; later we need to include proper restriction and
   prolongation for the magnetic field to keep its divergence equal to
   zero

PROBLEMS, BLAST

 - extend the blast problem to include the initial magnetic field

IO, HDF5

 - write the magnetic field components to the HDF5 file
2009-10-28 00:12:18 -02:00
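For context, the fastest wave speed that fluxspeed() has to provide in MHD is the fast magnetosonic speed. Below is a minimal sketch of that calculation along the x direction, assuming an adiabatic equation of state; the function and variable names are illustrative assumptions, not identifiers from the code.

    ! Fast magnetosonic speed along x for an adiabatic MHD state.
    ! Illustrative sketch; names are assumptions, not the code's.
    function fastspeed_x(dn, pr, bx, by, bz, gamma) result(cf)
      implicit none
      real, intent(in) :: dn, pr, bx, by, bz, gamma
      real             :: cf
      real             :: a2, ca2, cax2
      a2   = gamma * pr / dn                     ! sound speed squared
      ca2  = (bx*bx + by*by + bz*bz) / dn        ! Alfven speed squared
      cax2 = bx*bx / dn                          ! Alfven speed along x, squared
      cf   = sqrt(0.5 * (a2 + ca2 + sqrt((a2 + ca2)**2 - 4.0 * a2 * cax2)))
    end function fastspeed_x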
Grzegorz Kowal
0924f90bd7 Implement mesh update uniprocessors and MPI-friendly.
MESH STRUCTURE

 - subroutine mesh_update() performs refinement/derefinement of blocks
   using the new structures of meta and data blocks; it is implemented
   to be MPI friendly as well

 - update subroutine derefine_block() to use new structures of meta and
   data blocks

 - update subroutines restrict_block() and prolong_block() to use new
   meta and data structures
2009-09-21 16:57:34 -03:00
Grzegorz Kowal
25d035fd68 Fix prolongation of the boundary conditions.
BOUNDARY CONDITIONS

 - fix prolongation of the boundary conditions
 - use the same subroutines for the uniprocessor and MPI version
   boundaries
2009-09-21 01:12:32 -03:00
Grzegorz Kowal
f399274574 Finish implementing proper handling of boundary conditions for MPI.
BOUNDARY CONDITIONS

 - using a list of info blocks, the implementation of boundary
   conditions for neighbor blocks lying on different processors is
   successfully completed; it is simpler and clearer than the previous
   implementation
2009-09-18 20:34:23 -03:00
Grzegorz Kowal
ac188d32e4 Divide blocks among processors. Initial work on boundaries.
BLOCK STRUCTURE: MPI

 - divide the initial block structure among all processors and remove
   the non-local data blocks

DATA I/O

 - dump the total number of blocks and the number of data blocks

BOUNDARY CONDITIONS

 - initial work on the boundary conditions for the new structure of blocks
2009-09-14 19:15:21 -03:00
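One simple way to divide the initial block structure is a round-robin assignment of blocks to ranks; the sketch below shows such a generic scheme with hypothetical names, and is not necessarily the distribution actually used in the code.

    ! Round-robin assignment of nblocks blocks to nprocs processes.
    ! Hypothetical sketch; the actual distribution in the code may differ.
    subroutine divide_blocks(nblocks, nprocs, owner)
      implicit none
      integer, intent(in)  :: nblocks, nprocs
      integer, intent(out) :: owner(nblocks)   ! owner(l) = rank holding block l
      integer :: l
      do l = 1, nblocks
        owner(l) = mod(l - 1, nprocs)          ! ranks are 0 .. nprocs-1
      end do
    end subroutine divide_blocks

A data block would then be kept only when owner(l) matches the local process rank, and the remaining data blocks would be removed.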
Grzegorz Kowal
bf5e332ce1 Evolve new list of data blocks.
TIME INTEGRATION

 - now update the solution using the new list of data blocks belonging
   to the current process only
2009-09-14 18:28:17 -03:00
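Updating only the local data blocks presumably amounts to a sweep over a linked list of this kind; the block_data type and its fields below are assumptions made for illustration.

    ! Sketch of sweeping the local list of data blocks during one update step.
    ! The block_data type and its fields are assumed for illustration only.
    module data_list_sketch
      implicit none
      type :: block_data
        real, allocatable         :: u(:,:,:)   ! conserved variables on the block
        type(block_data), pointer :: next => null()
      end type block_data
      type(block_data), pointer :: list_data => null()  ! head of the local list
    contains
      subroutine update_local_blocks()
        type(block_data), pointer :: pblock
        pblock => list_data
        do while (associated(pblock))
          ! the solution stored in pblock%u is advanced by one step here
          pblock => pblock%next
        end do
      end subroutine update_local_blocks
    end module data_list_sketch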
Grzegorz Kowal
7827e60f30 Rework subroutine refine_block() to use new meta and data blocks. 2009-09-11 21:52:18 -03:00
Grzegorz Kowal
c216adb422 First approach to implement MPI refinement.
It seems to be pretty complex. So far I collect blocks selected for
refinement which have neighbors lying on other processors. Using this I
mark the neighbors from other processors for refinement if required. The
remaining tasks are the neighbor update after refinement and the MPI
version of derefinement.
2009-01-03 22:49:04 -06:00
Grzegorz Kowal
c686e301fc Fix boundary update and reduce the next time step over all CPUs.
The boundary conditions finally work when using MPI with an arbitrary
number of processors. This is, however, a dirty hack. It has to be done
in a better way later, since for now it may be very tricky to generalize
the code to 3D.

The next time step should be reduced to the minimum value over all
processes. This has been added in subroutine 'evolution' now.

The block structure contains a new field %pos which specifies the
position of the child block in its parent.
2009-01-02 20:18:57 -06:00
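Reducing the time step to the minimum over all processes is a single collective operation; a minimal sketch using MPI_Allreduce, with assumed variable names:

    ! Reduce the local time step estimate to the global minimum over all ranks.
    ! Minimal sketch; variable names are assumptions.
    subroutine reduce_timestep(dt_local, dt_global)
      use mpi
      implicit none
      real(kind=8), intent(in)  :: dt_local
      real(kind=8), intent(out) :: dt_global
      integer :: ierr
      call MPI_Allreduce(dt_local, dt_global, 1, MPI_DOUBLE_PRECISION, &
                         MPI_MIN, MPI_COMM_WORLD, ierr)
    end subroutine reduce_timestep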
Grzegorz Kowal
6c620a97cf Implement boundary block exchange between processes.
Now, in the boundary subroutine we collect information about the blocks
whose neighbors lie on a different processor. Using this information we
send the neighboring blocks and update the block boundaries. This is the
initial implementation, not very well optimized and containing some
bugs.
2008-12-31 12:02:36 -06:00
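The exchange of block boundaries between two processes could follow a point-to-point pattern like the sketch below; the buffer layout, tags, and the rank-ordered send/receive are illustrative assumptions, not the code's actual implementation.

    ! Exchange one packed boundary layer of a block between two ranks.
    ! Illustrative only; the actual buffers and tags in the code may differ.
    subroutine exchange_boundary(slab, n, peer, myrank)
      use mpi
      implicit none
      integer, intent(in)         :: n, peer, myrank
      real(kind=8), intent(inout) :: slab(n)        ! packed boundary layer
      real(kind=8) :: recvbuf(n)
      integer :: ierr, status(MPI_STATUS_SIZE)
      if (myrank < peer) then      ! fixed ordering avoids a send/send deadlock
        call MPI_Send(slab, n, MPI_DOUBLE_PRECISION, peer, 100, MPI_COMM_WORLD, ierr)
        call MPI_Recv(recvbuf, n, MPI_DOUBLE_PRECISION, peer, 101, MPI_COMM_WORLD, status, ierr)
      else
        call MPI_Recv(recvbuf, n, MPI_DOUBLE_PRECISION, peer, 100, MPI_COMM_WORLD, status, ierr)
        call MPI_Send(slab, n, MPI_DOUBLE_PRECISION, peer, 101, MPI_COMM_WORLD, ierr)
      end if
      slab = recvbuf                                ! overwrite ghosts with peer data
    end subroutine exchange_boundary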
Grzegorz Kowal
1aa75ac2ef Change the leaf field of the block to be logical.
This change should speed up the conditionals a bit, since a logical
comparison is faster than a character comparison.
2008-12-19 00:02:05 -06:00
Grzegorz Kowal
1206194deb Add support for the shapes in the domain.
Now, we can define an arbitrary shape inside the domain for a given
problem, which is not updated during the evolution. This allows for
using sources of any kind in problem studies, such as the colliding
winds in binary stars.
2008-12-18 12:18:36 -06:00
Grzegorz Kowal
e56aee34c0 Play more with the refinements criterion. 2008-12-16 13:40:34 -06:00
Grzegorz Kowal
f673596f20 Refinement based on density. More cleanups. 2008-12-13 22:41:37 -06:00
Grzegorz Kowal
b39673bf85 Fixed generation of dx, dy, and dz.
The generation of dx, dy, dz for each level didn't include the base
grid, i.e. they were 2 times too large. This is fixed now. There are
also plenty of minor corrections in different places.
2008-12-13 21:05:51 -06:00
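The fix implies the usual factor-of-two law for the cell size per level; a sketch of how dx could be generated for each level, assuming a domain [xmin, xmax] covered by nx base-grid cells (names are illustrative):

    ! Generate cell sizes for each refinement level; level 1 is the base grid,
    ! and every level halves the cell size. Names here are illustrative.
    subroutine generate_dx(xmin, xmax, nx, maxlev, dx)
      implicit none
      real   , intent(in)  :: xmin, xmax
      integer, intent(in)  :: nx, maxlev
      real   , intent(out) :: dx(maxlev)
      integer :: l
      do l = 1, maxlev
        dx(l) = (xmax - xmin) / (real(nx) * 2.0**(l - 1))
      end do
    end subroutine generate_dx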
Grzegorz Kowal
7bc74a39d9 Initial version of refinement/derefinement is working.
It looks like the refining and derefining work more or less, at least
without interrupting the execution. Nevertheless, there are still some
artifacts, like the loss of symmetry after some time or inefficient
derefining of the mesh. This could be caused by the solver, however.

The refinement criterion is computed using pressure now.
2008-12-13 15:08:18 -06:00
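A common pressure-based refinement criterion is a normalized jump of the pressure between neighbouring cells; the sketch below shows such a generic indicator, which is not necessarily the exact criterion used in this commit.

    ! A generic pressure-based refinement indicator: the largest normalized
    ! jump of p between neighbouring cells along a 1D slice. Not necessarily
    ! the criterion actually implemented here.
    function refinement_indicator(p, n) result(err)
      implicit none
      integer, intent(in) :: n
      real   , intent(in) :: p(n)
      real                :: err
      integer :: i
      err = 0.0
      do i = 2, n - 1
        err = max(err, abs(p(i+1) - p(i-1)) &
                       / max(p(i+1) + 2.0*p(i) + p(i-1), tiny(1.0)))
      end do
    end function refinement_indicator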
Grzegorz Kowal
d77dc8079e A few bugs fixed. Some reorganization of dimension variables. 2008-12-09 22:28:29 -06:00
Grzegorz Kowal
71a602156a New time integration method EULER. 2008-12-09 21:06:21 -06:00
Grzegorz Kowal
936420c599 New module to handle the boundary conditions.
The subroutine 'boundary' sweeps over all leaf blocks. For each block it
sweeps over its neighbors and updates the boundaries. This is still an
initial version; it supports only neighboring blocks at the same level
of refinement.
2008-12-09 20:37:31 -06:00
Grzegorz Kowal
6d38fe6476 Added CFL condition and calculation of the new time step.
A function to calculate the maximum speed in a block has been added.
This function is used to determine the global maximum speed, which is
then used to estimate the next time step.
2008-12-09 14:51:33 -06:00
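The new time step follows from the CFL condition, dt = CFL * dx / c_max; a minimal sketch with assumed names:

    ! New time step from the CFL condition; cfl is the Courant factor (< 1),
    ! dx the smallest cell size, cmax the largest speed over all blocks.
    ! Names are illustrative.
    function new_timestep(cfl, dx, cmax) result(dt)
      implicit none
      real, intent(in) :: cfl, dx, cmax
      real             :: dt
      dt = cfl * dx / max(cmax, tiny(1.0))   ! guard against division by zero
    end function new_timestep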
Grzegorz Kowal
7deada5a90 Updated comments in the evolution module. 2008-12-08 21:07:10 -06:00
Grzegorz Kowal
c48f30bd20 Update sweeps over direction calculating dU.
We compute the dFdx contribution along each direction and update the
total dU. Apart from that, I've added the generation of coordinate
variables, like dx, dxi, etc., in the mesh module. The next step is to
implement the HLL solver.
2008-12-08 20:03:01 -06:00
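Summing the per-direction flux differences into dU amounts, along x, to something like the following sketch; f holds the interface fluxes and all names are assumptions.

    ! Accumulate the x-direction contribution -dF/dx into the increment du.
    ! f(i) is the flux through the interface between cells i and i+1.
    ! Illustrative sketch only.
    subroutine add_dfdx(f, du, n, dxi)
      implicit none
      integer, intent(in)    :: n
      real   , intent(in)    :: f(0:n), dxi     ! dxi = 1/dx
      real   , intent(inout) :: du(n)
      integer :: i
      do i = 1, n
        du(i) = du(i) - dxi * (f(i) - f(i-1))
      end do
    end subroutine add_dfdx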
Grzegorz Kowal
8faac24a86 New module 'scheme' for calculating dU increment.
This new module contains subroutines to calculate the state vector
update using an approximate Riemann solver. The initial version contains
only a draft subroutine calculating the state vector increment dU.
2008-12-08 19:07:42 -06:00
Grzegorz Kowal
e0f211743b Runge-Kutta 2nd order time integration implemented.
In addition, I've done some fixes to the problem initialization, and I
defined new variables igrids, jgrids, kgrids, which specify the
dimensions of the block.
2008-12-08 16:21:59 -06:00
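The 2nd order Runge-Kutta update is presumably the standard two-stage scheme; below is a self-contained sketch, where increment() is only a placeholder for the scheme-module routine that computes dU (names and interfaces are assumptions).

    module rk2_sketch
      implicit none
    contains

      ! Placeholder for the scheme-module routine returning the increment dU;
      ! it returns zero here only so that the sketch is self-contained.
      subroutine increment(u, du, n)
        integer, intent(in)  :: n
        real   , intent(in)  :: u(n)
        real   , intent(out) :: du(n)
        du = 0.0 * u
      end subroutine increment

      ! Two-stage, 2nd order Runge-Kutta step for the state vector of one block.
      subroutine advance_rk2(u, n, dt)
        integer, intent(in)    :: n
        real   , intent(in)    :: dt
        real   , intent(inout) :: u(n)
        real :: u1(n), du(n)
        call increment(u , du, n)            ! dU from the current state
        u1 = u + dt * du                     ! predictor stage
        call increment(u1, du, n)            ! dU from the predicted state
        u  = 0.5 * (u + u1 + dt * du)        ! corrector stage
      end subroutine advance_rk2

    end module rk2_sketch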
Grzegorz Kowal
b8d3c791a1 Store all variables in one array U.
One array U, which is a field in the BLOCK structure, stores all
variables. The number of variables is determined by the nvars parameter.
To access each variable we now use a variable index, like idn, imx, imy,
imz, ien, etc.
2008-12-08 12:14:13 -06:00
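Keeping all variables in a single array addressed by named indices presumably looks along these lines; the parameter values, the variable set, the array ordering, and the routine below are illustrative assumptions.

    module variables_sketch
      implicit none
      ! Named indices into the first dimension of the array u; the values and
      ! the ordering are illustrative only.
      integer, parameter :: idn = 1, imx = 2, imy = 3, imz = 4, ien = 5
      integer, parameter :: nvars = 5
    contains
      ! Fill a block with a uniform state to show how the indices are used.
      subroutine fill_uniform(u, im, jm, km, dens)
        integer, intent(in)  :: im, jm, km
        real   , intent(in)  :: dens
        real   , intent(out) :: u(nvars, im, jm, km)
        u(idn    ,:,:,:) = dens              ! density
        u(imx:imz,:,:,:) = 0.0               ! momenta
        u(ien    ,:,:,:) = 1.0               ! total energy (arbitrary here)
      end subroutine fill_uniform
    end module variables_sketch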
Grzegorz Kowal
b331a539b5 New module 'evolution' for time integration.
A new module for the time integration has been added. This module
contains a set of subroutines to perform a one-step time integration of
each leaf block using the 2nd order Runge-Kutta method. More methods can
be added later. The time 't', the timestep 'dt' and the iteration 'n'
have been moved to this module as well.
2008-12-07 18:57:08 -06:00