43 Commits

Author SHA1 Message Date
Grzegorz Kowal
c65688aaf4 EVOLUTION, GLM-MHD: update Psi after all RK substeps.
- the scalar potential is updated after each Runge-Kutta substep now;
2010-12-01 15:23:27 -02:00
Grzegorz Kowal
c08cf9e98b EVOLUTION: remove unused subroutines.
- remove subroutine advance(); this subroutine suits the GALERKIN
   approach and will be reintroduced when that method is implemented;
 - remove subroutine update_flux(); this subroutine is called from
   advance() and is not used in the GLM-MHD approach;
 - make GNU Fortran happy by adding () to the subroutine calls;
2010-12-01 15:14:07 -02:00
Grzegorz Kowal
c220a578b0 SCHEME: move cmax parameter from module 'evolution' to 'scheme'. 2010-12-01 13:13:27 -02:00
Grzegorz Kowal
9eef6af9b4 GLM-MHD: implement Psi decay due to a source term.
- the evolution of the scalar potential Psi is controlled by a
   dissipative source term; apply this update using its simple
   analytical solution;
2010-12-01 11:20:25 -02:00
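
The simple analytical solution mentioned in the commit above is presumably
that of the mixed GLM source term d(Psi)/dt = -(c_h**2/c_p**2) Psi
(Dedner et al. 2002), which integrates exactly to an exponential decay
over one substep. A minimal sketch of such an update; psi, ch and cp are
illustrative names, not the repository's actual variables:

    ! sketch only: psi is the block's scalar potential, ch the hyperbolic
    ! (maximum) speed, cp the parabolic (damping) speed of the GLM system
    subroutine source_psi(psi, ch, cp, dt)
      implicit none
      real, dimension(:,:,:), intent(inout) :: psi
      real                  , intent(in)    :: ch, cp, dt

      ! exact solution of d(psi)/dt = -(ch**2/cp**2) * psi over the step dt
      psi(:,:,:) = psi(:,:,:) * exp(- dt * (ch / cp)**2)

    end subroutine source_psi

If c_p is chosen through a dimensionless ratio, the exponent reduces to a
factor times c_h*dt/dx, which is the form often quoted in the literature.
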
Grzegorz Kowal
87b8225b55 GLM-MHD: take into account the maximum speed in update.
- the divergence of B propagates with the maximum speed c_h; take it
   into account while updating the solution for the scalar potential
   Psi;
2010-12-01 10:57:40 -02:00
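
In the GLM system the pair (B_n, Psi) forms a linear subsystem whose
characteristics travel at +/- c_h, which is how the maximum speed enters
the Psi update. A sketch of the standard two-wave interface fluxes
(Dedner et al. 2002), with hypothetical argument names:

    ! bnl/bnr, psil/psir: left/right normal field and scalar potential at
    ! the interface; ch: maximum speed; fbn/fpsi: their interface fluxes
    subroutine glm_flux(bnl, bnr, psil, psir, ch, fbn, fpsi)
      implicit none
      real, intent(in)  :: bnl, bnr, psil, psir, ch
      real, intent(out) :: fbn, fpsi

      fbn  = 0.5 * (psil + psir) - 0.5 * ch * (bnr - bnl)
      fpsi = 0.5 * ch * ch * (bnl + bnr) - 0.5 * ch * (psir - psil)

    end subroutine glm_flux
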
Grzegorz Kowal
166adf0e64 GLM-MHD: update global solution.
- update the magnetic field components and scalar potential in the
   Euler and RK2 methods, evolve_euler() and evolve_rk2() subroutines,
   respectively;
2010-12-01 10:53:21 -02:00
Grzegorz Kowal
71d20ced15 EVOLUTION: add subroutine to update the maximum speed.
- make the variable cmax global in the module 'evolution';
 - add a new subroutine update_maximum_speed() which updates the maximum
   speed cmax in the system iterating over all data blocks;
 - use the subroutine update_maximum_speed() in evolve();
2010-12-01 10:39:18 -02:00
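
A reduction of this kind typically walks the local list of data blocks
and keeps the largest wave speed found. A self-contained sketch of that
pattern; the block_data type, list_data pointer and the maxspeed()
placeholder below are simplified stand-ins, not the code's actual
structures:

    module speed_sketch
      implicit none
      type :: block_data                          ! simplified data block
        real, allocatable         :: u(:,:,:,:)   ! conserved variables
        type(block_data), pointer :: next => null()
      end type block_data
      type(block_data), pointer :: list_data => null()  ! head of the list
      real :: cmax = 0.0                   ! module-wide maximum speed
    contains
      subroutine update_maximum_speed()
        type(block_data), pointer :: pdata
        cmax  = tiny(cmax)                 ! positive floor so dt stays finite
        pdata => list_data
        do while (associated(pdata))       ! sweep over all data blocks
          cmax  = max(cmax, maxspeed(pdata%u))
          pdata => pdata%next
        end do
      end subroutine update_maximum_speed
      function maxspeed(u) result(c)
        ! placeholder estimate (|momentum|/density); the real code uses
        ! the fastest characteristic speed of the (M)HD system
        real, intent(in) :: u(:,:,:,:)
        real             :: c
        c = maxval(abs(u(2,:,:,:)) / u(1,:,:,:))
      end function maxspeed
    end module speed_sketch
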
Grzegorz Kowal
226c2a3fe9 Move variable indices to new module 'variables'.
VARIABLES

 - create new module 'variables' which stores references to variable
   indices; different objects related to variables will be stored in
   this module;
2010-12-01 09:25:30 -02:00
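
Such a module usually reduces to a set of named integer constants. A
minimal sketch; apart from the index names quoted elsewhere in this log
(idn, imx, imy, imz, ien), the values and layout are assumptions:

    module variables

      implicit none

      ! positions of the conserved variables inside the block array U
      integer, parameter :: idn = 1    ! density
      integer, parameter :: imx = 2    ! X-momentum
      integer, parameter :: imy = 3    ! Y-momentum
      integer, parameter :: imz = 4    ! Z-momentum
      integer, parameter :: ien = 5    ! total energy

      integer, parameter :: nvars = 5  ! number of variables

    end module variables
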
Grzegorz Kowal
aab8831bbf Update the copyright info and fix the length of separating lines. 2010-10-13 03:32:10 -03:00
Grzegorz Kowal
96ed98a636 Change the boundary subroutines names.
BOUNDARIES

 - change the name of subroutine boundary() to boundary_variables();
 - change the name of subroutine boundary_flux() to boundary_fluxes();
2010-09-19 14:33:57 +02:00
Grzegorz Kowal
7b48386610 Remove flux calculation from the subroutine update().
SCHEME

 - remove the flux calculation from the subroutine update() and all dependent
   subroutines;
2010-09-19 00:08:20 +02:00
Grzegorz Kowal
ed88bc2526 First step of implementation of time advance using new method.
BLOCK STRUCTURE

 - add new array to the data block structure to store the electromotive
   force components; these components are located at the centers of cell
   edges, so the CT update of the staggered magnetic field components
   will be easier;

EVOLUTION

 - add new subroutine advance() which performs several substeps in order
   to advance the solution in time by one step; the substeps are the
   update of the numerical fluxes, the flux boundaries, the time advance
   of the solution, the mesh structure update, the boundary update of
   the conserved variables, and finally the estimation of the new time
   step;

 - add new subroutine update_flux() to update the numerical fluxes
   stored in the data blocks;

SCHEME

 - add new subroutine numerical_flux() to calculate fluxes at the proper
   locations;

 - add new logical argument to HLL and HLLC in order to specify if the
   flux should be returned as a numerical flux or its derivative;
2010-07-27 19:26:15 -03:00
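
The essential content of advance() is the order of its substeps. A
compile-and-run sketch in which every call is an empty stub standing in
for the corresponding subroutine of the real code; update_flux(),
boundary_flux() and boundary() are names that appear in this log, the
remaining names and all signatures are assumptions:

    program advance_sketch

      implicit none
      real :: dt = 1.0e-3

      call update_flux()        ! numerical fluxes for every data block
      call boundary_flux()      ! exchange fluxes at block interfaces
      call update_solution(dt)  ! advance the conserved variables by dt
      call update_mesh()        ! refine / derefine blocks where needed
      call boundary()           ! fill ghost zones of conserved variables
      call new_time_step()      ! CFL-limited estimate of the next dt

    contains

      subroutine update_flux();   end subroutine
      subroutine boundary_flux(); end subroutine
      subroutine update_solution(dt)
        real, intent(in) :: dt
      end subroutine
      subroutine update_mesh();   end subroutine
      subroutine boundary();      end subroutine
      subroutine new_time_step(); end subroutine

    end program advance_sketch
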
Grzegorz Kowal
0b62ce8c35 Update the array of fluxes and send it out of the update.
EVOLUTION

 - add new output argument returning the updated fluxes for each block;

SCHEME

 - update the fluxes from the one dimensional fluxes returned by the
   Riemann solver;
2010-07-26 21:12:40 -03:00
Grzegorz Kowal
9c861ff96d Use TVD interpolation in boundary update, update mesh before time step.
BOUNDARY CONDITIONS

  - use TVD interpolation for prolongation of the boundary conditions

CONFIGURATION

  - set a lower limit on the number of ghost and domain cells

HOST FILES

  - host files should not be included in the revision control

INTERPOLATION

  - define all arrays as REAL, not REAL(KIND=8) since the precision of
    calculations is determined at the compilation stage

  - replace j0 and j1 indices with new more obvious il and ir

MESH

  - update mesh before calculating new time step
2010-03-14 15:40:24 -03:00
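
TVD-safe prolongation usually limits the coarse-cell slope before
filling the fine cells. A 1D sketch using a minmod-limited slope; the
limiter and geometry factors actually used by the code are not stated in
this log:

    ! ul, uc, ur: left neighbour, coarse cell, right neighbour
    ! ufl, ufr  : the two fine (child) cells replacing uc
    subroutine prolong_cell(ul, uc, ur, ufl, ufr)
      implicit none
      real, intent(in)  :: ul, uc, ur
      real, intent(out) :: ufl, ufr
      real              :: dl, dr, s

      dl = uc - ul
      dr = ur - uc
      ! minmod limiter: zero at extrema, smallest one-sided slope otherwise
      s  = 0.5 * (sign(1.0, dl) + sign(1.0, dr)) * min(abs(dl), abs(dr))

      ! the children sit at -1/4 and +1/4 of the coarse cell width
      ufl = uc - 0.25 * s
      ufr = uc + 0.25 * s

    end subroutine prolong_cell
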
Grzegorz Kowal
88761c9622 Implement FLUXCT integration of the induction equation.
SCHEME

  - implement Flux-CT scheme for the staggered magnetic field
    integration;

BLOCK STRUCTURE

  - use more space efficient storage of the variables, which means
    storing only staggered components of magnetic field; cell-centered
    components are calculated only when necessary;

EVOLUTION

  - remove loops in the field updates; operations are performed on the
    arrays;

BOUNDARY CONDITIONS

  - remove loops in bnd_copy(); operations are performed on the whole
    array now;

INTERPOLATION

  - subroutine magtocen() has been rewritten to avoid problems with the
    array allocation; the array of all variables is now passed as an
    argument; the subroutine uses indices for the face-centered and
    cell-centered magnetic field components internally;

MAKE

  - add flag defining Flux-CT scheme;

PROBLEM

  - use predefined array variables instead of allocatable ones;
2010-02-28 18:35:57 -03:00
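
In Flux-CT (Balsara & Spicer 1999) the edge-centred electromotive force
is the average of the four adjacent face-centred magnetic-flux
components, and the staggered field is then advanced with the discrete
curl of that EMF. A sketch of the z-component average; argument names
and the sign convention are illustrative:

    ! fx_by_*: x-flux of By on the two x-faces touching the edge (= -Ez)
    ! fy_bx_*: y-flux of Bx on the two y-faces touching the edge (= +Ez)
    pure function emf_z(fx_by_jm, fx_by_jp, fy_bx_im, fy_bx_ip) result(ez)
      implicit none
      real, intent(in) :: fx_by_jm, fx_by_jp, fy_bx_im, fy_bx_ip
      real             :: ez

      ez = 0.25 * (- fx_by_jm - fx_by_jp + fy_bx_im + fy_bx_ip)

    end function emf_z
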
Grzegorz Kowal
98849f3369 Prepare code for central differencing scheme for magnetic field. 2010-02-22 12:01:58 -03:00
Grzegorz Kowal
bbf998e4a0 Boundaries, interpolation, indices.
The rewritten boundaries allow for proper handling of boundaries between
blocks at different refinement levels. Prolongation and restriction of
the boundaries are improved now.

Rewritten interpolation for prolongation and restriction.

References to the variable indices are now assigned more consistently.
2010-02-11 23:30:46 -02:00
Grzegorz Kowal
beee9ee2d4 Initial support for the MHD equations.
VARIABLES

 - add indices for the magnetic field components, both face and cell
   centered

SOLVER, MHD

 - add support for the magnetohydrodynamic (MHD) equations to the
   subroutines cons2prim(), prim2cons()

 - add MHD flux and the fastest speed calculation in the subroutine
   fluxspeed()

 - include magnetosonic speed in the calculation of the maximum speed in
   the system required for estimation of the new time step

 - extend the HLL solver in subroutine hll() to support MHD

 - calculate the magnetic field update according to a CT scheme in the
   subroutine update()

INTERPOLATION

 - add subroutine magtocen() to interpolate cell centered magnetic field

EVOLUTION

 - add evolution of the magnetic field components in the evolve_euler()
   and evolve_rk2() time integration subroutines

 - also call the subroutine magtocen() in the right places

BOUNDARY CONDITIONS

 - support the magnetic field boundary update only for blocks at the
   same refinement level so far; later we need to include proper
   restriction and prolongation of the magnetic field to keep its
   divergence equal to zero

PROBLEMS, BLAST

 - extend the blast problem to include the initial magnetic field

IO, HDF5

 - write the magnetic field components to the HDF5 file
2009-10-28 00:12:18 -02:00
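
Including the magnetosonic speed in the time step estimate means using
the fast speed rather than the sound speed. A sketch of the standard
expression along x (with B in units where the magnetic permeability is
one); argument names are illustrative:

    pure function cfast_x(dn, pr, bx, by, bz, gamma) result(cf)
      implicit none
      real, intent(in) :: dn, pr, bx, by, bz, gamma
      real             :: cf, a2, ca2, cax2

      a2   = gamma * pr / dn               ! sound speed squared
      ca2  = (bx*bx + by*by + bz*bz) / dn  ! total Alfven speed squared
      cax2 = bx*bx / dn                    ! Alfven speed along x, squared

      cf = sqrt(0.5 * (a2 + ca2 + sqrt((a2 + ca2)**2 - 4.0 * a2 * cax2)))

    end function cfast_x
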
Grzegorz Kowal
0924f90bd7 Implement mesh update uniprocessors and MPI-friendly.
MESH STRUCTURE

 - subroutine mesh_update() performs refinement/derefinement of blocks
   using the new structures of meta and data blocks; it is implemented
   to be MPI friendly as well

 - update subroutine derefine_block() to use new structures of meta and
   data blocks

 - update subroutines restrict_block() and prolong_block() to use new
   meta and data structures
2009-09-21 16:57:34 -03:00
Grzegorz Kowal
25d035fd68 Fix prolongation of the boundary conditions.
BOUNDARY CONDITIONS

 - fix prolongation of the boundary conditions
 - use the same subroutines for the uniprocessor and MPI version
   boundaries
2009-09-21 01:12:32 -03:00
Grzegorz Kowal
f399274574 Finish implementing proper handling of boundary conditions for MPI.
BOUNDARY CONDITIONS

 - using a list of info blocks, the implementation of boundary
   conditions for neighbor blocks lying on different processors is
   complete; it is simpler and clearer than the previous implementation
2009-09-18 20:34:23 -03:00
Grzegorz Kowal
ac188d32e4 Divide blocks among processors. Initial work on boundaries.
BLOCK STRUCTURE: MPI

 - divide initial block structure between all processors and remove
   non-local data blocks

DATA I/O

 - dump the total number of blocks and the number of data blocks

BOUNDARY CONDITIONS

 - initial work on the boundary condition for new structure of blocks
2009-09-14 19:15:21 -03:00
Grzegorz Kowal
bf5e332ce1 Evolve new list of data blocks.
TIME INTEGRATION

 - the solution is now updated using the new list of data blocks
   belonging to the current process only
2009-09-14 18:28:17 -03:00
Grzegorz Kowal
7827e60f30 Rework subroutine refine_block() to use new meta and data blocks. 2009-09-11 21:52:18 -03:00
Grzegorz Kowal
c216adb422 First approach to implement MPI refinement.
It seems to be pretty complex. So far I collect blocks selected for
refinement which have neighbors lying on other processors. Using this I
set the neighbors from other processors for refinement if required. The
remaining tasks are the neighbor update after refinement and the MPI
version of derefinement.
2009-01-03 22:49:04 -06:00
Grzegorz Kowal
c686e301fc Fix boundary update and reduce the next time step over all CPUs.
The boundary conditions finally work when using MPI with an arbitrary
number of processors. This is, however, a dirty hack. It has to be done
in a better way later, since for now it may be very tricky to generalize
the code to 3D.

The next time step should be reduced to the minimum value over all
processes. This has been added in subroutine 'evolution' now.

The block structure contains a new field %pos which specifies the
position of the child block in its parent.
2009-01-02 20:18:57 -06:00
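
Reducing the next time step over all processes is a single collective
call. A sketch assuming double precision and the Fortran 'mpi' module;
the real code's variable names may differ:

    subroutine reduce_time_step(dt)

      use mpi

      implicit none
      double precision, intent(inout) :: dt
      double precision                :: dt_local
      integer                         :: ierr

      dt_local = dt
      ! every process ends up with the smallest dt found on any process
      call mpi_allreduce(dt_local, dt, 1, MPI_DOUBLE_PRECISION, MPI_MIN, &
                         MPI_COMM_WORLD, ierr)

    end subroutine reduce_time_step
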
Grzegorz Kowal
6c620a97cf Implement boundary block exchange between processes.
Now, in the boundary subroutine we collect information about the blocks
whose neighbors lie on a different processor. Using this information we
send the neighboring blocks and update the block boundaries. This is the
initial implementation, not very well optimized and containing some bugs.
2008-12-31 12:02:36 -06:00
Grzegorz Kowal
1aa75ac2ef Change the leaf field of the block to be logical.
This change should speed up the conditionals a bit, since a logical
comparison is faster than a character comparison.
2008-12-19 00:02:05 -06:00
Grzegorz Kowal
1206194deb Add support for the shapes in the domain.
Now, we can define any shape for a given problem inside the domain,
which is not updated during the evolution. This allows for using sources
of any kind in problem studies, such as the colliding winds in binary
stars.
2008-12-18 12:18:36 -06:00
Grzegorz Kowal
e56aee34c0 Play more with the refinements criterion. 2008-12-16 13:40:34 -06:00
Grzegorz Kowal
f673596f20 Refinement based on density. More cleanups. 2008-12-13 22:41:37 -06:00
Grzegorz Kowal
b39673bf85 Fixed generation of dx, dy, and dz.
The generation of dx, dy, dz for each level didn't include the base
grid, i.e. they were 2 times too large. Now it is fixed. There are also
plenty of minor corrections in different places.
2008-12-13 21:05:51 -06:00
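
With the base grid counted as level 1, the cell size at level l should
shrink by a factor 2**(l-1). A sketch of such a generation, assuming a
domain [xmin, xmax] with base resolution nx; the real code's variables
differ:

    subroutine generate_dx(xmin, xmax, nx, maxlev, dx)
      implicit none
      real   , intent(in)  :: xmin, xmax
      integer, intent(in)  :: nx, maxlev
      real   , intent(out) :: dx(maxlev)
      integer              :: l

      do l = 1, maxlev
        ! level 1 is the base grid; each finer level halves the cell size
        dx(l) = (xmax - xmin) / (real(nx) * 2.0**(l - 1))
      end do

    end subroutine generate_dx
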
Grzegorz Kowal
7bc74a39d9 Initial version of refinement/derefinement is working.
It looks like the refining and derefining work more or less, at least
without interrupting the execution. Nevertheless, there are still some
artifacts, like the loss of symmetry after some time or inefficient
derefining of the mesh. This could be caused by the solver, however.

The refinement criterion is computed using pressure now.
2008-12-13 15:08:18 -06:00
Grzegorz Kowal
d77dc8079e A few bugs fixed. Some reorganization of dimension variables. 2008-12-09 22:28:29 -06:00
Grzegorz Kowal
71a602156a New time integration method EULER. 2008-12-09 21:06:21 -06:00
Grzegorz Kowal
936420c599 New module to handle the boundary conditions.
The subroutine 'boundary' sweeps over all leaf blocks. For each block it
sweeps over its neighbors and performs the boundary update. This is
still an initial version; it supports only neighboring blocks at the
same refinement level.
2008-12-09 20:37:31 -06:00
Grzegorz Kowal
6d38fe6476 Added CFL condition and calculation of the new time step.
The function to calculate the maximum speed in the block has been added.
This function is used to determine the maximum speed globally, which is
then used to estimate the next time step.
2008-12-09 14:51:33 -06:00
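
The new time step follows from the CFL condition, dt = CFL * dx_min /
c_max. A one-line sketch with illustrative names:

    pure function new_time_step(cfl, dxmin, cmax) result(dt)
      implicit none
      real, intent(in) :: cfl    ! Courant factor
      real, intent(in) :: dxmin  ! smallest cell size on the mesh
      real, intent(in) :: cmax   ! fastest signal speed in the system
      real             :: dt

      dt = cfl * dxmin / max(cmax, tiny(cmax))

    end function new_time_step
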
Grzegorz Kowal
7deada5a90 Updated comments in the evolution module. 2008-12-08 21:07:10 -06:00
Grzegorz Kowal
c48f30bd20 Update sweeps over direction calculating dU.
We compute the dFdx contribution along each direction and update the
total dU. Apart from that, I've added the generation of coordinate
variables, like dx, dxi, etc., in the mesh module. The next step is to
implement the HLL solver.
2008-12-08 20:03:01 -06:00
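
Each directional sweep adds the negative flux difference to the total
increment, dU = dU - (F(i+1/2) - F(i-1/2)) / dx. A 1D sketch with
assumed index conventions:

    subroutine add_dfdx(f, dxi, du)
      implicit none
      real, intent(in)    :: f(0:)  ! interface fluxes, f(i) lives at i+1/2
      real, intent(in)    :: dxi    ! inverse cell size, 1/dx
      real, intent(inout) :: du(:)  ! accumulated increment per cell
      integer             :: i

      do i = 1, size(du)
        du(i) = du(i) - dxi * (f(i) - f(i-1))
      end do

    end subroutine add_dfdx
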
Grzegorz Kowal
8faac24a86 New module 'scheme' for calculating dU increment.
This new module contains subroutines to calculate state vector update
using an approximate Riemann solver. The initial version contains only
a draft subroutine calculating the state vector increment dU.
2008-12-08 19:07:42 -06:00
Grzegorz Kowal
e0f211743b Runge-Kutta 2nd order time integration implemented.
In addition, I've done some fixes to the problem initialization, and I
defined new variables igrids, jgrids, kgrids, which specify the
dimensions of the block.
2008-12-08 16:21:59 -06:00
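
Second order Runge-Kutta here is the usual predictor-corrector (Heun)
form: an Euler predictor followed by an average. A self-contained sketch
in which du() is a stub for the increment delivered by the scheme module:

    module rk2_sketch
      implicit none
    contains
      subroutine evolve_rk2(u, dt)
        real, intent(inout) :: u(:)
        real, intent(in)    :: dt
        real                :: u1(size(u))

        u1 = u + dt * du(u)                ! predictor: full Euler step
        u  = 0.5 * (u + u1 + dt * du(u1))  ! corrector: average the steps
      end subroutine evolve_rk2

      function du(u) result(r)
        ! stub increment; the real one comes from the Riemann solver fluxes
        real, intent(in) :: u(:)
        real             :: r(size(u))
        r = 0.0 * u
      end function du
    end module rk2_sketch
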
Grzegorz Kowal
b8d3c791a1 Store all variables in one array U.
One array U, which is a field in the BLOCK structure, stores all
variables. The number of variables is determined by the nvars parameter.
To access each variable we now use a variable index, like idn, imx, imy,
imz, ien, etc.
2008-12-08 12:14:13 -06:00
Grzegorz Kowal
b331a539b5 New module 'evolution' for time integration.
A new module for the time integration has been added. This module
contains a set of subroutines to perform one step time integration of
each leaf block using 2nd order Runge-Kutta method. More methods can be
added later. Time 't', timestep 'dt' and iteration 'n' have been moved
to this module as well.
2008-12-07 18:57:08 -06:00