2869 Commits

Grzegorz Kowal
1baa4b452c Use MPI by default on acrux.home. 2009-09-26 12:20:06 -03:00
Grzegorz Kowal
ad09c4b150 Add host file for daphne.iag.usp.br. 2009-09-26 12:16:11 -03:00
Grzegorz Kowal
75e8ec7b0e Merge branch 'mpi'
Conflicts:
	src/config.F90
2009-09-26 12:12:28 -03:00
Grzegorz Kowal
5074834673 Implement MPI auto balancing.
MESH STRUCTURE, MPI

 - implement auto balancing, which resends the blocks to the neighboring
   processors in order to keep the same number of data blocks on each
   processor
2009-09-26 01:51:44 -03:00
Grzegorz Kowal
03568cd974 Fix the initial balancing of the blocks.
MESH STRUCTURE, MPI

 - correct the initial division of the geometry, so each process is
   attached to the same number of blocks; blocks at the top level are
   allowed to be attached to different processors even if they are
   children of the same block
2009-09-26 00:39:41 -03:00
Grzegorz Kowal
ee413ffee6 After moving derefined blocks, set their cpu flags. 2009-09-26 00:14:23 -03:00
Grzegorz Kowal
8173d8e603 Move derefined blocks to the same processor.
MESH STRUCTURE, MPI

 - if the child blocks are selected to be derefined and lie on different
   processors, move them all to the processor of the parent block
2009-09-26 00:10:32 -03:00
Grzegorz Kowal
2a25d369c1 Improve the initial block division between processes.
BLOCK STRUCTURE, MPI

 - in the best division calculation, use the number of data blocks only,
   but if the blocks are at the top level and have the same parent,
   pack them together in the same process
2009-09-25 20:35:38 -03:00
Grzegorz Kowal
8104ee5775 Keep data blocks of leaves only.
MESH STRUCTURES

 - since blocks which are not leaves are not used to calculate the field
   evolution, we don't need to keep their data blocks allocated; these
   changes allocate the parent data block only for blocks which are
   derefined, and deallocate the data block of a block selected for
   refinement; in the case of many blocks and many refinement levels
   these modifications save a substantial amount of memory
2009-09-25 19:24:02 -03:00
Grzegorz Kowal
449975d9e7 Move bounds of the block to meta structure.
MESH STRUCTURE

 - move the bounds of the block (xmin, xmax, ...) from the data to the
   meta structure; in this case we can deallocate data blocks without
   losing the information about their bounds
2009-09-25 11:41:19 -03:00
Grzegorz Kowal
06188d42fa Clean up and comments in init_mesh() and refine_mesh(). 2009-09-22 19:32:57 -03:00
Grzegorz Kowal
24da24004f Improve "blast" problem, add initial domain generation.
PROBLEMS

 - implement the blast problem in a similar way to the one implemented
   in the GODUNOV code

 - generate the initial domain of 2x3 blocks at the lowest level; this
   should be generalized to any dimensions of blocks

MESH STRUCTURE

 - add a new variable rdims(1:3) specifying the dimensions of the
   initial block structure at the lowest level

 - improve the calculation of the effective resolution and of the
   spatial increments
2009-09-22 17:30:53 -03:00
Grzegorz Kowal
6198a0f5c5 Remove old block structures and related code. 2009-09-21 19:02:05 -03:00
Grzegorz Kowal
1ebcd50803 Fix nleafs counting and store it in HDF5 files. 2009-09-21 17:16:29 -03:00
Grzegorz Kowal
0924f90bd7 Implement mesh update for both uniprocessor and MPI runs.
MESH STRUCTURE

 - subroutine mesh_update() performs refinement/derefinement of blocks
   using the new structures of meta and data blocks; it is implemented
   to be MPI-friendly as well

 - update subroutine derefine_block() to use new structures of meta and
   data blocks

 - update subroutines restrict_block() and prolong_block() to use new
   meta and data structures
2009-09-21 16:57:34 -03:00
Grzegorz Kowal
4af8724e19 One more fix for the boundaries. 2009-09-21 01:15:08 -03:00
Grzegorz Kowal
25d035fd68 Fix prolongation of the boundary conditions.
BOUNDARY CONDITIONS

 - fix the prolongation of the boundary conditions
 - use the same subroutines for the boundaries in both the uniprocessor
   and MPI versions
2009-09-21 01:12:32 -03:00
Grzegorz Kowal
f399274574 Finish implementing proper handling of boundary conditions for MPI.
BOUNDARY CONDITIONS

 - using a list of info blocks, the implementation of boundary
   conditions for neighbor blocks lying on different processors is
   successfully completed; it is simpler and clearer than the previous
   implementation
2009-09-18 20:34:23 -03:00
Grzegorz Kowal
f8f6ea29a1 Continue work on the boundaries.
BOUNDARY CONDITIONS

 - blocks belonging to the same process can exchange boundaries now;
   blocks with neighbors from other processes still need to be updated

BLOCK STRUCTURE

 - a new structure block_info has been added which will be used for
   defining lists of blocks which have neighbors belonging to other
   processes; these lists will be later used to exchange block data
2009-09-18 17:43:48 -03:00
Grzegorz Kowal
ac188d32e4 Divide blocks among processors. Initial work on boundaries.
BLOCK STRUCTURE: MPI

 - divide initial block structure between all processors and remove
   non-local data blocks

DATA I/O

 - dump the total number of blocks and the number of data blocks

BOUNDARY CONDITIONS

 - initial work on the boundary condition for new structure of blocks
2009-09-14 19:15:21 -03:00
Grzegorz Kowal
bf5e332ce1 Evolve new list of data blocks.
TIME INTEGRATION

 - now update the solution using the new list of data blocks belonging
   to the current process only
2009-09-14 18:28:17 -03:00
Grzegorz Kowal
7aff1ecd90 Remove junk comments from subroutine init_mesh(). 2009-09-14 18:19:27 -03:00
Grzegorz Kowal
71d334aedb Make the subroutine refine_block() work correctly.
MESH STRUCTURE

 - with the new meta and data block structures the refinement of a block
   works correctly now
 - increase and decrease the number of blocks when they are allocated
   and deallocated, respectively
2009-09-13 22:58:55 -03:00
Grzegorz Kowal
7827e60f30 Rework subroutine refine_block() to use new meta and data blocks. 2009-09-11 21:52:18 -03:00
Grzegorz Kowal
900b8ec7e0 Initiate neighbors of the root block. 2009-09-10 18:23:18 -03:00
Grzegorz Kowal
202ace1438 Add a few helper subroutines to set different block fields.
MESH STRUCTURE

 - add subroutines metablock_setleaf(), metablock_unsetleaf() to set and
   unset the leaf flag of a meta block
 - add subroutine metablock_setconfig() to set the config of a block
 - add subroutine metablock_setlevel() to set the refinement level of a
   block
2009-09-10 18:15:30 -03:00
Grzegorz Kowal
a3cd1e38a7 Store data from a list of data blocks.
DATA I/O

 - instead of blocks from the old list, store data from the list of data
   blocks
2009-09-10 17:59:51 -03:00
Grzegorz Kowal
df071923c9 Add subroutine to associate meta and data blocks and to set bounds.
MESH STRUCTURE

 - add subroutine associate_blocks() to associate a pair of meta and
   data blocks
 - add subroutine datablock_setbounds() to set the geometry bounds for a
   given block
2009-09-10 17:46:36 -03:00
Grzegorz Kowal
10cef69f0c Add subroutines to allocate meta and data blocks.
MESH STRUCTURE

 - add subroutines allocate_metablock() and allocate_datablock()
   allocating new meta and data blocks in memory
 - add subroutines append_metablock() and append_datablock() appending
   the allocated block to the meta and data block lists
 - create root meta and data blocks in domain_default()
 - create two new pointers last_meta and last_data pointing to the last
   blocks in the meta and data block lists
2009-09-10 17:25:28 -03:00
Grzegorz Kowal
a7a71c3c53 Add initialization and termination of the meta and data blocks.
MESH STRUCTURE

 - initialize meta and data block lists in init_blocks()
 - deallocate and nullify meta and data block lists in clear_blocks()
 - new subroutine deallocate_metablock() for meta block deallocation
 - new subroutine deallocate_datablock() for data block deallocation
2009-09-10 16:18:59 -03:00
Grzegorz Kowal
95f4a2a3ad Create new structures for block meta and data info.
MESH STRUCTURE

 - two types of structures are created:
     block_meta - structure storing metadata of each block
     block_data - structure storing geometry and variable information

 - two lists corresponding to stored objects of block_meta and
   block_data:
     list_meta - the list of objects of block_meta type is shared and
                 synchronized between processors
     list_data - the list of objects of block_data type is private to
                 each processor

 - corresponding pointers to both structures are created too
2009-09-09 16:49:28 -03:00
Grzegorz Kowal
66b3cfc176 Extend bounds for criterion calculation and remove junk.
REFINEMENT

 - move the bounds for the criterion calculation by one cell out of the
   domain, so we catch the refinement before the gradients enter the
   block

MAINTENANCE

 - remove junk code from the update_mesh subroutine
2009-09-09 14:21:46 -03:00
Grzegorz Kowal
0f34a18cf6 Calculate number of blocks more precisely. 2009-08-22 13:51:40 -03:00
Grzegorz Kowal
3c74c9a8ea More work on the block refinement.
The initial block mesh refinement seems to be optimized now. However,
more work in the MPI version is required, e.g. refine blocks on all CPUs
until the number of blocks exceeds some number dependent on the number
of CPUs. From that point refine independently on each CPU.

The initial work on the mesh update has been done too, but the processes
do not exchange block information.
2009-05-18 22:46:19 +02:00
Grzegorz Kowal
1b718dea86 Selecting blocks for refinement/derefinement with MPI.
This next approach to select blocks for refinement/derefinement with MPI
seems to be working. Moreover, it is simple and efficient; however,
there might still be some room for improvement.
2009-01-08 00:08:52 -06:00
Grzegorz Kowal
c216adb422 First approach to implement MPI refinement.
It seems to be pretty complex. So far I collect blocks selected for
refinement which have neighbors lying on other processors. Using this I
set the neighbors from other processors for refinement if required. The
remaining things are the neighbor update after refinement and the MPI
version of derefinement.
2009-01-03 22:49:04 -06:00
Grzegorz Kowal
c686e301fc Fix boundary update and reduce the next time step over all CPUs.
The boundary conditions finally work when using MPI with an arbitrary
number of processors. This is, however, a dirty hack. It has to be done
in a better way later, since now it may be very tricky to generalize
the code to 3D.

The next time step should be reduced to the minimum value over all
processes. This has been added in subroutine 'evolution' now.

The block structure contains a new field %pos which specifies the
position of the child block in its parent.
2009-01-02 20:18:57 -06:00
Grzegorz Kowal
53d1a2e23d A bit better approach to update boundaries with MPI.
In this version first we determine the number of blocks to exchange for
each process, then we update this across all processes. Using this
information we send the IDs and levels of the blocks and their data to the
neighboring processes.
2009-01-01 23:17:49 -06:00
Grzegorz Kowal
240f63d7b1 Reorganize and optimize block exchange in boundary update.
In this version we send each block only once. This block is used to
update the boundaries of all its neighbors. We use three communications
for each exchange: the number of blocks to send, the IDs and levels of
the blocks which are sent, and the %u fields of all sent blocks packed
as one big buffer 'rbuf'.
2008-12-31 16:07:23 -06:00
Grzegorz Kowal
3f6321cfe0 Add initial velocity configuration for binaries problem. 2008-12-31 12:57:31 -06:00
Grzegorz Kowal
6c620a97cf Implement boundary block exchange between processes.
Now, in the boundary subroutine we collect information about the blocks
whose neighbors lie on a different processor. Using this information we
send the neighboring blocks and update the block boundaries. This is the
initial implementation, not very well optimized and containing some bugs.
2008-12-31 12:02:36 -06:00
Grzegorz Kowal
398ffd385c Prepare subroutine 'boundary' in order to support MPI.
There are several conditions for the boundaries. If the neighbor block
is on the same processor, proceed as usual. If it lies on a different
processor, prepare it for exchange. After exchanging blocks we need to
perform a similar update as if the neighbor lay on the same processor,
i.e. we need to perform copying, restriction, or prolongation.
2008-12-29 07:36:06 -06:00
Grzegorz Kowal
90182faf2b Merge branch 'mpi' of /home/kowal/Web/godunov-amr into mpi 2008-12-28 17:28:16 -06:00
Grzegorz Kowal
20837eb739 Divide the initial blocks between processes.
The blocks of the initial mesh are now divided between all processes.
The procedure also updates the %cpu field of all neighbors according to
the number of the process they run on.

The subroutine 'deallocate_block' takes care of the plist pointer now.
It also allows for the flexible removal of any block, even from the
middle of the list, by updating the %next and %prev fields of the
previous and next blocks, respectively.

In addition, we now store the %cpu field of each block in the HDF5 files
too.
2008-12-28 13:09:14 -06:00
Grzegorz Kowal
82843c43f4 Add TODO about the way of refinement with MPI. 2008-12-23 11:46:26 -06:00
Grzegorz Kowal
a07f970b5d Add TODO about the way of refinement with MPI. 2008-12-22 16:15:27 -06:00
Grzegorz Kowal
2719b0b451 Merge branch 'master' into mpi
Conflicts:

	src/mesh.F90
2008-12-22 16:08:51 -06:00
Grzegorz Kowal
0626f616cb Add problem specific domain initialization.
The domain initialization is performed in problem.F90 now. This will
allow a user to initialize their own domain according to the defined
problem. Since the initialization is problem-dependent, we don't need
the subroutine allocate_blocks anymore.
2008-12-22 15:34:02 -06:00
Grzegorz Kowal
5601bd6081 Add printing info about the MPI enabled. 2008-12-22 15:09:05 -06:00
Grzegorz Kowal
1f581e9610 Add new module for MPI handling.
A new 'mpitools' module has been added. This module contains subroutines
to initialize, deinitialize and handle MPI communication. This initial
version can now initialize and deinitialize parallelization. It does not
support full parallelization yet.
2008-12-22 14:57:31 -06:00