Update README.md

Signed-off-by: Grzegorz Kowal <grzegorz@amuncode.org>
Commit 1664a37e10 (parent c85831a386) by Grzegorz Kowal, 2021-12-01 21:36:29 -03:00

following features are already implemented:
* hydrodynamic and magnetohydrodynamic set of equations (HD and MHD),
* both classical and special relativity cases for the above equations,
* Cartesian coordinate system (so far),
* uniform and adaptive mesh generation and update,
* a number of time integration methods, from 2nd to 5th order Runge-Kutta
methods: Strong Stability Preserving and Embedded (with error control),
* high-order reconstructions: from 2nd to 9th order WENO and MP, both explicit
and compact methods; the 2nd order TVD interpolation supports a number of
limiters,
* Riemann solvers of Roe- and HLL-types (HLL, HLLC, and HLLD),
* standard boundary conditions: periodic, open, reflective, hydrostatic, etc.
* turbulence driving using the Alvelius or Ornstein–Uhlenbeck methods,
* viscous and resistive source terms,
* support for passive scalars,
* data stored in an internal XML+binary or the HDF5 format,
* data integrity of the XML+binary format guaranteed by XXH64 or XXH3 hashes,
* support for Zstandard, LZ4, and LZMA compression in the XML+binary format,
* support for Deflate, Zstandard, and ZFP compression in the HDF5 format,
* an easy and consistent Python interface to read snapshots in both formats,
* MPI parallelization,
* completely written in Fortran 2008,
* simple Makefile or CMake for building the code executable,
* minimal requirements: only a Fortran compiler and Python are needed to
prepare, run, and analyze your simulations.
Requirements
============
compiler version 9.0 or newer.
- [NVIDIA HPC](https://developer.nvidia.com/hpc-sdk) compiler version 21.9.
Warning: I could not make it run with the included MPI libraries.
* Recommended, although optional, [OpenMPI](https://www.open-mpi.org/) for
parallel runs, tested with version 1.8 or newer.
* Optional [CMake](https://cmake.org) version 3.16 or newer, for advanced
compilation option selection.
* Optionally, the XML-binary format compression requires:
the [LZ4 library](https://lz4.github.io),
[Zstandard library](http://facebook.github.io/zstd/), or
[LZMA library](https://tukaani.org/xz/), and the
[XXHASH library](http://www.xxhash.com/) for data-integrity hashes.
* Optional [HDF5 libraries](https://www.hdfgroup.org/solutions/hdf5/), tested
with version 1.10 or newer. The code now uses the new XML-binary snapshot
format; however, if you still want to use the older HDF5 snapshot format, you
will need these libraries.
* Deflate compression is natively supported by the HDF5 libraries; optionally,
additional compression formats are supported through the filters:
[HDF5Plugin-Zstandard](https://github.com/gkowal/HDF5Plugin-Zstandard),
[H5Z-ZFP](https://github.com/LLNL/H5Z-ZFP).
Recommended compilation (using CMake)
=====================================
1. Clone the AMUN source code:
- from GitLab:
`git clone https://gitlab.com/gkowal/amun-code.git`,
- from Bitbucket:
`git clone https://grzegorz_kowal@bitbucket.org/amunteam/amun-code.git`,
- or unpack the archive downloaded from the
[Downloads](https://bitbucket.org/amunteam/amun-code/downloads/) page.
2. Create the build directory, e.g. `mkdir amun-build && cd amun-build`.
3. Call `ccmake <path to amun-code>`, e.g. `ccmake ..`, and press 'c' once.
Set available options, if necessary. Press 'c' once again, and 'g' to
generate makefiles.
4. Compile the code using `make`. The executable file **amun.x** should be
available in a few moments.
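The CMake steps above can be sketched end to end as a shell session (using the
clone URL and build-directory name given above):

```shell
# Fetch the sources and create an out-of-tree build directory.
git clone https://gitlab.com/gkowal/amun-code.git
mkdir amun-build && cd amun-build

# Configure interactively: press 'c' to configure, adjust options,
# 'c' again, then 'g' to generate the makefiles.
ccmake ../amun-code

# Build; the executable amun.x appears in the build directory.
make
```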
Alternative compilation (using `make`)
======================================
1. Clone the AMUN source code:
- from GitLab:
`git clone https://gitlab.com/gkowal/amun-code.git`,
- from Bitbucket:
`git clone https://grzegorz_kowal@bitbucket.org/amunteam/amun-code.git`,
- or unpack the archive downloaded from the
[Downloads](https://bitbucket.org/amunteam/amun-code/downloads/) page.
2. Go to directory **build/hosts/** and copy file **default** to a new file
named exactly as your host name, i.e. `cp default $HOSTNAME`.
3. Customize your compiler and compilation options in your new host file.
4. Go up to the directory **build/** and copy file **make.default** to
**make.config**.
5. Customize compilation time options in **make.config**.
6. Compile sources by typing `make` in directory **build/**. The executable file
**amun.x** should be created there.
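Steps 1–6 above can be sketched as a shell session (assuming `$HOSTNAME` is
set, as the instructions assume):

```shell
# Fetch the sources and enter the build directory.
git clone https://gitlab.com/gkowal/amun-code.git
cd amun-code/build

# Per-host compiler settings and compile-time options.
cp hosts/default "hosts/$HOSTNAME"
cp make.default make.config

# Edit hosts/$HOSTNAME and make.config as needed, then compile;
# the executable amun.x is created in this directory.
make
```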
where N is the number of processors to use.
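The run command itself is elided in this excerpt; a typical MPI invocation
(illustrative only, assuming the OpenMPI launcher from the requirements) would
be:

```shell
# Launch amun.x on 8 processes (here N = 8), from the run directory.
mpirun -n 8 ./amun.x
```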
Reading data
============
By default, the code uses the new XML+binary snapshot data format. The
parameter **snapshot_format**, set to either **xml** or **h5**, controls which
file format is used.
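For illustration only (the actual parameter-file syntax is not shown in this
excerpt), selecting the HDF5 format could look like:

```
snapshot_format = "h5"
```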
In order to read the data produced in this format, you will need to install the
Python module AmunPy included in subdirectory **python/amunpy**. Simply go to
this directory and run
`python ./setup.py install --user`
to install the module in your home directory.
Import the module in your python script using
`from amunpy import *`,
and then initiate the interface to the XML+binary snapshots using
`snapshot = AmunXML(<path to the snapshot directory>)`
or to the HDF5 files using
`snapshot = AmunH5(<path to any HDF5 snapshot file>)`
and read the desired variables using the function
`var = snapshot.dataset(<variable>)`.
The function **dataset()** returns the requested variable mapped onto the
uniform mesh as a NumPy array.
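Putting the reading steps together, a minimal Python sketch (the snapshot path
and the variable name `dens` are illustrative, not taken from this document):

```python
# Read one variable from an AMUN XML+binary snapshot with amunpy.
from amunpy import AmunXML

# Path to a snapshot directory written by the run (illustrative).
snapshot = AmunXML('run/snapshots/00010')

# dataset() maps the requested variable onto a uniform mesh (NumPy array).
var = snapshot.dataset('dens')
print(var.shape, var.min(), var.max())
```

For HDF5 snapshots, construct `AmunH5(<path to any HDF5 snapshot file>)`
instead; the `dataset()` call is the same for both formats.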