
Conversation


@pdziekan pdziekan commented May 5, 2022

Two grids: the normal grid and a second, finer (refined) grid.
Condensation is done on the refined grid; everything else on the normal grid.

Abandoned for now.
Instead, for grid refinement in UWLCM, all microphysics are done on the refined grid (hopefully without changes to libcloudph++).

Input arrays present on both grids (refined input is needed only if n_ref>1, i.e. when refinement is enabled):
th, rv, rhod, p

Internal arrays on both grids:
dv, eta, count_ijk, count_mom, count_num

Internal arrays on the refined grid only:
T, RH

Internal particle housekeeping data required for both grids:
ijk, sorted_ijk
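Since ijk and sorted_ijk would be kept for both grids, each particle effectively gets two cell indices. A minimal 1D sketch of how the refined index could relate to the normal one, assuming a uniform refinement factor n_ref; the struct and helper names below are illustrative, not libcloudph++ API:

```cpp
#include <cassert>

// Hypothetical 1D grid description; x0 and dx follow the names used in
// the PR text, n_ref is the refinement factor (n_ref > 1 = refinement on).
struct grid_t {
  double x0;    // domain origin
  double dx;    // normal-grid cell size
  int    n_ref; // refinement factor
};

// cell index on the normal grid (assumes x >= x0)
inline int i_normal(const grid_t &g, double x) {
  return static_cast<int>((x - g.x0) / g.dx);
}

// cell index on the refined grid: dx_ref = dx / n_ref
inline int i_refined(const grid_t &g, double x) {
  const double dx_ref = g.dx / g.n_ref;
  return static_cast<int>((x - g.x0) / dx_ref);
}
```

In this sketch, i_refined / n_ref recovers i_normal, which is what would keep housekeeping on the two grids consistent.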

TODO:

  • dx_ref/dy_ref/dz_ref might be smaller than x0/y0/z0 -> trouble e.g. in hskpng_ijk (?)
  • refined dv calculation
  • calculation of moments and output on the refined grid
  • multi_CUDA and refinement -> distmem_opts; how should refined cells be divided between different GPUs?
  • will refinement work with aerosol relaxation and/or sources?
  • calculate refined ijk
  • moments calculated on normal and refined grids
  • sync in and out (refined-grid input required only if n_ref>1)
  • unit tests
  • condensation code
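For the "refined dv calculation" item, under the simplest assumption of a uniform factor n_ref in every dimension, the refined cell volume is the normal cell volume divided by n_ref cubed. A sketch (it ignores boundary cells that may be smaller than a full dx, as flagged in the first TODO item; the function name is hypothetical):

```cpp
#include <cassert>

// Refined-grid cell volume for a uniform refinement factor n_ref
// applied in all three dimensions: dv_ref = (dx*dy*dz) / n_ref^3.
inline double dv_refined(double dx, double dy, double dz, int n_ref) {
  const double dv = dx * dy * dz; // normal-grid cell volume
  const double n  = static_cast<double>(n_ref);
  return dv / (n * n * n);
}
```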

Currently working on the non-MPI version; things that will need to be fixed for MPI:

  • libmpdata++ evenly divides refined cells between MPI domains
  • a refined cell at the domain boundary is cut in half by the MPI decomposition; some averaging is needed when updating th/rv after condensation
  • nx_ref
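For the averaging mentioned above: one hypothetical way to combine the two partial th/rv updates of a refined cell cut by the MPI boundary is a volume-weighted mean. The weighting scheme is an assumption for illustration, not what libmpdata++ or libcloudph++ actually does:

```cpp
#include <cassert>

// Combine the two partial updates of a field (e.g. th or rv) in a refined
// cell split between two MPI domains, weighting each half by the volume
// it covers. Purely illustrative; not libcloudph++ API.
inline double merge_halves(double val_left, double val_right,
                           double v_left, double v_right) {
  return (val_left * v_left + val_right * v_right) / (v_left + v_right);
}
```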

@pdziekan
Contributor Author

Obsolete: currently, with fractal refinement, all microphysics are done on the refined grid, hence no changes in libcloudph++ are needed.

@pdziekan pdziekan changed the title grid refinement compatibility [obsolete] grid refinement compatibility Jan 24, 2023
