Add info about adaptivity tuning parameters #131

Merged: 6 commits, Oct 10, 2024
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -2,6 +2,7 @@

## latest

- Add information about adaptivity tuning parameters https://github.com/precice/micro-manager/pull/131
- Put computation of counting active steps inside the adaptivity variant `if` condition https://github.com/precice/micro-manager/pull/130

## v0.5.0
10 changes: 8 additions & 2 deletions docs/configuration.md
@@ -116,11 +116,17 @@
Parameter | Description
`type` | Set to either `local` or `global`. The type of adaptivity matters when the Micro Manager is run in parallel. `local` means comparing micro simulations within a local partitioned domain for similarity. `global` means comparing micro simulations from all partitions, so over the entire domain.
`data` | List of names of data which are to be used to calculate if micro-simulations are similar or not. For example `["temperature", "porosity"]`.
`history_param` | History parameter $$ \Lambda $$, set as $$ \Lambda \geq 0 $$.
`coarsening_constant` | Coarsening constant $$ C_c $$, set as $$ C_c < 1 $$.
`refining_constant` | Refining constant $$ C_r $$, set as $$ C_r \geq 0 $$.
`coarsening_constant` | Coarsening constant $$ C_c $$, set as $$ 0 \leq C_c < 1 $$.
`refining_constant` | Refining constant $$ C_r $$, set as $$ 0 \leq C_r < 1 $$.
`every_implicit_iteration` | If True, adaptivity is calculated in every implicit iteration. <br> If False, adaptivity is calculated once at the start of the time window and then reused in every implicit time iteration.
`similarity_measure`| Similarity measure to be used for adaptivity. Can be either `L1`, `L2`, `L1rel` or `L2rel`. By default, `L1` is used. The `rel` variants calculate the respective relative norms. This parameter is *optional*.
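
A minimal sketch of how such pairwise similarity distances could be computed for one data field is given below. This is a hypothetical helper, not the Micro Manager's internal API, and the `rel` variants are assumed here to normalize by the magnitude of the compared simulations:

```python
import numpy as np

def pairwise_similarity_dists(data, measure="L1"):
    """Pairwise distances between micro simulations for one data field.

    data: array of shape (n_sims, dim), one row per micro simulation.
    Returns an (n_sims, n_sims) matrix of distances. Illustrative sketch only.
    """
    data = np.asarray(data, dtype=float)
    diff = data[:, None, :] - data[None, :, :]  # shape (n_sims, n_sims, dim)
    if measure in ("L1", "L1rel"):
        dists = np.sum(np.abs(diff), axis=-1)
    elif measure in ("L2", "L2rel"):
        dists = np.sqrt(np.sum(diff ** 2, axis=-1))
    else:
        raise ValueError(f"Unknown similarity measure: {measure}")
    if measure.endswith("rel"):
        # Assumption: the relative variants divide each pairwise distance
        # by the larger magnitude of the two simulations being compared.
        norms = np.linalg.norm(data, axis=-1)
        scale = np.maximum(norms[:, None], norms[None, :])
        dists = dists / np.where(scale > 0.0, scale, 1.0)
    return dists
```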

The primary tuning parameters for adaptivity are the history parameter $$ \Lambda $$, the coarsening constant $$ C_c $$, and the refining constant $$ C_r $$. Their effects can be interpreted as follows (the history parameter's role is sketched in code after this list):

- Higher values of the history parameter $$ \Lambda $$ reduce the influence of the previous timestep's similarity measures on the similarity measure, and hence the adaptivity state, in the current timestep.
- Higher values of the coarsening constant $$ C_c $$ imply that more active simulations from the previous timestep will remain active in the current timestep.
- Higher values of the refining constant $$ C_r $$ imply that fewer inactive simulations from the previous timestep will become active in the current timestep.
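
To make the role of $$ \Lambda $$ concrete, here is a minimal sketch of a history-weighted update of the similarity distances. The exponential weighting shown is an assumption for illustration, not necessarily the Micro Manager's exact formula:

```python
import numpy as np

def update_similarity_dists(dists_prev, new_diffs, dt, history_param):
    """Blend the previous timestep's similarity distances with new data
    differences. A larger history_param shrinks exp(-history_param * dt),
    so the previous timestep contributes less to the current adaptivity
    state. Illustrative sketch only.
    """
    weight = np.exp(-history_param * dt)
    return weight * dists_prev + dt * new_diffs
```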

An example of an adaptivity configuration (values are illustrative; the block is shown here without its enclosing configuration keys) is

```json
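{
    "type": "local",
    "data": ["temperature", "porosity"],
    "history_param": 0.5,
    "coarsening_constant": 0.3,
    "refining_constant": 0.4,
    "every_implicit_iteration": "True",
    "similarity_measure": "L1"
}
```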