Module iceflow
This IGM module models ice flow dynamics in 3D using a Convolutional Neural Network (CNN) trained as a Physics-Informed Neural Network (PINN), as described in [1]. Specifically, the CNN is trained to minimize the energy associated with high-order ice flow equations during the time iterations of a glacier evolution model. Consequently, it serves as a computationally efficient alternative to traditional solvers, capable of handling diverse ice flow regimes. See the IGM technical paper for further details [1].
[1] Concepts and capabilities of the Instructed Glacier Model 3.X.X, Jouvet et al.
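To make the idea concrete, the sketch below shows, in a heavily simplified form, what "training a CNN to minimize an ice-flow energy" means: at each retraining step the network weights are updated so that the predicted velocity field lowers a physical cost, without any labelled training data. The network, the `energy` function, and all shapes below are placeholders and assumptions, not IGM's actual implementation.

```python
import tensorflow as tf

# Conceptual sketch only: the real IGM fields, energy terms, and network differ.
ny, nx, Nz = 64, 64, 10
fieldin = tf.random.normal((1, ny, nx, 5))  # stand-in for thk, usurf, arrhenius, slidingco, dX

# Tiny stand-in CNN mapping the input fields to horizontal velocities (u, v) per layer.
cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", activation=tf.keras.layers.LeakyReLU()),
    tf.keras.layers.Conv2D(2 * Nz, 3, padding="same"),
])

def energy(velocity):
    # Placeholder for the high-order ice-flow energy (shear + sliding + gravity terms).
    return tf.reduce_mean(velocity ** 2)

optimizer = tf.keras.optimizers.Adam(learning_rate=2e-5)

# One retraining cycle: update the network weights so that the predicted
# velocity field decreases the physical energy (no labelled data involved).
for _ in range(10):
    with tf.GradientTape() as tape:
        cost = energy(cnn(fieldin))
    grads = tape.gradient(cost, cnn.trainable_variables)
    optimizer.apply_gradients(zip(grads, cnn.trainable_variables))
```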
Pre-trained emulators are provided by default. However, one may start from scratch by setting processes.iceflow.emulator.name="". The key parameters to consider in this case are:
- Physical parameters:
"processes.iceflow.physics.init_slidingco": 0.045 # Init slid. coeff. ($Mpa y^{1/3} m^{-1/3}$)
"processes.iceflow.physics.init_arrhenius": 78.0 # Init Arrhenius cts ($Mpa^{-3} y^{-1}$)
"processes.iceflow.physics.exp_glen": 3 # Glen's exponent
"processes.iceflow.physics.exp_weertman": 3 # Weertman's sliding law exponent
- Numerical parameters for the vertical discretization:
"processes.iceflow.numerics.Nz": 10 # number of vertical layers
"processes.iceflow.numerics.vert_spacing": 4.0 # 1.0 for equal vertical spacing, 4.0 otherwise
Note that in the special case of \(Nz=2\), the ice velocity profile from the bottom to the top of the ice is assumed to vary polynomially following the Shallow Ice Approximation (SIA) formula. In the case of a single layer \(Nz=1\), the ice flow is assumed to be vertically uniform, and the ice flow model reduces to the Shallow Shelf Approximation (SSA).
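For illustration, here is a minimal sketch of a stretched vertical coordinate controlled by a spacing parameter, in the spirit of numerics.vert_spacing; the exact stretching used by IGM may differ. With a value of 1.0 the layers are equally spaced, while larger values concentrate layers near the bed.

```python
import numpy as np

# Illustrative sketch of a stretched vertical coordinate (not copied from IGM):
# vert_spacing = 1.0 gives equally spaced layers; larger values put more
# layers near the glacier bed (zeta = 0) than near the surface (zeta = 1).
def vertical_levels(Nz=10, vert_spacing=4.0):
    zeta = np.arange(Nz) / (Nz - 1)  # uniform levels in [0, 1]
    return (zeta / vert_spacing) * (1.0 + (vert_spacing - 1.0) * zeta)

print(np.diff(vertical_levels(10, 1.0)))  # constant layer thicknesses
print(np.diff(vertical_levels(10, 4.0)))  # thin layers near the bed, thick near the surface
```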
- Learning rate and frequency of retraining:
"processes.iceflow.emulator.lr": 2.0e-05 # Learning rate for the retraining of the emulator
"processes.iceflow.emulator.retrain_freq": 10 # Retrain the emulator every N time steps
While this module was designed primarily for deep-learning emulation, it is also possible to use the solver (processes.iceflow.method='solved') instead of the default emulator (processes.iceflow.method='emulated'), or to run both together (processes.iceflow.method='diagnostic') to assess the emulator against the solver. The most important parameters for solving are:
"processes.iceflow.solver.step_size": 1.0 # Step size of the optimizer used by the solver
"processes.iceflow.solver.nbitmax": 100 # Maximum number of iterations of the solver
One may choose between a 2D Arrhenius factor and a 3D Arrhenius factor by setting the parameter processes.iceflow.dim_arrhenius to 2 or 3, respectively. The 3D option is required for the enthalpy model.
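In practice, the difference is the shape of the arrhenius field handled by the module. The snippet below is only illustrative; the assumed (Nz, ny, nx) layout for the 3D case is a guess, not taken from the IGM source code.

```python
import numpy as np

# Illustrative only: the (Nz, ny, nx) layout for the 3D case is an assumption.
ny, nx, Nz = 120, 180, 10

# dim_arrhenius = 2: one Arrhenius value per grid cell, constant over depth.
arrhenius_2d = np.full((ny, nx), 78.0)

# dim_arrhenius = 3: one value per vertical layer and grid cell, e.g. as
# produced by an enthalpy (temperature / water content) model.
arrhenius_3d = np.full((Nz, ny, nx), 78.0)
```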
When treating very large arrays, retraining must be done sequentially in a patch-wise manner due to memory constraints. The size of the patch is controlled by the parameter processes.iceflow.emulator.framesizemax=750.
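Conceptually, sequential patch-wise retraining amounts to looping over sub-windows of the domain and retraining on one patch at a time. The helper below is a simplified sketch (no overlap handling), assuming square patches of side framesizemax; it is not the actual IGM patching code.

```python
import numpy as np

# Simplified sketch of sequential patch-wise processing, assuming square
# patches of side `framesizemax`; not the actual IGM patching logic.
def iter_patches(field, framesizemax=750):
    ny, nx = field.shape
    for j0 in range(0, ny, framesizemax):
        for i0 in range(0, nx, framesizemax):
            yield field[j0:j0 + framesizemax, i0:i0 + framesizemax]

thk = np.zeros((2000, 3000))  # a large ice-thickness array
for patch in iter_patches(thk):
    # retrain the emulator on this patch only, then move on to the next one,
    # so that GPU memory usage stays bounded
    print(patch.shape)
```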
Contributors: G. Jouvet
Config Structure
```yaml
iceflow:
  method: emulated
  force_max_velbar: 0.0
  physics:
    energy_components: ['shear', 'sliding_weertman', 'gravity']
    sliding_law: ''
    gravity_cst: 9.81
    ice_density: 910.0
    init_slidingco: 0.0464
    init_arrhenius: 78.0
    enhancement_factor: 1.0
    exp_glen: 3.0
    exp_weertman: 3.0
    regu_glen: 1.0e-05
    regu_weertman: 1.0e-10
    dim_arrhenius: 2
    thr_ice_thk: 0.1
    min_sr: 1.0e-20
    max_sr: 1.0e+20
    force_negative_gravitational_energy: false
    cf_eswn: []
  numerics:
    Nz: 10
    vert_spacing: 4.0
    staggered_grid: 1
    vert_basis: "Lagrange"
  solver:
    step_size: 1.0
    nbitmax: 100
    stop_if_no_decrease: true
    optimizer: Adam
    lbfgs: false
    save_cost: ''
    plot_sol: false
  emulator:
    fieldin:
      - thk
      - usurf
      - arrhenius
      - slidingco
      - dX
    retrain_freq: 10
    lr: 2.0e-05
    lr_init: 0.0001
    lr_decay: 0.95
    warm_up_it: -10000000000.0
    nbit_init: 1
    nbit: 1
    framesizemax: 750
    split_patch_method: "sequential"
    pretrained: true
    name: ''
    save_model: false
    exclude_borders: 0
    optimizer: Adam
    optimizer_clipnorm: 1.0
    optimizer_epsilon: 1.0e-07
    save_cost: ''
    output_directory: ''
    plot_sol: false
    pertubate: false
    network:
      architecture: cnn
      multiple_window_size: 0
      activation: LeakyReLU
      nb_layers: 16
      nb_blocks: 4
      nb_out_filter: 32
      conv_ker_size: 3
      dropout_rate: 0
      weight_initialization: glorot_uniform
      cnn3d_for_vertical: false
```
Parameters
Name | Type | Units | Description | Default Value |
---|---|---|---|---|
method | string | \( dimless \) | Type of iceflow: it can be emulated (default), solved, or in diagnostic mode | emulated |
force_max_velbar | float | \( m y^{-1} \) | Permits artificially upper-bounding the velocities; active only if > 0 | 0.0 |
physics.energy_components | list | \( dimless \) | List of energy components to compute, it can be shear, sliding_weertman, gravity, or all | ['shear', 'sliding_weertman', 'gravity'] |
physics.sliding_law | string | \( dimless \) | Type of sliding law; for now it can be empty or weertman. If you use this, make sure to remove the corresponding energy component from the list above. WARNING: this is not working well yet, do not use it. | |
physics.gravity_cst | float | \( m s^{-2} \) | Gravitational constant | 9.81 |
physics.ice_density | float | \( kg m^{-3} \) | Density of ice | 910.0 |
physics.init_slidingco | float | \( MPa y^m m^{-m} \) | Initial sliding coefficient slidingco | 0.0464 |
physics.init_arrhenius | float | \( MPa^{-n} y^{-1} \) | Initial Arrhenius factor arrhenius | 78.0 |
physics.enhancement_factor | float | \( dimless \) | Enhancement factor multiplying the arrhenius factor | 1.0 |
physics.exp_glen | float | \( dimless \) | Glen's flow law exponent | 3.0 |
physics.exp_weertman | float | \( dimless \) | Weertman's law exponent | 3.0 |
physics.regu_glen | float | \( dimless \) | Regularization parameter for Glen's flow law | 1e-05 |
physics.regu_weertman | float | \( dimless \) | Regularization parameter for Weertman's sliding law | 1e-10 |
physics.dim_arrhenius | integer | \( dimless \) | Dimension of the arrhenius factor (horizontal 2D or 3D) | 2 |
physics.thr_ice_thk | float | \( m \) | Threshold ice thickness for computing the strain rate | 0.1 |
physics.min_sr | float | \( y^{-1} \) | Minimum strain rate | 1e-20 |
physics.max_sr | float | \( y^{-1} \) | Maximum strain rate | 1e+20 |
physics.force_negative_gravitational_energy | boolean | \( dimless \) | Force the gravitational energy term to be negative | False |
physics.cf_eswn | list | \( dimless \) | Forces a calving front at the domain border on the sides given in the list (E, S, W, N) | [] |
numerics.Nz | integer | \( dimless \) | Number of grid points for the vertical discretization | 10 |
numerics.vert_spacing | float | \( dimless \) | Parameter controlling the discretization density to get more points near the bed than near the surface. 1.0 means equal vertical spacing. | 4.0 |
numerics.staggered_grid | integer | \( dimless \) | | 1 |
numerics.vert_basis | string | \( dimless \) | Basis used for the vertical discretization (e.g. Lagrange) | Lagrange |
solver.step_size | float | \( dimless \) | Step size for the optimizer used when solving Blatter-Pattyn in solver mode | 1.0 |
solver.nbitmax | integer | \( dimless \) | Maximum number of iterations for the optimizer used when solving Blatter-Pattyn in solver mode | 100 |
solver.stop_if_no_decrease | boolean | \( dimless \) | This permits to stop the solver if the energy does not decrease | True |
solver.optimizer | string | \( dimless \) | Type of Optimizer for the solver | Adam |
solver.lbfgs | boolean | \( dimless \) | Select the L-BFGS optimizer instead of the Adam optimizer | False |
solver.save_cost | string | \( dimless \) | | |
solver.plot_sol | boolean | \( dimless \) | Plot the solution of the solver | False |
emulator.fieldin | list | \( dimless \) | Input fields of the iceflow emulator | ['thk', 'usurf', 'arrhenius', 'slidingco', 'dX'] |
emulator.retrain_freq | integer | \( dimless \) | Frequency at which the emulator is retrained, 0 means never, 1 means at each time step, 2 means every two time steps, etc. | 10 |
emulator.lr | float | \( dimless \) | Learning rate for the retraining of the emulator | 2e-05 |
emulator.lr_init | float | \( dimless \) | Initial learning rate for the retraining of the emulator | 0.0001 |
emulator.lr_decay | float | \( dimless \) | Decay learning rate parameter for the training | 0.95 |
emulator.warm_up_it | float | \( dimless \) | Warm-up number of iterations allowing intense initial training | -10000000000.0 |
emulator.nbit_init | integer | \( dimless \) | Number of iterations done at the first time step for the retraining of the emulator | 1 |
emulator.nbit | integer | \( dimless \) | Number of iterations done at each time step for the retraining of the emulator | 1 |
emulator.framesizemax | integer | \( dimless \) | Size of the patch used for retraining the emulator, this is useful for large size arrays, otherwise the GPU memory can be overloaded | 750 |
emulator.split_patch_method | string | \( dimless \) | Method to split the domain into patches for the emulator: sequential or parallel. Sequential is useful for large arrays, otherwise the GPU memory can be overloaded (see emulator.framesizemax); parallel applies the patching strategy to the whole domain at once. | sequential |
emulator.pretrained | boolean | \( dimless \) | Do we take a pretrained emulator or start from scratch? | True |
emulator.name | string | \( dimless \) | Directory path of the pretrained deep-learning ice flow model; taken from the built-in library if the string is empty | |
emulator.save_model | boolean | \( dimless \) | Save the iceflow emulator at the end of the simulation | False |
emulator.exclude_borders | integer | \( dimless \) | This is a quick fix of the border issue, otherwise the physics-informed emulator shows zero velocity at the border | 0 |
emulator.optimizer | string | \( dimless \) | Type of Optimizer for the emulator | Adam |
emulator.optimizer_clipnorm | float | \( dimless \) | If set, the gradient of each weight is individually clipped so that its norm is no higher than this value. | 1.0 |
emulator.optimizer_epsilon | float | \( dimless \) | A small constant for numerical stability for the Adam optimizer | 1e-07 |
emulator.save_cost | string | \( dimless \) | ||
emulator.output_directory | string | \( dimless \) | ||
emulator.plot_sol | boolean | \( dimless \) | Permits plotting the solution of the emulator at each time step | False |
emulator.pertubate | boolean | \( dimless \) | Perturb the input fields during retraining | False |
emulator.network.architecture | string | \( dimless \) | This is the type of network, it can be cnn or unet | cnn |
emulator.network.multiple_window_size | integer | \( dimless \) | If a U-net, this forces window size to be a multiple of 2**N | 0 |
emulator.network.activation | string | \( dimless \) | Activation function, it can be lrelu, relu, tanh, sigmoid, etc. | LeakyReLU |
emulator.network.nb_layers | integer | \( dimless \) | Number of layers in the CNN | 16 |
emulator.network.nb_blocks | integer | \( dimless \) | Number of block layers in the U-net | 4 |
emulator.network.nb_out_filter | integer | \( dimless \) | Number of output filters in the CNN | 32 |
emulator.network.conv_ker_size | integer | \( dimless \) | Size of the convolution kernel | 3 |
emulator.network.dropout_rate | float | \( dimless \) | Dropout rate in the CNN | 0 |
emulator.network.weight_initialization | string | \( dimless \) | Weight initialization scheme: glorot_uniform, he_normal, or lecun_normal | glorot_uniform |
emulator.network.cnn3d_for_vertical | boolean | \( dimless \) | | False |