# Module pretraining
This module pretrains the ice flow emulator (`iflo_emulator`) on a glacier catalog to improve its performance in glacier forward runs. Pretraining can be computationally intensive, taking a few hours to complete. This module should be executed on its own, without involving any other IGM modules. Below is an example of a parameter file:
```yaml
# @package _global_

defaults:
  - override /inputs: []
  - override /processes: [pretraining, iceflow]
  - override /outputs: []

processes:
  iceflow:
    Nz: 10
    multiple_window_size: 8
    nb_layers: 16
    nb_out_filter: 32
    network: cnn
    new_friction_param: True
    retrain_emulator_lr: 0.0001
    solve_nbitmax: 1000
    solve_stop_if_no_decrease: False
  pretraining:
    epochs: 1000
    data_dir: data/surflib3d_shape_100
    soft_begining: 1000
    min_slidingco: 0.01
    max_slidingco: 0.4
    min_arrhenius: 5
    max_arrhenius: 400
```
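The steps above can be sketched as a minimal shell workflow. The experiment folder layout (`experiment/`), the file name `params.yaml`, and the `igm_run` entry point are assumptions here; adapt them to your IGM installation (the config contents are abbreviated, use the full parameter file from above):

```shell
# Place the parameter file where Hydra can pick it up (folder name assumed).
mkdir -p experiment
cat > experiment/params.yaml <<'EOF'
# @package _global_
defaults:
  - override /inputs: []
  - override /processes: [pretraining, iceflow]
  - override /outputs: []
EOF

# Launch the module standalone (entry-point name is an assumption;
# commented out because it requires a working IGM installation):
# igm_run +experiment=params
```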
To run this module, you first need access to a glacier catalog. A dataset of a glacier catalog (mountain glaciers) commonly used for pretraining IGM emulators is available here: .
After downloading (or generating your own dataset), organize the `surflib3d_shape_100` folder into two subfolders: `train` and `test`.
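As a minimal sketch, the resulting catalog layout looks like the following. The `train`/`test` split comes from the text; the individual glacier folder names are hypothetical placeholders:

```shell
# Expected layout of the glacier catalog after splitting
# (glacier folder names below are hypothetical examples).
mkdir -p data/surflib3d_shape_100/train/glacier_001
mkdir -p data/surflib3d_shape_100/test/glacier_002
find data/surflib3d_shape_100 -type d | sort
```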
## Parameters
Default configuration file (`pretraining.yaml`):

```yaml
pretraining:
  data_dir: "/path/to/tfrecords/"
  batch_size: 1
  epochs: 1000
  experiment_name: "name_of_model"
  loss_type: "huber"
  learning_rate: 0.0001
  out_dir: "/path/to/save/models/"
  resume: false
```
Description of the parameters:
| Name | Description | Default value | Units |
|---|---|---|---|
| `data_dir` | Directory of the data of the glacier catalog. | /path/to/tfrecords/ | — |
| `batch_size` | Batch size. | 1 | — |
| `epochs` | Number of epochs. | 1000 | — |
| `experiment_name` | Name of the pretraining experiment (model). | name_of_model | — |
| `loss_type` | Loss function used for training. | huber | — |
| `learning_rate` | Learning rate. | 0.0001 | — |
| `out_dir` | Directory where trained models are saved. | /path/to/save/models/ | — |
| `resume` | Resume pretraining from a previously saved model. | False | — |
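The default `loss_type` is the Huber loss, which behaves quadratically for small residuals and linearly for large ones, making it less sensitive to outliers than a pure squared error. A minimal NumPy sketch of this loss (the transition point `delta=1.0` is an illustrative assumption, not a value taken from IGM):

```python
import numpy as np

def huber_loss(y_true, y_pred, delta=1.0):
    """Mean Huber loss: quadratic for |residual| <= delta, linear beyond.

    delta is the quadratic-to-linear transition point (assumed value).
    """
    r = np.abs(y_true - y_pred)
    quadratic = 0.5 * r**2
    linear = delta * (r - 0.5 * delta)
    return np.mean(np.where(r <= delta, quadratic, linear))

# Small residual (0.5) falls in the quadratic regime, large one (3.0)
# in the linear regime, so the outlier contributes linearly, not squared.
print(huber_loss(np.array([0.0, 0.0]), np.array([0.5, 3.0])))  # → 1.3125
```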