This is the official GitHub repository for the paper [Lee D., Aune E., Langet N., and Eidsvik J., 2022, "Ensemble and self-supervised learning for improved classification of seismic signals from the Åknes rockslope"]. The paper's methods are implemented in PyTorch.
🚀 Run the self-supervised learning (SSL) of VNIbCReg:
```shell
python sslearn.py --config_ssl configs/ssl.yaml
```
🚀 Run i) training from scratch, or ii) fine-tuning, or iii) linear evaluation:
```shell
python finetune.py --config_ft configs/finetune.yaml
```
🚀 Run the implementation of the relevant previous study [1], as described in our paper [2]:
```shell
python original_implementation/train.py --configs original_implementation/config.yaml
```
Note:
- `configs/ssl.yaml` is set to run VNIbCReg by default.
- `configs/finetune.yaml` is set to train a model (ResNet34) from scratch on 80% of the dataset.
- `original_implementation/config.yaml` is set to train a model (ResNet34) from scratch on 80% of the dataset, following the implementation of the relevant previous study [1].
- All configuration files can be edited to suit your experimental purpose.
- The dataset is downloaded automatically the first time you run any of the above commands. The downloading code is defined in `utils/dataset.py`. Alternatively, the dataset can be found at https://doi.org/10.6084/m9.figshare.21340101.v1.
The configuration files already contain basic inline comments. Below, we explain the parameters whose meaning might not be immediately clear.
- `model_params`
  - `in_channels`: input channel size
  - `out_size_enc`: output channel size from the encoder
  - `proj_hid`: hidden size of the projector in VIbCReg
  - `proj_out`: output size of the projector in VIbCReg
  - `backbone_type`: type of backbone. Available backbone types are `ResNet18Encoder`, `ResNet34Encoder`, `ResNet50Encoder`, and `ResNet152Encoder`. (For `ResNet50Encoder` and `ResNet152Encoder`, `out_size_enc` needs to be 2048.)
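As a sketch, a `model_params` block consistent with the description above might look like the following (the values are illustrative, not the repository defaults):

```yaml
model_params:
  in_channels: 1             # input channel size
  out_size_enc: 512          # encoder output size; must be 2048 for ResNet50/152 encoders
  proj_hid: 4096             # hidden size of the VIbCReg projector
  proj_out: 4096             # output size of the VIbCReg projector
  backbone_type: ResNet34Encoder
```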
- `exp_params`
  - `LR`: learning rate
  - `model_save_ep_period`: period (in epochs) for saving a model
- `trainer_params`
  - `gpus`: indices of the GPUs to be used
- `dataset`
  - `num_workers`: `num_workers` in `torch.utils.data.DataLoader`
  - `return_single_spectrogram_train`: use of the ensemble prediction during training
  - `return_single_spectrogram_test`: use of the ensemble prediction during testing
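Putting the remaining blocks together, the corresponding part of `configs/ssl.yaml` might look like this (the values are illustrative, and the testing-time flag name is assumed to be `return_single_spectrogram_test`):

```yaml
exp_params:
  LR: 0.001                 # learning rate
  model_save_ep_period: 10  # save a checkpoint every 10 epochs

trainer_params:
  gpus: [0]                 # indices of the GPUs to use

dataset:
  num_workers: 4                          # passed to torch.utils.data.DataLoader
  return_single_spectrogram_train: false  # controls ensemble prediction during training
  return_single_spectrogram_test: false   # controls ensemble prediction during testing
```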
Parameters already described above are not repeated below.
- `load_encoder`
  - `ckpt_fname`: `none` for training from scratch; `checkpoints/some_saved_model.ckpt` for loading a pretrained encoder
- `exp_params`
  - `freeze_encoders`: if `True`, the encoder is frozen to conduct linear evaluation
  - `freeze_bn_stat_train`: if `True`, `encoder.eval()` is set during training so that the running (averaged) statistics of BatchNorm are used. This is useful when fine-tuning on a very small dataset.
- `dataset`
  - `train_data_ratio`: can be adjusted to 0.05 or 0.1 for the fine-tuning evaluation in a small-dataset regime
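Based on the parameters above, the three modes of `finetune.py` could be selected with settings along these lines (the checkpoint path is a placeholder; the three YAML documents are alternatives for the same keys):

```yaml
# i) training from scratch
load_encoder:
  ckpt_fname: none
exp_params:
  freeze_encoders: false
---
# ii) fine-tuning a pretrained encoder
load_encoder:
  ckpt_fname: checkpoints/some_saved_model.ckpt
exp_params:
  freeze_encoders: false
---
# iii) linear evaluation of a pretrained encoder
load_encoder:
  ckpt_fname: checkpoints/some_saved_model.ckpt
exp_params:
  freeze_encoders: true
```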
- `backbone_with_clf_type`: available backbone types are `AlexNet`, `ResNet18`, and `ResNet34`
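The effect of `train_data_ratio` can be sketched as follows; this is a hypothetical illustration, and the repository's actual split logic in `utils/dataset.py` may differ:

```python
# Hypothetical illustration of what `train_data_ratio` controls:
# the fraction of labelled samples kept for training.
import random


def split_by_ratio(indices, train_data_ratio, seed=0):
    """Shuffle sample indices and keep `train_data_ratio` of them for training."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = list(indices)
    rng.shuffle(shuffled)
    n_train = round(len(shuffled) * train_data_ratio)
    return shuffled[:n_train], shuffled[n_train:]
```

For example, with 1000 labelled events and `train_data_ratio: 0.1`, 100 events are kept for training and 900 for evaluation.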
[1] Langet et al., 2022, "Automated classification of seismic signals recorded on the Åknes rockslope, Western Norway, using a Convolutional Neural Network"
[2] Lee et al., 2022, "Ensemble and self-supervised learning for improved classification of seismic signals from the Åknes rockslope"