This repository provides a reference implementation of Elastic Weight Consolidation (EWC)
for continual learning experiments on Split CIFAR-100. It includes training scripts,
an explicit unlearning workflow, helper utilities for dataset splits, and example
experiment outputs in `results/`.
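The core idea of EWC is to penalize changes to parameters that were important for previous tasks, weighted by a diagonal approximation of the Fisher information. The repository's actual implementation lives in `src/ewc/`; the following is a minimal, framework-agnostic sketch of the quadratic penalty (function and variable names are illustrative, not taken from this codebase):

```python
def ewc_penalty(params, star_params, fisher, lam=1.0):
    """Quadratic EWC penalty: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    params:      current parameter values (flat list of floats)
    star_params: parameter values saved after training the previous task
    fisher:      diagonal Fisher estimates, one per parameter
    lam:         regularization strength (hyperparameter)
    """
    return 0.5 * lam * sum(
        f * (p - s) ** 2 for p, s, f in zip(params, star_params, fisher)
    )

# Drift on a high-Fisher (important) weight is penalized more heavily
# than the same drift on a low-Fisher weight.
high = ewc_penalty([1.0, 0.0], [0.0, 0.0], fisher=[10.0, 0.1])
low = ewc_penalty([0.0, 1.0], [0.0, 0.0], fisher=[10.0, 0.1])
```

In the full method this penalty is added to the new task's loss, so gradient descent trades off new-task accuracy against drift on the weights the Fisher estimate marks as important for earlier tasks.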
- `src/ewc/` – Python package with training and unlearning entry points (`main.py`, `ewc_unlearning.py`) and supporting modules (`utils.py`, `strategies.py`).
- `results/` – experiment output summaries and logs.
- `requirements.txt` – Python dependencies used for development and experiments.
- Create and activate a virtual environment:

```bash
python -m venv .venv
source .venv/bin/activate
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Make the package importable and run the training example:

```bash
export PYTHONPATH=src
python -m ewc.main
```

- Run the unlearning workflow:

```bash
export PYTHONPATH=src
python -m ewc.ewc_unlearning
```

By default the code downloads CIFAR-100 into `./data`. If you prefer a different location, update the `root` argument in `src/ewc/utils.py` or set up a data directory and point the scripts to it.
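Split CIFAR-100 partitions the 100 classes into disjoint tasks (commonly 10 tasks of 10 classes each), which the model then learns sequentially. The repository's split logic lives in `src/ewc/utils.py`; a minimal illustrative sketch of such a partition (the function name here is hypothetical) looks like:

```python
def split_cifar100_classes(num_classes=100, num_tasks=10):
    """Partition class labels 0..num_classes-1 into disjoint, equal-size tasks."""
    if num_classes % num_tasks != 0:
        raise ValueError("num_tasks must evenly divide num_classes")
    per_task = num_classes // num_tasks
    return [
        list(range(t * per_task, (t + 1) * per_task))
        for t in range(num_tasks)
    ]

tasks = split_cifar100_classes()
# tasks[0] covers labels 0-9, tasks[9] covers labels 90-99
```

Each task's class list is then used to filter the CIFAR-100 training and test sets into per-task dataloaders.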
- Tests: A minimal smoke test is included under `tests/` that verifies the package import. Run tests with `pytest` (install `pytest` in your environment).
- Packaging: Basic `pyproject.toml` and `setup.cfg` are provided for local installation using `pip install -e .`.
- Style and linting: The project does not currently enforce a style guide; adding `pre-commit` and `flake8`/`ruff` is recommended for collaborators.
- The implementation is intended as a research reference and is not production-hardened. Review the training loops, device placement, and checkpointing before using it for large-scale experiments.
- Possible next steps include adding CI (GitHub Actions) for tests and linting, and a more complete example notebook demonstrating an end-to-end experiment run.