This repo contains the code for data generation, training, and inference for the paper RNA: Relightable Neural Assets.
- The Blender scripts here require a custom build of Blender (see below).
- Only Linux builds are supported.
- uv is used for dependency management.
This repository uses uv for dependency management. Install it following the instructions at https://docs.astral.sh/uv/getting-started/installation/.
Get the code:
```
git clone https://github.com/adobe-research/relightable-neural-assets.git
cd relightable-neural-assets
```

Install dependencies:
```
uv venv --python 3.10  # create a virtual environment with Python 3.10
uv sync
```

You can either activate the virtual environment first and then run scripts directly:
```
source .venv/bin/activate
```

or run scripts with uv:
```
uv run --no-sync <script>
```

A custom build of the Blender bpy module is required as an additional dependency. It contains the necessary changes to the Cycles renderer to support generating ray queries with different per-pixel light directions, as described in the paper.
A pre-built Python wheel is provided in the releases section of the repo. First download it, then install it with uv:
```
source .venv/bin/activate
uv pip install <path_to_wheel>
```
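To verify that the wheel installed correctly, a minimal import check (this assumes the custom build exposes Blender's standard Python API, as a bpy module normally does):

```python
# Sanity check: import the custom bpy build and print its version.
import bpy

print(bpy.app.version_string)
```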
Generating data requires a Blender scene with the AOVs set up correctly. You can use the scripts/setup_blender_scene.py script to set up the AOVs. Some examples are provided in the data/scenes directory (you'll need to unzip the files). Example configs are in the config/generate directory.

```
python scripts/generate_dataset.py --mode "rna" --config "config/generate/generate_Lego.yml"
```
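The output is an h5 file. If you want to peek at its contents, a quick sketch using h5py (the path is a placeholder, and the internal group/dataset layout depends on your generation config):

```python
import h5py

# Placeholder path: point this at an h5 file produced by generate_dataset.py.
with h5py.File("<path_to_dataset.h5>", "r") as f:
    # Print every group/dataset name, plus the shape for datasets.
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))
```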
You need both a training and a validation dataset to run training on an asset; they are the h5 files generated in the previous step. Example configs for the large and real-time training variants are provided in the config/train directory.

```
python scripts/train.py --config "config/train/lego_vis.yml"
```
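Training produces a checkpoint file that the inference and deployment scripts below consume. If you want to inspect one, a sketch assuming standard PyTorch serialization (an assumption; the exact checkpoint layout is not documented here):

```python
import torch

# Placeholder path; assumes the checkpoint is a standard torch.save() archive.
ckpt = torch.load("<path_to_ckpt_file>", map_location="cpu", weights_only=False)
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))  # e.g. model weights, optimizer state, metadata
```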
You can use the following script to run inference (rendering) on a trained model. The inputs to the model are obtained by rendering with bpy, following a process similar to the one used to generate the dataset. Some example configs are in the config/render directory; the YAML files reference JSON files for frame information.

```
python scripts/render.py --checkpoint <path_to_ckpt_file> --mode <btf/pointlight>
```

You can export the model weights to an npz file to use in your downstream rendering pipeline.
Example usage:
```
python scripts/deploy.py <path_to_ckpt_file> --verbose --module NeuralSurfaceTriplaneModule --output <path_to_output_dir> --data_config <path_to_data_config> --dataset <path_to_dataset>
```
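On the consuming side, the exported archive can be read with plain NumPy (a minimal sketch; the file name here is a placeholder, and the array names inside the archive depend on the exported module):

```python
import numpy as np

# Placeholder path: point this at the npz produced by deploy.py.
weights = np.load("<path_to_output_dir>/weights.npz")
for name in weights.files:
    print(name, weights[name].shape, weights[name].dtype)
```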