Note: torchlogix is based on the original difflogic package (https://github.com/Felix-Petersen/difflogic/), which serves as the official implementation of the NeurIPS 2022 paper "Deep Differentiable Logic Gate Networks"
(Paper @ ArXiv) by Felix Petersen et al. As that repository is no longer maintained, torchlogix extends difflogic with bug fixes and new concepts such as learnable thermometer thresholding and the Walsh-decomposition-based reparametrization of differentiable logic gates with 4 instead of 16 parameters, as described in "WARP-LUTs - Walsh-Assisted Relaxation for Probabilistic Look Up Tables" (Paper @ ArXiv). It also implements convolutional logic gate layers as described in the NeurIPS 2024 paper "Convolutional Logic Gate Networks" (Paper @ ArXiv).
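For intuition on the 4-parameter idea: any 2-input Boolean gate can be written in the Walsh (Fourier) basis with 4 coefficients. The following is a minimal, self-contained sketch of this identity (illustrative only; it is not the library's internal parameterization):

import itertools
import torch

# Any 2-input gate f(a, b) can be expressed with 4 Walsh coefficients w:
#   f(a, b) = w0 + w1*A + w2*B + w3*A*B,  where A = 2a - 1 and B = 2b - 1.
# Example: AND corresponds to w = (1/4, 1/4, 1/4, 1/4).
w = torch.tensor([0.25, 0.25, 0.25, 0.25])

for a, b in itertools.product([0.0, 1.0], repeat=2):
    A, B = 2 * a - 1, 2 * b - 1
    f = w[0] + w[1] * A + w[2] * B + w[3] * A * B
    print(f"AND({int(a)}, {int(b)}) = {f.item():.0f}")  # reproduces the AND truth table

Relaxing these 4 coefficients to trainable real values yields a differentiable gate with 4 instead of 16 parameters.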
The goal behind differentiable logic gate networks is to solve machine learning tasks by learning combinations of logic
gates, i.e., logic gate networks. As the choice of logic gate is conventionally non-differentiable, relaxations are applied
to allow training logic gate networks with gradient-based methods. Specifically, torchlogix combines real-valued logic
and a continuously parameterized approximation of the network. This allows learning which logic gate (out of 16 possible)
is optimal for each neuron. The resulting discretized logic gate networks achieve fast inference speeds, e.g., beyond a
million images of MNIST per second on a single CPU core.
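As a rough sketch of this relaxation (simplified relative to the actual implementation; the gate relaxations below follow the probabilistic real-valued logic described above, but names and details are illustrative):

import torch

# Real-valued relaxations of a few of the 16 two-input gates (inputs in [0, 1])
GATES = [
    lambda a, b: a * b,              # AND
    lambda a, b: a + b - a * b,      # OR
    lambda a, b: a + b - 2 * a * b,  # XOR
    lambda a, b: 1 - a * b,          # NAND
]

# One trainable weight per candidate gate; a softmax yields a differentiable mixture
w = torch.randn(len(GATES), requires_grad=True)
a, b = torch.rand(8), torch.rand(8)
out = sum(p * g(a, b) for p, g in zip(torch.softmax(w, dim=0), GATES))
# For discretization, each neuron is fixed to its argmax gate.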
torchlogix is a Python 3.6+ and PyTorch 1.9.0+ based library for training and inference with logic gate networks.
pip install torchlogix          # basic
pip install "torchlogix[dev]"   # with dev tools

Full documentation is available at: TorchLogix Documentation
- Installation Guide - Detailed installation instructions
- Quick Start - Get started with TorchLogix in minutes
- Logic Gates Guide - Understanding the 16 Boolean operations
- Examples - Complete training examples and tutorials
- API Reference - Comprehensive API documentation
cd docs
pip install -r requirements.txt
make html
open _build/html/index.html  # macOS

This library provides a framework for both training and inference with logic gate networks. The following is an example definition of a differentiable logic gate network model for the MNIST dataset:
import torch
from torchlogix.layers import LogicDense, LogicConv2d, OrPooling, GroupSum, LearnableThermometerThresholding
model = torch.nn.Sequential(
    LogicConv2d(in_dim=28, num_kernels=64, receptive_field_size=5),
    OrPooling(kernel_size=2, stride=2, padding=0),
    LogicConv2d(in_dim=12, num_kernels=256, receptive_field_size=3),
    torch.nn.Flatten(),
    LogicDense(256 * 10 * 10, 16_000),
    LogicDense(16_000, 16_000),
    LogicDense(16_000, 16_000),
    GroupSum(k=10, tau=30)
)

This model receives a (1, 28, 28)-dimensional input and returns k=10 values corresponding to the 10 classes of MNIST.
The model may be trained, e.g., with a torch.nn.CrossEntropyLoss, similar to how other neural network models are trained in PyTorch.
Notably, the Adam optimizer (torch.optim.Adam) should be used for training and the recommended default learning rate is 0.01 instead of 0.001.
Finally, it is also important to note that the number of neurons in each layer is much higher for logic gate networks compared to
conventional MLP neural networks because logic gate networks are very sparse.
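As a minimal training sketch (dummy tensors stand in for MNIST; it assumes the model above accepts standard (B, C, H, W) float tensors in [0, 1]):

import torch

loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # note: 0.01, not the usual 0.001

x = torch.rand(128, 1, 28, 28)    # dummy batch standing in for MNIST images
y = torch.randint(0, 10, (128,))  # dummy class labels

model.train()  # keep the differentiable relaxation active during training
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()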
In the following, we provide more in-depth examples for each of these modules:
layer = LogicDense(
    in_dim=784,               # number of inputs
    out_dim=16_000,           # number of outputs
    device='cuda',            # the device (cuda / cpu)
    connections='random',     # the method for initializing the connections
    parametrization='raw',    # 'raw' for the classic 16 weights per node (one per gate), or one of two 4-weight parametrizations ('anf' or 'walsh')
    weight_init='residual',   # weight initialization scheme ('random' or 'residual')
    forward_sampling='soft'   # method for the forward pass: 'soft', 'hard', 'gumbel_soft', or 'gumbel_hard'
)
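Applying the layer then follows the usual PyTorch pattern (a sketch, assuming inputs in [0, 1] on the layer's device):

x = torch.rand(32, 784, device='cuda')  # batch of 32 inputs in [0, 1]
y = layer(x)                            # expected shape: (32, 16_000)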
layer = LogicConv2d(
    in_dim=28,                # dimension of the input (can be a two-tuple for non-square shapes)
    channels=3,               # number of channels of the input (1 for grey-scale)
    num_kernels=32,           # number of convolutional kernels (filters)
    tree_depth=3,             # depth of the binary logic trees that make up each kernel
    receptive_field_size=3,   # comparable to the kernel size of ordinary convolutions (can be a two-tuple for non-square shapes)
    padding=0,
    ...                       # all other keyword arguments as for the dense layer above
)
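Analogously, the convolutional layer can be applied to image-shaped inputs (a sketch, assuming a standard (B, C, H, W) layout and a layer on CUDA; the exact output shape depends on the layer configuration):

x = torch.rand(32, 3, 28, 28, device='cuda')  # batch of 32 three-channel images in [0, 1]
y = layer(x)                                  # spatial size shrinks as for an ordinary 3x3 convolution with padding=0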
At this point, it is important to discuss the options for device and the provided implementations. Specifically, torchlogix provides two implementations (both of which work with PyTorch):
- 'python': a substantially slower implementation that is easy to understand, as it is implemented directly in Python with PyTorch and does not require any C++ / CUDA extensions. It is compatible with device='cpu' and device='cuda'.
- 'cuda': a faster implementation based on custom CUDA extensions; it is only compatible with device='cuda'.
To aggregate output neurons into a lower dimensional output space, we can use GroupSum, which aggregates a number of output neurons into
a k dimensional output, e.g., k=10 for a 10-dimensional classification setting.
It is important to set the parameter tau, by which the sum of neurons is divided in order to keep the output range reasonable.
As each neuron has a value between 0 and 1 (or, during inference, a value of 0 or 1), assuming n output neurons in the last LogicDense,
the range of outputs is [0, n / k / tau].
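For example, with n = 16,000 output neurons, k = 10, and tau = 30, each class score lies in [0, 16,000 / 10 / 30] ≈ [0, 53.3] (a sketch under these assumed shapes):

import torch
from torchlogix.layers import GroupSum

group_sum = GroupSum(k=10, tau=30)
x = torch.rand(4, 16_000)  # batch of 4; n = 16_000 neurons from the last LogicDense
out = group_sum(x)         # shape (4, 10); each entry lies in [0, 16_000 / 10 / 30]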
During training, the model should remain in the PyTorch training mode (.train()), which keeps the model differentiable.
However, we can easily switch the model to a hard / discrete / non-differentiable model by calling model.eval(), i.e., for inference.
Typically, this will simply discretize the model but not make it faster per se.
However, there are two modes that allow for fast inference:
The first option is to use a PackBitsTensor.
PackBitsTensors allow efficient dynamic execution of trained logic gate networks on GPU.
A PackBitsTensor can package a tensor (of shape b x n) with boolean
data type in a way such that each boolean entry requires only a single bit (in contrast to the full byte typically
required by a bool) by packing the bits along the batch dimension. If we choose to pack the bits into the int32 data
type (the options are 8, 16, 32, and 64 bits), we would receive a tensor of shape ceil(b/32) x n of dtype int32.
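To make the layout concrete, here is a plain-PyTorch illustration of the packing itself (illustrative only; this is not the library's API):

import math
import torch

b, n = 100, 8
bits = torch.randint(0, 2, (b, n), dtype=torch.bool)           # b x n boolean tensor
packed = torch.zeros(math.ceil(b / 32), n, dtype=torch.int32)  # ceil(b/32) x n int32 words
for i in range(b):
    packed[i // 32] |= bits[i].int() << (i % 32)               # pack along the batch dimension
print(packed.shape)  # torch.Size([4, 8])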
To create a PackBitsTensor from a boolean tensor data, simply call:
data_bits = torchlogix.PackBitsTensor(data)

To apply a model to the PackBitsTensor, simply call:
output = model(data_bits)

This requires that the model is in .eval() mode; if supplied with a PackBitsTensor, it will automatically use
a logic gate-based inference on the tensor. This also requires that model.implementation = 'cuda' as the mode is only
implemented in CUDA.
It is notable that, while the model is in .eval() mode, we can still also feed float tensors through the model, in
which case it will simply use a hard variant of the real-valued logics.
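Putting this together, an end-to-end inference sketch might look as follows (hedged: the input shape and the random binarization are illustrative and depend on the model):

import torch
import torchlogix

model = model.cuda().eval()                  # discrete mode; PackBitsTensor inference requires CUDA (model.implementation = 'cuda', see above)
data = (torch.rand(256, 784) > 0.5).cuda()   # boolean input batch (here: randomly binarized)
data_bits = torchlogix.PackBitsTensor(data)  # pack bits along the batch dimension
output = model(data_bits)                    # fast logic-gate-based inference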
The second option is to use a CompiledLogicNet.
This allows especially efficient static execution of a fixed trained logic gate network on CPU.
Specifically, CompiledLogicNet converts a model into efficient C code and can compile this code into a binary that
can then be efficiently run or exported for applications.
The following is an example for creating CompiledLogicNet from a trained model:
compiled_model = torchlogix.CompiledLogicNet(
    model=model,         # the trained model (should be a `torch.nn.Sequential` of logic layers)
    num_bits=64,         # the number of bits of the dtype used for inference (typically 64 is fastest; should not be larger than the batch size)
    cpu_compiler='gcc',  # the compiler to use for the C code (alternative: 'clang')
    verbose=True
)

compiled_model.compile(
    save_lib_path='my_model_binary.so',  # the (optional) location for storing the binary so that it can be reused
    verbose=True
)
# to apply the model, we need a 2d numpy array of dtype bool, e.g., via `data = data.bool().numpy()`
output = compiled_model(data)

This will compile the model into a shared object binary, which is then automatically imported.
To export this to other applications, one may either call the shared object binary from another program or export
the model into C code via compiled_model.get_c_code().
A limitation of the current CompiledLogicNet is that the compilation time can become long for large models.
We note that, between publishing the paper and the release of torchlogix, we have substantially improved the implementations.
Thus, the model inference modes deviate somewhat from the implementations used for the original paper, as we have
focused on making the library more scalable, more efficient, and easier to apply in applications.
We have especially focused on modularity and efficiency for larger models and have opted to polish the presented
implementations rather than publishing a plethora of competing implementations.
There are experiments on CIFAR-10 in the experiments directory. We will add more soon.
torchlogix is released under the MIT license. See LICENSE for details.
