Draft
Changes from all commits (45 commits)
d6d95c5
Fix channels_last transformation for new registry
tafk7 Jun 16, 2025
858cf56
Add legacy domain fallback test
tafk7 Jun 16, 2025
5036a7a
Remove debug output from old domain test
tafk7 Jun 16, 2025
fdaea24
Merge pull request #1 from tafk7/codex/analyze-and-redesign-customop-…
tafk7 Jun 16, 2025
dfc4bd8
Add alternative customop registration decorator
tafk7 Jun 16, 2025
8fa8463
Merge remote-tracking branch 'upstream/main' into custom/brainsmith
auphelia Jun 20, 2025
e59e558
Added passthrough Quant class
tafk7 Jun 20, 2025
5dfb746
Merge pull request #1 from tafk7/custom/brainsmith-registration-fix
auphelia Jun 23, 2025
30df133
Bring back lost changes from custom/brainsmith branch
auphelia Jun 23, 2025
66b4c68
Merge pull request #195 from auphelia/custom/brainsmith
maltanar Jun 23, 2025
dad06c7
Refined domain-based registration
tafk7 Jul 8, 2025
f6806f6
Refined custom_op registration
tafk7 Jul 16, 2025
f7ab4b5
Dependency resolution
tafk7 Jul 16, 2025
d08c33d
help multithreshold handle 3-dim more efficiently
Jul 16, 2025
d76507a
update extract model config to export config for subgraphs
Jul 17, 2025
fa3e0a8
Removed decorators in favor of pure domain
tafk7 Jul 18, 2025
68346e3
Circular import fix
tafk7 Jul 18, 2025
93fd8d0
Added brainsmith to hide finn ops
tafk7 Jul 18, 2025
9153395
Move to namespace-based domain registration
tafk7 Jul 24, 2025
f2c4ccd
refactor: migrate registry to thread-safe, cache-based architecture
tafk7 Oct 19, 2025
8572cbb
Merge remote-tracking branch 'origin/main' into custom/bransmith_merg…
Oct 30, 2025
ba24ecc
copy in metadata preservation
Oct 30, 2025
da632b9
expand metadata copy coverage to other transforms
Oct 30, 2025
0d9d3e5
add copy metadata props function
Oct 30, 2025
6f3a631
convert missed functions
Oct 30, 2025
59ca168
correct fused node source mistake
Oct 30, 2025
b055ad8
Merge branch 'main' of github.com:fastmachinelearning/qonnx into feat…
Nov 22, 2025
1590606
add metadata preservation to batchnorm transform.
Nov 22, 2025
5690a79
adding more copy metadata nodes.
Nov 22, 2025
9036148
Merge branch 'main' into feature/preserve_metadata
auphelia Nov 24, 2025
7e15015
Merge branch 'feature/preserve_metadata' of github.com:fastmachinelea…
Nov 24, 2025
ae6154b
Experimental metadata functions
tafk7 Nov 24, 2025
7c04726
added overwrite mode flag and basic unit tests
Nov 25, 2025
3085eee
update documentation for copy_metadata_props
Nov 25, 2025
4379304
add gemm2matmul test
Nov 26, 2025
9cbd803
add batchnorm to affine test
Nov 26, 2025
47c9bb8
force precommit run
Nov 26, 2025
f26f3f5
Merge branch 'main' of github.com:fastmachinelearning/qonnx into feat…
Nov 26, 2025
ec7f7e9
Merge branch 'main' of github.com:fastmachinelearning/qonnx into cust…
Dec 1, 2025
81c12d9
Merge branch 'feature/preserve_metadata' of github.com:fastmachinelea…
Dec 1, 2025
9e1ea73
revert to old version of extract model config
Dec 1, 2025
ceb97a3
Merge branch 'custom/brainsmith' into custom/brainsmith_merge_with_main
Dec 1, 2025
4e38f97
Merge branch 'main' of github.com:fastmachinelearning/qonnx into cust…
Dec 3, 2025
fcd803a
Cleanup comments and unused entrypoint logic
tafk7 Dec 3, 2025
7c154dd
Merge remote-tracking branch 'origin/custom/brainsmith_merge_with_mai…
tafk7 Dec 3, 2025
13 changes: 13 additions & 0 deletions docs/overview.rst
@@ -45,6 +45,19 @@ Custom Operations/Nodes

QONNX uses many custom operations (op_type in ONNX NodeProto) that are not defined in the ONNX operator schema. These custom nodes are marked with domain="qonnx.*" in the protobuf to identify them as such. These nodes can represent specific operations that we need for low-bit networks, or operations that are specific to a particular hardware backend. To get more familiar with custom operations and how they are created, please take a look in the Jupyter notebook about CustomOps (see chapter :ref:`tutorials` for details) or directly in the module :py:mod:`qonnx.custom_op`.

Custom ops are automatically discovered through Python module namespaces.
Simply import your CustomOp subclass in the appropriate domain module
(e.g., ``qonnx.custom_op.general`` for general ops) and it becomes
available through ``getCustomOp``.

For dynamic registration and querying, use the registry functions:

* ``getCustomOp(node)`` - Get a custom op instance from an ONNX node
* ``is_custom_op(domain, op_type=None)`` - Check whether a domain provides custom ops, or whether a specific op_type exists within it
* ``add_op_to_domain(domain, op_class)`` - Register an op at runtime (for testing)
* ``get_ops_in_domain(domain)`` - List all ops available in a domain
* ``add_domain_alias(domain, module_path)`` - Map a domain to a different module path


Custom ONNX Execution Flow
==========================
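As a quick illustration of how these registry functions fit together, here is a hedged sketch: it builds an ONNX node in the general QONNX domain and queries the registry. The node's input/output tensor names are placeholders.

```python
import onnx.helper as oh

from qonnx.custom_op.registry import get_ops_in_domain, getCustomOp, is_custom_op

# hypothetical Quant node in the general QONNX domain
node = oh.make_node(
    "Quant",
    inputs=["x", "scale", "zeropt", "bitwidth"],
    outputs=["y"],
    domain="qonnx.custom_op.general",
)

# check the domain/op before asking for a wrapper instance
if is_custom_op(node.domain, node.op_type):
    quant_inst = getCustomOp(node)  # CustomOp instance wrapping this node

# get_ops_in_domain yields (name, class) pairs
print([name for name, _ in get_ops_in_domain("qonnx.custom_op.general")])
```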
34 changes: 9 additions & 25 deletions notebooks/3_custom_op.ipynb
@@ -129,35 +129,26 @@
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make sure our custom op is available, it needs to be registered. The best practice for this is to create a submodule under `qonnx.custom_op` which includes a `custom_op` dictionary that maps strings (op names) to classes (op implementations). Since we're in a Jupyter notebook we'll just hijack it at runtime like this:"
]
"source": "To make sure our custom op is available, we need to add it to the domain. For production code, you would place your CustomOp class directly in the appropriate module file (e.g., in a file under `qonnx/custom_op/general/`). For testing and experimentation like in this notebook, we can use the `add_op_to_domain` function:"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import qonnx.custom_op.general as general\n",
"general.custom_op[\"MyPythonPowerOp\"] = MyPythonPowerOp"
]
"source": "from qonnx.custom_op.registry import add_op_to_domain\n\n# Add our custom op to the general domain\nadd_op_to_domain(\"qonnx.custom_op.general\", MyPythonPowerOp)",
"execution_count": null
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see which custom ops are registered under this submodule by looking at the dictionary:"
]
"source": "We can see which custom ops are available in a domain by using the registry function:"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"general.custom_op"
]
"source": "from qonnx.custom_op.registry import get_ops_in_domain, is_custom_op\n\n# See all ops in the general domain\nops = get_ops_in_domain(\"qonnx.custom_op.general\")\nprint(f\"Available ops: {[op[0] for op in ops]}\")\n\n# Check if our op is there\nprint(f\"MyPythonPowerOp available: {is_custom_op('qonnx.custom_op.general', 'MyPythonPowerOp')}\")",
"execution_count": null
},
{
"cell_type": "markdown",
@@ -462,17 +453,10 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# register our new op\n",
"general.custom_op[\"MyMixedPowerOp\"] = MyMixedPowerOp\n",
"\n",
"# make graph with new op\n",
"mixedop_graph = make_graph(input_shape, 2, op_type = \"MyMixedPowerOp\")\n",
"mixedop_graph.graph.node"
]
"source": "# register our new op\nadd_op_to_domain(\"qonnx.custom_op.general\", MyMixedPowerOp)\n\n# make graph with new op\nmixedop_graph = make_graph(input_shape, 2, op_type = \"MyMixedPowerOp\")\nmixedop_graph.graph.node",
"execution_count": null
},
{
"cell_type": "markdown",
@@ -744,4 +728,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
}
}
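For readers skimming the notebook diff above: a minimal CustomOp subclass of the kind being registered might look like the sketch below. The abstract-method names follow ``qonnx.custom_op.base.CustomOp``; the exponent attribute and the Identity shape stand-in are illustrative assumptions, not the notebook's exact code.

```python
from onnx import helper

from qonnx.custom_op.base import CustomOp


class MyPythonPowerOp(CustomOp):
    def get_nodeattr_types(self):
        # integer exponent attribute: (type, required, default)
        return {"exponent": ("i", True, 0)}

    def make_shape_compatible_op(self, model):
        # elementwise op: output shape equals input shape, so Identity suffices
        node = self.onnx_node
        return helper.make_node("Identity", [node.input[0]], [node.output[0]])

    def infer_node_datatype(self, model):
        # elementwise power keeps the input's QONNX datatype
        node = self.onnx_node
        model.set_tensor_datatype(node.output[0], model.get_tensor_datatype(node.input[0]))

    def execute_node(self, context, graph):
        # numpy execution: raise the input tensor to the stored exponent
        node = self.onnx_node
        context[node.output[0]] = context[node.input[0]] ** self.get_nodeattr("exponent")

    def verify_node(self):
        # no additional checks in this sketch
        return []
```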
4 changes: 4 additions & 0 deletions setup.cfg
@@ -94,6 +94,10 @@ console_scripts =
qonnx-tensor-stats = qonnx.analysis.tensor_stats:main
pytest_randomly.random_seeder =
qonnx = qonnx.util.random_reseed:reseed
# entry points for custom op modules
qonnx_custom_ops =
qonnx = qonnx.custom_op.general
qonnx_channels_last = qonnx.custom_op.channels_last
# Add here console scripts like:
# console_scripts =
# script_name = qonnx.module:function
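The ``qonnx_custom_ops`` group above maps a domain name to the module providing its ops. Assuming the registry also consults this group for externally installed packages (an assumption, not confirmed by this diff), a third-party package's ``setup.cfg`` might declare its own domain like this; the package and module names are hypothetical:

```ini
[options.entry_points]
# hypothetical: domain "mypkg" served by module mypkg.custom_ops
qonnx_custom_ops =
    mypkg = mypkg.custom_ops
```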
12 changes: 6 additions & 6 deletions src/qonnx/custom_op/general/quant.py
@@ -26,12 +26,12 @@
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

from qonnx.custom_op.general.intquant import IntQuant as Quant
# Import IntQuant to create alias
from qonnx.custom_op.general.intquant import IntQuant

# Re-export functions from intquant for backward compatibility
from qonnx.custom_op.general.intquant import int_quant as quant
from qonnx.custom_op.general.intquant import max_int, min_int, resolve_rounding_mode

Quant = Quant
quant = quant
max_int = max_int
min_int = min_int
resolve_rounding_mode = resolve_rounding_mode
# Create alias for backward compatibility - Quant is just IntQuant
Quant = IntQuant
5 changes: 4 additions & 1 deletion src/qonnx/transformation/batchnorm_to_affine.py
@@ -32,7 +32,7 @@

from qonnx.transformation.base import Transformation
from qonnx.transformation.infer_shapes import InferShapes
from qonnx.util.basic import get_by_name
from qonnx.util.basic import copy_metadata_props, get_by_name


class BatchNormToAffine(Transformation):
@@ -89,6 +89,9 @@ def apply(self, model):
# create Mul and Add nodes to replace the batchnorm
mul_node = oh.make_node("Mul", [bn_input, mul_const.name], [mul_output.name])
add_node = oh.make_node("Add", [mul_output.name, add_const.name], [bn_output])
# preserve metadata from original batchnorm node
copy_metadata_props(n, mul_node)
copy_metadata_props(n, add_node)
# insert where the batchnorm is to preserve topological ordering
graph.node.insert(node_ind, mul_node)
graph.node.insert(node_ind + 1, add_node)
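Several hunks in this PR call ``copy_metadata_props`` from ``qonnx.util.basic``. Its implementation is not shown in this diff; below is a minimal sketch of what such a helper might look like, assuming node-level ``metadata_props`` (available in newer ONNX versions) and the overwrite flag mentioned in the commit history. The actual implementation may differ.

```python
from onnx import NodeProto


def copy_metadata_props(src: NodeProto, dst: NodeProto, overwrite: bool = False):
    """Sketch: copy metadata_props entries from src to dst."""
    existing = {entry.key: entry for entry in dst.metadata_props}
    for entry in src.metadata_props:
        if entry.key in existing:
            if overwrite:
                # overwrite mode replaces values for duplicate keys
                existing[entry.key].value = entry.value
        else:
            # add() appends a new StringStringEntryProto to dst
            dst.metadata_props.add(key=entry.key, value=entry.value)
```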
5 changes: 4 additions & 1 deletion src/qonnx/transformation/bipolar_to_xnor.py
@@ -36,7 +36,7 @@
from qonnx.transformation.base import Transformation
from qonnx.transformation.infer_datatypes import InferDataTypes
from qonnx.transformation.infer_shapes import InferShapes
from qonnx.util.basic import get_by_name
from qonnx.util.basic import copy_metadata_props, get_by_name


class ConvertBipolarMatMulToXnorPopcount(Transformation):
@@ -132,6 +132,9 @@ def find_prod_mt(x):
# create Mul and Add nodes to replace the batchnorm
mul_node = oh.make_node("Mul", [xnorpcout.name, mul_const.name], [mul_output.name])
add_node = oh.make_node("Add", [mul_output.name, add_const.name], [mm_output])
# preserve metadata from original MatMul node
copy_metadata_props(n, mul_node)
copy_metadata_props(n, add_node)
# insert where the batchnorm is to preserve topological ordering
graph.node.insert(node_ind, mul_node)
graph.node.insert(node_ind + 1, add_node)
5 changes: 4 additions & 1 deletion src/qonnx/transformation/change_datalayout.py
@@ -30,7 +30,7 @@

from qonnx.transformation.base import Transformation
from qonnx.transformation.infer_shapes import InferShapes
from qonnx.util.basic import get_by_name
from qonnx.util.basic import copy_metadata_props, get_by_name


class ChangeDataLayoutQuantAvgPool2d(Transformation):
@@ -78,6 +78,7 @@ def apply(self, model):
graph.value_info.append(quantavg_out)
quantavg_out = quantavg_out.name
inp_trans_node = helper.make_node("Transpose", [node_input], [inp_trans_out], perm=[0, 2, 3, 1])
copy_metadata_props(n, inp_trans_node)
quantavg_node = helper.make_node(
"QuantAvgPool2d",
[inp_trans_out],
@@ -90,8 +91,10 @@
signed=signed,
data_layout="NHWC",
)
copy_metadata_props(n, quantavg_node)
# NHWC -> NCHW
out_trans_node = helper.make_node("Transpose", [quantavg_out], [node_output], perm=[0, 3, 1, 2])
copy_metadata_props(n, out_trans_node)
# insert nodes
graph.node.insert(node_ind, inp_trans_node)
graph.node.insert(node_ind + 1, quantavg_node)
10 changes: 8 additions & 2 deletions src/qonnx/transformation/channels_last.py
@@ -40,7 +40,7 @@
from qonnx.transformation.infer_shapes import InferShapes
from qonnx.transformation.make_input_chanlast import MakeInputChannelsLast
from qonnx.transformation.quant_constant_folding import FoldTransposeIntoQuantInit
from qonnx.util.basic import get_by_name
from qonnx.util.basic import copy_metadata_props, get_by_name
from qonnx.util.onnx import is_eltwise_optype

# Standard ONNX nodes which require a ChannelsLast data format to function properly
@@ -96,6 +96,7 @@ def move_transpose_past_eltwise(transpose_node, eltwise_node, model: ModelWrappe
new_t_inp = model.make_new_valueinfo_name()
inv_perm = np.argsort(perm)
new_transpose_node = helper.make_node("Transpose", [eltwise_inp], [new_t_inp], perm=inv_perm)
copy_metadata_props(transpose_node, new_transpose_node)
t_shape = np.transpose(np.empty(inp_shape), axes=inv_perm).shape
model.set_tensor_shape(new_t_inp, t_shape)
eltwise_node.input[ind] = new_t_inp
@@ -107,13 +108,15 @@
model.set_initializer(unsqueeze_param_name, np.asarray(list(range(ndim_inp - ndim)), dtype=np.int64))
unsqueeze_out_name = model.make_new_valueinfo_name()
new_unsqueeze_node = helper.make_node("Unsqueeze", [eltwise_inp, unsqueeze_param_name], [unsqueeze_out_name])
copy_metadata_props(eltwise_node, new_unsqueeze_node)
unsqueeze_out_shape = np.expand_dims(np.empty(inp_shape), axis=tuple(range(ndim_inp - ndim))).shape
model.set_tensor_shape(unsqueeze_out_name, unsqueeze_out_shape)
model.graph.node.append(new_unsqueeze_node)
# now add inverse transpose
new_t_inp = model.make_new_valueinfo_name()
inv_perm = np.argsort(perm)
new_transpose_node = helper.make_node("Transpose", [unsqueeze_out_name], [new_t_inp], perm=inv_perm)
copy_metadata_props(transpose_node, new_transpose_node)
t_shape = np.transpose(np.empty(unsqueeze_out_shape), axes=inv_perm).shape
model.set_tensor_shape(new_t_inp, t_shape)
eltwise_node.input[ind] = new_t_inp
@@ -239,6 +242,7 @@ def apply(self, model):
# channels last transpose
inp_trans_node = helper.make_node("Transpose", [inp], [inp_trans_out], perm=to_channels_last_args(ndim))
graph.node.insert(running_node_index, inp_trans_node)
copy_metadata_props(n, inp_trans_node)
running_node_index += 1

# Attach to original node
@@ -265,6 +269,7 @@
"Transpose", [outp_trans_in], [outp], perm=to_channels_first_args(ndim)
)
graph.node.insert(running_node_index, outp_trans_node)
copy_metadata_props(n, outp_trans_node)
running_node_index += 1

# Attach to original node
@@ -567,7 +572,8 @@ def apply(self, model):
axis=1,
)
graph.node.insert(node_ind, flat_node)

copy_metadata_props(n, flat_node)

graph_modified = True
else:
warnings.warn(
2 changes: 2 additions & 0 deletions src/qonnx/transformation/extract_conv_bias.py
@@ -30,6 +30,7 @@
from onnx import helper

from qonnx.transformation.base import Transformation
from qonnx.util.basic import copy_metadata_props


class ExtractBiasFromConv(Transformation):
@@ -75,6 +76,7 @@ def apply(self, model):
[act_add_tensor.name, n.input[2]],
[n.output[0]],
)
copy_metadata_props(n, add_node)
graph.node.insert(node_ind, add_node)

# Repoint Conv output and remove bias tensor
5 changes: 5 additions & 0 deletions src/qonnx/transformation/extract_quant_scale_zeropt.py
@@ -33,6 +33,7 @@
from qonnx.transformation.base import Transformation
from qonnx.transformation.general import GiveUniqueParameterTensors, SortGraph
from qonnx.transformation.remove import RemoveIdentityOps
from qonnx.util.basic import copy_metadata_props


class ExtractQuantScaleZeroPt(Transformation):
@@ -69,6 +70,7 @@ def apply(self, model: ModelWrapper):
)
graph.value_info.append(inp_scaled)
inp_scale_node = helper.make_node("Div", [running_input, scale_nm], [inp_scaled_nm])
copy_metadata_props(node, inp_scale_node)
graph.node.append(inp_scale_node)
# create new Mul node
# remove scale from Quant node
@@ -87,6 +89,7 @@ def apply(self, model: ModelWrapper):
)
graph.value_info.append(inp_zeropt)
inp_zeropt_node = helper.make_node("Add", [running_input, zeropt_nm], [inp_zeropt_nm])
copy_metadata_props(node, inp_zeropt_node)
graph.node.append(inp_zeropt_node)
# remove zeropt from Quant node
new_zeropt_nm = model.make_new_valueinfo_name()
@@ -108,6 +111,7 @@ def apply(self, model: ModelWrapper):
)
graph.value_info.append(out_zeropt)
out_zeropt_node = helper.make_node("Sub", [out_zeropt_nm, zeropt_nm], [final_output])
copy_metadata_props(node, out_zeropt_node)
last_node.output[0] = out_zeropt_nm
graph.node.append(out_zeropt_node)
# important: when tracking a pointer to newly added nodes,
@@ -127,6 +131,7 @@ def apply(self, model: ModelWrapper):
last_node.output[0] = out_scale_nm
graph.value_info.append(out_scale)
out_scale_node = helper.make_node("Mul", [out_scale_nm, scale_nm], [final_output])
copy_metadata_props(node, out_scale_node)
graph.node.append(out_scale_node)

if extract_scale or extract_zeropt:
9 changes: 7 additions & 2 deletions src/qonnx/transformation/gemm_to_matmul.py
@@ -32,7 +32,7 @@
from qonnx.core.datatype import DataType
from qonnx.transformation.base import Transformation
from qonnx.transformation.remove import RemoveIdentityOps
from qonnx.util.basic import get_by_name
from qonnx.util.basic import copy_metadata_props, get_by_name


class GemmToMatMul(Transformation):
@@ -76,6 +76,7 @@ def apply(self, model):
)
graph.value_info.append(inp_trans_out)
inp_trans_node = helper.make_node("Transpose", [n.input[0]], [inp_trans_out.name])
copy_metadata_props(n, inp_trans_node)
graph.node.insert(running_node_index, inp_trans_node)
running_node_index += 1
dt = model.get_tensor_datatype(n.input[0])
@@ -98,6 +99,7 @@ def apply(self, model):
)
graph.value_info.append(inp_trans_out)
inp_trans_node = helper.make_node("Transpose", [n.input[1]], [inp_trans_out.name])
copy_metadata_props(n, inp_trans_node)
graph.node.insert(running_node_index, inp_trans_node)
running_node_index += 1
# Copy over the datatype
@@ -109,6 +111,7 @@ def apply(self, model):

# Insert MatMul: A * B
matMul_node = helper.make_node("MatMul", [n.input[0], n.input[1]], [n.output[0]])
copy_metadata_props(n, matMul_node)
graph.node.insert(running_node_index, matMul_node)
matMul_node = graph.node[running_node_index]
running_node_index += 1
@@ -144,6 +147,7 @@ def apply(self, model):
[act_mul_tensor.name, mul_tensor.name],
[n.output[0]],
)
copy_metadata_props(n, mul_node)
graph.node.insert(running_node_index, mul_node)
mul_node_main_branch = graph.node[running_node_index]
running_node_index += 1
@@ -175,6 +179,7 @@ def apply(self, model):
[n.input[2], mul_tensor.name],
[act_mul_tensor.name],
)
copy_metadata_props(n, mul_node)
graph.node.insert(running_node_index, mul_node)
running_node_index += 1
dt = model.get_tensor_datatype(n.input[2])
@@ -196,7 +201,7 @@ def apply(self, model):
[act_add_tensor.name, n.input[2]],
[n.output[0]],
)

copy_metadata_props(n, add_node)
graph.node.insert(running_node_index, add_node)
running_node_index += 1

13 changes: 11 additions & 2 deletions src/qonnx/transformation/general.py
@@ -117,15 +117,24 @@ def apply(self, model):

class GiveUniqueNodeNames(Transformation):
"""Give unique names to each node in the graph using enumeration, starting
with given prefix (if specified in the constructor)."""
with given prefix (if specified in the constructor).

def __init__(self, prefix=""):
If only_empty=True, only renames nodes that have empty names, preserving
existing node names. This is useful after transforms that insert nodes
without names, to avoid stripping prefixes from existing nodes."""

def __init__(self, prefix="", only_empty=False):
super().__init__()
self.prefix = prefix
self.only_empty = only_empty

def apply(self, model):
optype_count = {}
for n in model.graph.node:
# Skip nodes that already have names if only_empty=True
if self.only_empty and n.name != "":
continue

if n.op_type not in optype_count.keys():
optype_count[n.op_type] = 0
n.name = "%s%s_%d" % (self.prefix, n.op_type, optype_count[n.op_type])
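A typical use of the new flag, sketched under the assumption that a transform such as ``BatchNormToAffine`` has just inserted unnamed Mul/Add nodes; the model path is a placeholder:

```python
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.transformation.batchnorm_to_affine import BatchNormToAffine
from qonnx.transformation.general import GiveUniqueNodeNames

model = ModelWrapper("model.onnx")  # placeholder path
model = model.transform(BatchNormToAffine())
# rename only the freshly inserted, unnamed nodes; existing names survive
model = model.transform(GiveUniqueNodeNames(only_empty=True))
```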