
Commit bf18385

reformat readme
1 parent c5fdeae commit bf18385


2 files changed: +14 −3 lines changed


README.md

Lines changed: 14 additions & 2 deletions
@@ -1,17 +1,19 @@
 # pytorch-sparse-utils
 
 [![Tests](https://github.com/mawright/pytorch-sparse-utils/actions/workflows/tests.yml/badge.svg)](https://github.com/mawright/pytorch-sparse-utils/actions/workflows/tests.yml)
-[![Documentation Status](https://github.com/mawright/pytorch-sparse-utils/actions/workflows/docs.yml/badge.svg)](https://mawright.github.io/pytorch-sparse-utils/)
-[![License](https://img.shields.io/github/license/mawright/pytorch-sparse-utils)](https://github.com/mawright/pytorch-sparse-utils/blob/main/LICENSE)
 [![codecov](https://codecov.io/gh/mawright/pytorch-sparse-utils/branch/main/graph/badge.svg)](https://codecov.io/gh/mawright/pytorch-sparse-utils)
+[![Documentation Status](https://github.com/mawright/pytorch-sparse-utils/actions/workflows/docs.yml/badge.svg)](https://mawright.github.io/pytorch-sparse-utils/)
 ![Python](https://img.shields.io/badge/python-3.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue)
+[![License](https://img.shields.io/github/license/mawright/pytorch-sparse-utils)](https://github.com/mawright/pytorch-sparse-utils/blob/main/LICENSE)
 
 Low-level utilities for PyTorch sparse tensors and operations.
 
 ## Introduction
+
 PyTorch's implementation of sparse tensors is lacking full support for many common operations. This repository contains a set of utilities for making PyTorch sparse tensors into more usable general-purpose sparse data structures, particularly in the context of modern neural network architectures like Transformer-based models.
 
 For example, while the basic operation `index_select` has a sparse forward implementation, using it as part of an autograd graph alongside direct manipulation of the sparse tensor's values is not supported:
+
 ```python
 # Latest PyTorch version (2.7.1) as of this writing
 X = torch.sparse_coo_tensor(
@@ -56,6 +58,7 @@ print(X.grad)
 ```
 
 Output:
+
 ```
 tensor(indices=tensor([[0, 1, 2, 3],
                        [0, 1, 2, 3]]),
@@ -64,34 +67,43 @@ tensor(indices=tensor([[0, 1, 2, 3],
 ```
 
 ## Feature Overview
+
 - Autograd-compatible implementations of bulk indexing, sparse tensor shape manipulations, and quick conversions between sparse tensor format and concatenated-batch format for use with position-invariant layers (Linear, BatchNorm, etc.).
 - Interoperability with [Pydata sparse](https://sparse.pydata.org/), a numpy-like sparse array implementation, as well as [MinkowskiEngine](https://github.com/NVIDIA/MinkowskiEngine) and [spconv](https://github.com/traveller59/spconv), two popular PyTorch libraries for convolutions on sparse images and volumes.
 - Full TorchScript compatibility for performance.
 - Extensive unit and property-based tests to ensure correctness and reliability.
 
 ## Installation
+
 pytorch-sparse-utils has minimal requirements beyond PyTorch itself. The simplest way to install is to clone this repository and use `pip install`:
+
 ```bash
 git clone https://github.com/mawright/pytorch-sparse-utils
 cd pytorch-sparse-utils
 pip install -e . # editable installation
 ```
+
 To run the test suite, you'll need to install the optional dependencies:
+
 ```bash
 pip install -e ".[tests]"
 ```
 
 Due to incompatibilities with newer CUDA versions, MinkowskiEngine and spconv are not installed as part of the base install. For more information on installing those libraries, see their own repositories.
 
 ## Documentation
+
 Full documentation is available on [GitHub Pages](https://mawright.github.io/pytorch-sparse-utils/).
 
 ## See Also
+
 pytorch-sparse-utils represents a base set of tools for more complex neural-net operations on sparse tensors. For more sparse tensor applications, see the following repositories:
+
 - [nd-rotary-encodings](https://github.com/mawright/nd-rotary-encodings): Fast and memory-efficient rotary positional encodings (RoPE) in PyTorch, with novel algorithm updates for multi-level feature pyramids for object detection and other applications.
 - [sparse-transformer-layers](https://github.com/mawright/sparse-transformer-layers): Implementations of Transformer layers tailored to sparse tensors, including variants like Multi-scale Deformable Attention. Features custom gradient checkpointing logic to effectively handle sparse tensors with potentially many nonzero entries.
 
 ## Future Plans
+
 - Custom C++/CUDA extensions for the most performance-critical operations
 - Performance benchmarks
 - Expanded documentation
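
The README's introduction above concerns `index_select` on sparse COO tensors. As background on what that operation means for COO-format data, here is a conceptual pure-Python sketch (an illustration only, not PyTorch's implementation and not pytorch-sparse-utils code; the function name `coo_index_select` is hypothetical):

```python
# Minimal pure-Python model of index_select along dim 0 for COO data.
# Illustrative only -- not PyTorch internals or pytorch-sparse-utils API.

def coo_index_select(indices, values, select):
    """indices: list of (row, col) tuples; values: parallel list of entries.
    Returns COO data for the rows listed in `select`, with the selected
    rows renumbered 0..len(select)-1 in selection order."""
    out_indices, out_values = [], []
    for new_row, old_row in enumerate(select):
        for (r, c), v in zip(indices, values):
            if r == old_row:
                out_indices.append((new_row, c))
                out_values.append(v)
    return out_indices, out_values

# A 4x4 diagonal sparse matrix, analogous to the README's example
idx = [(0, 0), (1, 1), (2, 2), (3, 3)]
val = [1.0, 2.0, 3.0, 4.0]
sel_idx, sel_val = coo_index_select(idx, val, [1, 3])
print(sel_idx, sel_val)  # [(0, 1), (1, 3)] [2.0, 4.0]
```

The sketch shows the forward semantics only; the README's point is that PyTorch's sparse version of this operation does not compose with direct manipulation of the tensor's values under autograd.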

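The "concatenated-batch format" mentioned in the README's Feature Overview can be illustrated conceptually: a batch of variable-size sparse value sets is flattened into one contiguous array plus per-item offsets, so position-invariant layers can process all nonzero entries in a single pass. A pure-Python sketch of the idea (the function names and layout here are illustrative assumptions, not the library's actual API):

```python
# Conceptual sketch of a concatenated-batch layout. Illustrative only;
# pytorch-sparse-utils' real conversion functions may differ.

def to_concatenated_batch(batch):
    """batch: list of per-item value lists.
    Returns (flat_values, offsets), where offsets[i]:offsets[i+1]
    delimits item i's values within flat_values."""
    flat, offsets, pos = [], [0], 0
    for values in batch:
        flat.extend(values)
        pos += len(values)
        offsets.append(pos)
    return flat, offsets

def from_concatenated_batch(flat, offsets):
    """Inverse of to_concatenated_batch: re-split the flat list."""
    return [flat[offsets[i]:offsets[i + 1]] for i in range(len(offsets) - 1)]

batch = [[1.0, 2.0], [3.0], [4.0, 5.0, 6.0]]
flat, offsets = to_concatenated_batch(batch)
print(flat)     # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(offsets)  # [0, 2, 3, 6]
```

Because layers like Linear or BatchNorm apply the same transform to every entry regardless of position, running them on the flat array and then re-splitting by offsets is equivalent to running them per item.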
tests/test_misc.py

Lines changed: 0 additions & 1 deletion
@@ -1,7 +1,6 @@
 import pytest
 import torch
 from torch import Tensor
-from contextlib import ExitStack
 
 from pytorch_sparse_utils.misc import prod, unpack_sparse_tensors, _pytorch_atleast_2_5
 from pytorch_sparse_utils.validation import (
