[MMSIG] Support mmdeploy Docker for Jetson #2587
Open
FlamingoPg wants to merge 34 commits into open-mmlab:main from FlamingoPg:yinfan98/jetson-docker
Commits (34 in total; this diff shows changes from 16 commits)
All commits by FlamingoPg:

- df5c9b4 Update README.md
- 02e9770 Create Jetson_docker.md
- 3908c0d Create Dockerfile
- 59198d1 Create Dockerfile
- 6a6a09c Merge pull request #1 from yinfan98/patch-3
- 2221b58 Update README.md
- 4ba4e09 Update README_zh-CN.md
- 130a337 Update Jetson_docker.md
- 1dd3ed8 Create Jetson_docker.md
- c5f943b Update Jetson_docker.md
- e3d15d3 Update Jetson_docker.md
- f6aa719 Update README_zh-CN.md
- 904bee1 Update Jetson_docker.md
- 7ab4bed Update Dockerfile
- 00161a6 Update Dockerfile
- 1e61f31 Update Dockerfile
- 1f79ff0 Update Dockerfile
- f382574 Update Dockerfile
- 3264283 Update Jetson_docker.md
- 3577922 Update Jetson_docker.md
- 4021170 Update Jetson_docker.md
- d1a2fda Update Jetson_docker.md
- 3b25e3f Update Jetson_docker.md
- aa08272 Update Dockerfile
- 80bf688 Update Dockerfile
- 8bef364 Update Jetson_docker.md
- 1d3403c Update Jetson_docker.md
- 4e47dfc Update Dockerfile
- 4873416 Update Dockerfile
- ab78255 Update Dockerfile
- 26f427e Create distribute.py
- 40ca356 Update Dockerfile
- e222c3c Update Jetson_docker.md
- 14f4f3a Update Jetson_docker.md
docker/jetson/jetpack4.6/Dockerfile
@@ -0,0 +1,72 @@
FROM nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3

ARG MMDEPLOY_VERSION=main
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES all
ENV CUDA_HOME="/usr/local/cuda"
ENV PATH="/usr/local/cuda/bin:${PATH}"
ENV LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/local/lib/python3.6/site-packages/opencv-python.libs:${LD_LIBRARY_PATH}"
ENV TENSORRT_DIR="/usr/include/aarch64-linux-gnu"

ENV DEBIAN_FRONTEND=noninteractive
ENV FORCE_CUDA="1"

USER root
WORKDIR /root/workspace

# install dependencies
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 42D5A192B819C5DA &&\
    apt-get update &&\
    apt-get install -y vim wget libspdlog-dev libssl-dev libpng-dev pkg-config libhdf5-100 libhdf5-dev --no-install-recommends

RUN python3 -m pip install --upgrade pip &&\
    python3 -m pip install onnx==1.10 versioned-hdf5

# install onnxruntime
RUN wget https://nvidia.box.com/shared/static/jy7nqva7l88mq9i8bw3g3sklzf4kcnn2.whl -O onnxruntime_gpu-1.10.0-cp36-cp36m-linux_aarch64.whl &&\
    python3 -m pip install --no-cache-dir onnxruntime_gpu-1.10.0-cp36-cp36m-linux_aarch64.whl

# install mmcv
RUN git clone --branch 2.x https://github.com/open-mmlab/mmcv.git
RUN cd mmcv &&\
    python3 -m pip install --no-cache-dir opencv-python==4.5.4.60 &&\
    MMCV_WITH_OPS=1 python3 -m pip install -e .

# build ppl.cv
RUN git clone https://github.com/openppl-public/ppl.cv.git &&\
    echo "export PPLCV_DIR=/root/workspace/ppl.cv" >> ~/.bashrc &&\
    cd ppl.cv &&\
    ./build.sh cuda

# download mmdeploy
RUN git clone --recursive -b $MMDEPLOY_VERSION --depth 1 https://github.com/open-mmlab/mmdeploy

# build TRT custom op
RUN cd mmdeploy &&\
    mkdir -p build && cd build &&\
    cmake .. \
        -DMMDEPLOY_TARGET_BACKENDS="trt" \
        -DTENSORRT_DIR=${TENSORRT_DIR} &&\
    make -j$(nproc) && make install
RUN cd mmdeploy &&\
    python3 -m pip install --upgrade setuptools &&\
    python3 -m pip install -e .

# build mmdeploy
RUN cd mmdeploy &&\
    mkdir -p build && cd build &&\
    cmake .. \
        -DMMDEPLOY_BUILD_SDK=ON \
        -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
        -DMMDEPLOY_BUILD_EXAMPLES=ON \
        -DMMDEPLOY_TARGET_DEVICES="cuda;cpu" \
        -DMMDEPLOY_TARGET_BACKENDS="trt" \
        -DTENSORRT_DIR=${TENSORRT_DIR} \
        -Dpplcv_DIR=/root/workspace/ppl.cv/cuda-build/install/lib/cmake/ppl \
        -DMMDEPLOY_CODEBASES=all && \
    make -j$(nproc) && make install

ENV MMDeploy_DIR="/root/workspace/mmdeploy/build/install/lib/cmake/MMDeploy"
ENV LD_LIBRARY_PATH="/root/workspace/mmdeploy/build/lib:${LD_LIBRARY_PATH}"
ENV PATH="/root/workspace/mmdeploy/build/bin:${PATH}"
ENV PYTHONPATH="/root/workspace/mmdeploy:${PYTHONPATH}"
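After building this image (the same applies to the Jetpack 5 image below), a quick sanity check is to run MMDeploy's environment checker inside the container. A minimal sketch, assuming the image tag used in Jetson_docker.md and that `tools/check_env.py` is present in the cloned mmdeploy:

```shell
# Sanity-check the built image: prints environment and backend availability
sudo docker run -it --rm --runtime nvidia openmmlab/mmdeploy_jetpack4.6:v1 \
    python3 /root/workspace/mmdeploy/tools/check_env.py
```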
docker/jetson/jetpack5/Dockerfile
@@ -0,0 +1,69 @@
FROM nvcr.io/nvidia/l4t-pytorch:r35.2.1-pth2.0-py3

ARG MMDEPLOY_VERSION=main
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES all
ENV CUDA_HOME="/usr/local/cuda"
ENV PATH="/usr/local/cuda/bin:${PATH}"
ENV LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/local/lib/python3.8/site-packages/opencv-python.libs:${LD_LIBRARY_PATH}"
ENV TENSORRT_DIR="/usr/include/aarch64-linux-gnu"

ENV DEBIAN_FRONTEND=noninteractive
ENV FORCE_CUDA="1"

USER root
WORKDIR /root/workspace

# install dependencies
RUN apt-get update &&\
    apt-get install -y vim wget libspdlog-dev libssl-dev libpng-dev pkg-config libhdf5-103 libhdf5-dev --no-install-recommends

# install onnx
RUN python3 -m pip install onnx versioned-hdf5

# install onnxruntime
RUN wget https://nvidia.box.com/shared/static/mvdcltm9ewdy2d5nurkiqorofz1s53ww.whl -O onnxruntime_gpu-1.15.1-cp38-cp38-linux_aarch64.whl &&\
    python3 -m pip install --no-cache-dir onnxruntime_gpu-1.15.1-cp38-cp38-linux_aarch64.whl

# install mmcv
RUN git clone --branch 2.x https://github.com/open-mmlab/mmcv.git &&\
    python3 -m pip install --no-cache-dir opencv-python==4.5.4.60 opencv-contrib-python==4.5.4.60 opencv-python-headless==4.5.4.60 &&\
    cd mmcv &&\
    MMCV_WITH_OPS=1 python3 -m pip install -e .

# build ppl.cv
RUN git clone https://github.com/openppl-public/ppl.cv.git &&\
    echo "export PPLCV_DIR=/root/workspace/ppl.cv" >> ~/.bashrc &&\
    cd ppl.cv &&\
    ./build.sh cuda

# download mmdeploy
RUN git clone --recursive -b $MMDEPLOY_VERSION --depth 1 https://github.com/open-mmlab/mmdeploy

# build TRT custom op
RUN cd mmdeploy &&\
    mkdir -p build && cd build &&\
    cmake .. \
        -DMMDEPLOY_TARGET_BACKENDS="trt" \
        -DTENSORRT_DIR=${TENSORRT_DIR} &&\
    make -j$(nproc) && make install
RUN cd mmdeploy &&\
    python3 -m pip install --upgrade setuptools &&\
    python3 -m pip install -e .

# build mmdeploy
RUN cd mmdeploy &&\
    mkdir -p build && cd build &&\
    cmake .. \
        -DMMDEPLOY_BUILD_SDK=ON \
        -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
        -DMMDEPLOY_BUILD_EXAMPLES=ON \
        -DMMDEPLOY_TARGET_DEVICES="cuda;cpu" \
        -DMMDEPLOY_TARGET_BACKENDS="trt" \
        -DTENSORRT_DIR=${TENSORRT_DIR} \
        -Dpplcv_DIR=/root/workspace/ppl.cv/cuda-build/install/lib/cmake/ppl \
        -DMMDEPLOY_CODEBASES=all && \
    make -j$(nproc) && make install

ENV MMDeploy_DIR="/root/workspace/mmdeploy/build/install/lib/cmake/MMDeploy"
ENV LD_LIBRARY_PATH="/root/workspace/mmdeploy/build/lib:${LD_LIBRARY_PATH}"
ENV PATH="/root/workspace/mmdeploy/build/bin:${PATH}"
ENV PYTHONPATH="/root/workspace/mmdeploy:${PYTHONPATH}"
Jetson_docker.md
@@ -0,0 +1,130 @@
# Use Jetson Docker Image

This document describes how to install MMDeploy with [Docker](https://docs.docker.com/get-docker/) on Jetson.

## Get prebuilt docker images

MMDeploy provides prebuilt docker images on [Docker Hub](https://hub.docker.com/r/openmmlab/mmdeploy) for the convenience of its users. The images are built from the latest released versions. We provide two images, one for Jetpack 5.1 and one for Jetpack 4.6.1.
For instance, the image with tag `openmmlab/mmdeploy_jetpack5:v1` is built for Jetpack 5.1 and the image with tag `openmmlab/mmdeploy_jetpack4.6:v1` is built for Jetpack 4.6.1.
The specifications of the docker images are shown below.

- jetpack 5.1

| Item        | Version |
| :---------: | :-----: |
| Jetpack     | 5.1     |
| Python      | 3.8.10  |
| Torch       | 2.0.0   |
| TorchVision | 0.15.0  |

- jetpack 4.6.1

| Item        | Version |
| :---------: | :-----: |
| Jetpack     | 4.6.1   |
| Python      | 3.6     |
| Torch       | 1.10.0  |
| TorchVision | 0.11.0  |

Pull the image that matches your Jetpack version:

- jetpack 5.1

```shell
export TAG=openmmlab/mmdeploy_jetpack5:v1
docker pull $TAG
```

- jetpack 4.6.1

```shell
export TAG=openmmlab/mmdeploy_jetpack4.6:v1
docker pull $TAG
```
## Build docker images (optional)

If the prebuilt docker images do not meet your requirements,
you can build your own image from the provided docker files,
`docker/jetson/jetpack5/Dockerfile` and `docker/jetson/jetpack4.6/Dockerfile`:

```shell
sudo docker build docker/jetson/jetpack5 -t openmmlab/mmdeploy_jetpack5:v1
# or
sudo docker build docker/jetson/jetpack4.6 -t openmmlab/mmdeploy_jetpack4.6:v1
```
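Both docker files declare an `MMDEPLOY_VERSION` build argument (defaulting to `main`), so the build can be pinned to a specific branch or tag. A minimal sketch, assuming `v1.3.0` is an existing MMDeploy tag:

```shell
# Build against a pinned mmdeploy tag instead of main
sudo docker build docker/jetson/jetpack5 \
    --build-arg MMDEPLOY_VERSION=v1.3.0 \
    -t openmmlab/mmdeploy_jetpack5:v1.3.0
```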
## Run docker container

After pulling or building the docker image, you can use `docker run` to launch the docker service:

```shell
sudo docker run -it --rm --runtime nvidia --network host openmmlab/mmdeploy_jetpack5:v1
# or
sudo docker run -it --rm --runtime nvidia --network host openmmlab/mmdeploy_jetpack4.6:v1
```
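If you want to convert your own models inside the container, it is convenient to mount a host directory that holds your checkpoints and configs. A minimal sketch; the host path is only an example:

```shell
# Mount a host workspace with configs/checkpoints into the container
sudo docker run -it --rm --runtime nvidia --network host \
    -v /home/nvidia/workspace:/root/host_workspace \
    openmmlab/mmdeploy_jetpack5:v1
```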
## Troubleshooting

If you are using the Jetpack 5 image, there are a few issues you may need to solve.

1. OpenCV problem

If `import cv2` fails because `libpng15.so` cannot be found, link the bundled libraries into the system library path:

```shell
ln -s /usr/local/lib/python3.x/dist-packages/opencv-python.libs/* /usr/lib
```
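A quick way to confirm the fix took effect is to import OpenCV again:

```shell
# Verify that OpenCV is importable inside the container
python3 -c "import cv2; print(cv2.__version__)"
```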
2. mmdetection problem

If mmdetection is installed but `import mmdet` fails, add the mmdetection path to `PYTHONPATH`:

```shell
export PYTHONPATH=$PYTHONPATH:/path/to/the/mmdetection/you/cloned
```
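Then confirm that mmdet resolves from the new path:

```shell
# Confirm mmdet is importable after updating PYTHONPATH
python3 -c "import mmdet; print(mmdet.__version__)"
```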
3. Jetson "no distributed" problem

If you convert a model as described in [Jetson.md](https://github.com/open-mmlab/mmdeploy/blob/main/docs/en/01-how-to-build/jetsons.md),
you may find that `torch.distributed` has no attribute `ReduceOp`.
I filed an issue and made a simple patch: add a file `jetson_patch.py` under `./mmdeploy/tools/`:

```python
import torch.distributed

if not torch.distributed.is_available():
    torch.distributed.ReduceOp = lambda: None
```

and import `jetson_patch` at the beginning of whichever file needs it.
This is not exactly elegant, but it works well (tested on Jetson AGX Orin).
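For example, assuming the conversion entry point you run is `tools/deploy.py` (as in Jetson.md), the import can be prepended like this; only a sketch:

```shell
# Hypothetical: prepend the patch import to mmdeploy's conversion script
cd /root/workspace/mmdeploy
sed -i '1i import jetson_patch' tools/deploy.py
```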
4. Jetpack with PyTorch 2.0 issue

We need to modify `torch.onnx._run_symbolic_method`
**from**

```python
def _run_symbolic_method(g, op_name, symbolic_fn, args):
    r"""
    This trampoline function gets invoked for every symbolic method
    call from C++.
    """
    try:
        return symbolic_fn(g, *args)
    except TypeError as e:
        # Handle the specific case where we didn't successfully dispatch
        # to symbolic_fn. Otherwise, the backtrace will have the clues
        # you need.
        e.args = ("{} (occurred when translating {})".format(e.args[0], op_name),)
        raise
```

**to**

```python
@_beartype.beartype
def _run_symbolic_method(g, op_name, symbolic_fn, args):
    r"""
    This trampoline function gets invoked for every symbolic method
    call from C++.
    """
    try:
        graph_context = jit_utils.GraphContext(
            graph=g,
            block=g.block(),
            opset=GLOBALS.export_onnx_opset_version,
            original_node=None,  # type: ignore[arg-type]
            params_dict=_params_dict,
            env={},
        )
        return symbolic_fn(graph_context, *args)
    except TypeError as e:
        # Handle the specific case where we didn't successfully dispatch
        # to symbolic_fn. Otherwise, the backtrace will have the clues
        # you need.
        e.args = (f"{e.args[0]} (occurred when translating {op_name})",)
        raise
```
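The function to patch lives in torch's ONNX utilities (in recent PyTorch releases it is defined in `torch/onnx/utils.py`); a quick way to locate the file to edit inside the container:

```shell
# Print the path of the module that defines _run_symbolic_method
python3 -c "import torch.onnx.utils as m; print(m.__file__)"
```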
Finally, we can use Jetpack 5.1 and MMDeploy happily. :)