Replies: 1 comment
-
python /home/andolab/mmdet_env/mmdeploy/tools/deploy.py /home/andolab/mmdet_env/mmdeploy/configs/mmdet/detection/detection_tensorrt_static-800x1344.py /home/andolab/mmdet_env/configs/mask_rcnn/my_config.py /home/andolab/mmdet_env/epoch_13.pth /home/andolab/mmdet_env/demo.jpg --device cuda:0 --log-level INFO
I ran the command above, but I still get the same plugin-not-found error.
[02/05/2025-13:04:33] [TRT] [E] ModelImporter.cpp:951: --- End node ---
[02/05/2025-13:04:33] [TRT] [E] ModelImporter.cpp:951: --- End node ---
[02/05/2025-13:04:33] [TRT] [E] ModelImporter.cpp:951: --- End node ---
02/05 13:04:36 - mmengine - ERROR - /home/andolab/mmdet_env/lib/python3.10/site-packages/mmdeploy/apis/core/pipeline_manager.py - pop_mp_output - 80 -
I also tried to build the TensorRT plugins from source, but the build fails as well:
In file included from /home/andolab/mmdet_env/mmdeploy/csrc/mmdeploy/backend_ops/tensorrt/common/trt_plugin_base.hpp:4,
In file included from /usr/include/aarch64-linux-gnu/NvInferRuntimeCommon.h:34,
Does this mean that mmdeploy 1.3.1 does not support TensorRT 10.3.0?
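For reference, the usual route to the plugin library that deploy.py expects is to build mmdeploy's TensorRT backend ops with CMake. A minimal sketch follows; the TENSORRT_DIR/CUDNN_DIR values and the output location are assumptions for a JetPack install, not details taken from this thread.
# Sketch only: building mmdeploy's custom TensorRT ops (the library that registers
# MMCVMultiLevelRoiAlign and the other "mmdeploy"-domain plugins).
cd /home/andolab/mmdet_env/mmdeploy
mkdir -p build && cd build
cmake .. \
    -DMMDEPLOY_TARGET_BACKENDS="trt" \
    -DTENSORRT_DIR=/usr \
    -DCUDNN_DIR=/usr
make -j$(nproc)
# On success, libmmdeploy_tensorrt_ops.so typically ends up under build/lib.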
-
trtexec --onnx=/home/andolab/mmdet_env/onnx_output/end2end.onnx --saveEngine=/home/andolab/mmdet_env/engine.trt --plugins=/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so --fp16
Using the command above, I am trying to convert a Mask R-CNN (ResNet-101) model trained and run for inference with MMDetection into a TensorRT engine to speed it up, but I get the error shown below.
It looks like a plugin-not-found error.
The TensorRT plugins are built and placed in the following locations (a conversion sketch follows the list):
/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so
/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.10
/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.10.3.0
/usr/lib/lib/libnvinfer_plugin.so
/usr/lib/lib/libnvinfer_plugin.so.10
/usr/lib/lib/libnvinfer_plugin.so.10.3.0
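MMCVMultiLevelRoiAlign is an op in the "mmdeploy" ONNX domain, so it is provided by mmdeploy's custom ops library rather than by the stock libnvinfer_plugin.so. A minimal sketch, assuming that library has been built and that the path below is where it landed on this machine:
# Sketch only: point trtexec at the mmdeploy ops library (path assumed) instead of
# libnvinfer_plugin.so, which does not register the MMCVMultiLevelRoiAlign creator.
trtexec \
    --onnx=/home/andolab/mmdet_env/onnx_output/end2end.onnx \
    --saveEngine=/home/andolab/mmdet_env/engine.trt \
    --plugins=/home/andolab/mmdet_env/mmdeploy/build/lib/libmmdeploy_tensorrt_ops.so \
    --fp16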
The development environment is as follows.
Device: Jetson Orin NX
OS: Ubuntu 22.04
Python Version: 3.10
CUDA Version: 12.6
cuDNN Version: 9.6
TensorRT Version: 10.3.0.30
MMDeploy Version: 1.3.1
MMDetection Version: 3.3.0
PyTorch Version: 2.3.0
I cannot convert model.onnx, exported to ONNX from the Mask R-CNN model trained with MMDetection, into a TensorRT engine. I would appreciate advice on how to resolve this error, or on another way to convert the model to a TensorRT engine; a sketch of one possible route follows after the log below.
[01/31/2025-13:22:51] [E] [TRT] ModelImporter.cpp:951: --- End node ---
[01/31/2025-13:22:51] [E] [TRT] ModelImporter.cpp:953: ERROR: onnxOpCheckers.cpp:780 In function checkFallbackPluginImporter:
[6] creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[01/31/2025-13:22:51] [E] [TRT] ModelImporter.cpp:948: While parsing node number 1047 [MMCVMultiLevelRoiAlign -> "/mask_roi_extractor/MMCVMultiLevelRoiAlign_output_0"]:
[01/31/2025-13:22:51] [E] [TRT] ModelImporter.cpp:950: --- Begin node ---
input: "/Reshape_61_output_0"
input: "/neck/fpn_convs.0/conv/Conv_output_0"
input: "/neck/fpn_convs.1/conv/Conv_output_0"
input: "/neck/fpn_convs.2/conv/Conv_output_0"
input: "/neck/fpn_convs.3/conv/Conv_output_0"
output: "/mask_roi_extractor/MMCVMultiLevelRoiAlign_output_0"
name: "/mask_roi_extractor/MMCVMultiLevelRoiAlign"
op_type: "MMCVMultiLevelRoiAlign"
attribute {
name: "aligned"
i: 1
type: INT
}
attribute {
name: "featmap_strides"
floats: 4
floats: 8
floats: 16
floats: 32
type: FLOATS
}
attribute {
name: "finest_scale"
i: 56
type: INT
}
attribute {
name: "output_height"
i: 14
type: INT
}
attribute {
name: "output_width"
i: 14
type: INT
}
attribute {
name: "pool_mode"
i: 1
type: INT
}
attribute {
name: "roi_scale_factor"
f: 1
type: FLOAT
}
attribute {
name: "sampling_ratio"
i: 0
type: INT
}
domain: "mmdeploy"
[01/31/2025-13:22:51] [E] [TRT] ModelImporter.cpp:951: --- End node ---
[01/31/2025-13:22:51] [E] [TRT] ModelImporter.cpp:953: ERROR: onnxOpCheckers.cpp:780 In function checkFallbackPluginImporter:
[6] creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[01/31/2025-13:22:51] [E] Failed to parse onnx file
[01/31/2025-13:22:51] [I] Finished parsing network model. Parse time: 0.347841
[01/31/2025-13:22:51] [E] Parsing model failed
[01/31/2025-13:22:51] [E] Failed to create engine from model or file.
[01/31/2025-13:22:51] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v100300] # trtexec --onnx=/home/andolab/mmdet_env/onnx_output/end2end.onnx --saveEngine=/home/andolab/mmdet_env/engine.trt --plugins=/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so --fp16
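One route, which the comment at the top of the thread also goes down, is to let mmdeploy's own converter build the TensorRT engine via tools/deploy.py; since Mask R-CNN is an instance segmentation model, the instance-seg deploy configs would normally be the starting point. This is a sketch only: the config filename and the --work-dir path are assumptions for mmdeploy 1.3.1 and have not been verified on this setup.
# Sketch only: conversion via mmdeploy's converter instead of calling trtexec directly.
# The instance-seg config name and the work directory are assumptions.
python /home/andolab/mmdet_env/mmdeploy/tools/deploy.py \
    /home/andolab/mmdet_env/mmdeploy/configs/mmdet/instance-seg/instance-seg_tensorrt_static-800x1344.py \
    /home/andolab/mmdet_env/configs/mask_rcnn/my_config.py \
    /home/andolab/mmdet_env/epoch_13.pth \
    /home/andolab/mmdet_env/demo.jpg \
    --work-dir /home/andolab/mmdet_env/trt_output \
    --device cuda:0 \
    --log-level INFO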