Merge pull request #21 from NVIDIA-ISAAC-ROS/release-3.2
Isaac ROS 3.2
jaiveersinghNV authored Dec 11, 2024
2 parents e312e5a + 7629f3e commit cbc7e67
Showing 19 changed files with 252 additions and 360 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -55,9 +55,9 @@ This package is powered by [NVIDIA Isaac Transport for ROS (NITROS)](https://dev

### Performance

| Sample Graph | Input Size | AGX Orin | Orin NX | x86_64 w/ RTX 4060 Ti |
|---|---|---|---|---|
| [Depth Segmentation Node](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/benchmarks/isaac_ros_bi3d_benchmark/scripts/isaac_ros_bi3d_node.py) | 576p | [45.9 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_bi3d_node-agx_orin.json)<br/>76 ms @ 30Hz | [28.8 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_bi3d_node-orin_nx.json)<br/>92 ms @ 30Hz | [87.9 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_bi3d_node-nuc_4060ti.json)<br/>35 ms @ 30Hz |

| Sample Graph | Input Size | AGX Orin | Orin NX | x86_64 w/ RTX 4090 |
|---|---|---|---|---|
| [Depth Segmentation Node](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/benchmarks/isaac_ros_bi3d_benchmark/scripts/isaac_ros_bi3d_node.py) | 576p | [45.8 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_bi3d_node-agx_orin.json)<br/>79 ms @ 30Hz | [28.2 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_bi3d_node-orin_nx.json)<br/>99 ms @ 30Hz | [105 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_bi3d_node-x86-4090.json)<br/>25 ms @ 30Hz |

---

@@ -78,4 +78,4 @@ Please visit the [Isaac ROS Documentation](https://nvidia-isaac-ros.github.io/re

### Latest

Update 2024-09-26: Update for ZED compatibility
Update 2024-12-10: Update to be compatible with JetPack 6.1
6 changes: 6 additions & 0 deletions gxf_isaac_bi3d/CMakeLists.txt
@@ -81,4 +81,10 @@ set_target_properties(${PROJECT_NAME} PROPERTIES
# Install the binary file
install(TARGETS ${PROJECT_NAME} DESTINATION share/${PROJECT_NAME}/gxf/lib)


# Embed versioning information into installed files
ament_index_get_resource(ISAAC_ROS_COMMON_CMAKE_PATH isaac_ros_common_cmake_path isaac_ros_common)
include("${ISAAC_ROS_COMMON_CMAKE_PATH}/isaac_ros_common-version-info.cmake")
generate_version_info(${PROJECT_NAME})

ament_auto_package(INSTALL_TO_SHARE)
29 changes: 13 additions & 16 deletions gxf_isaac_bi3d/gxf/gems/dnn_inferencer/inferencer/Errors.h
@@ -1,19 +1,16 @@
// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
// Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// SPDX-License-Identifier: Apache-2.0
/*
* Copyright (c) 2021-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
* property and proprietary rights in and to this material, related
* documentation and any modifications thereto. Any use, reproduction,
* disclosure or distribution of this material and related documentation
* without an express license agreement from NVIDIA CORPORATION or
* its affiliates is strictly prohibited.
*
* SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES
* SPDX-License-Identifier: LicenseRef-NvidiaProprietary
*/

#pragma once

@@ -1,19 +1,16 @@
// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
// Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// SPDX-License-Identifier: Apache-2.0
/*
* Copyright (c) 2021-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
* property and proprietary rights in and to this material, related
* documentation and any modifications thereto. Any use, reproduction,
* disclosure or distribution of this material and related documentation
* without an express license agreement from NVIDIA CORPORATION or
* its affiliates is strictly prohibited.
*
* SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES
* SPDX-License-Identifier: LicenseRef-NvidiaProprietary
*/

#pragma once

29 changes: 13 additions & 16 deletions gxf_isaac_bi3d/gxf/gems/dnn_inferencer/inferencer/Inferencer.h
@@ -1,19 +1,16 @@
// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
// Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// SPDX-License-Identifier: Apache-2.0
/*
* Copyright (c) 2021-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
* property and proprietary rights in and to this material, related
* documentation and any modifications thereto. Any use, reproduction,
* disclosure or distribution of this material and related documentation
* without an express license agreement from NVIDIA CORPORATION or
* its affiliates is strictly prohibited.
*
* SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES
* SPDX-License-Identifier: LicenseRef-NvidiaProprietary
*/

#pragma once

@@ -49,9 +49,8 @@ size_t getDataSize(const std::vector<int64_t>& shape, ChannelType dataType) {

std::error_code TensorRTInferencer::getLayerInfo(LayerInfo& layer, std::string layerName) {
layer.name = layerName;
layer.index = m_inferenceEngine->getBindingIndex(layerName.c_str());
auto dim = m_inferenceEngine->getBindingDimensions(layer.index);
nvinfer1::TensorFormat tensorFormat = m_inferenceEngine->getBindingFormat(layer.index);
auto dim = m_inferenceEngine->getTensorShape(layer.name.c_str());
nvinfer1::TensorFormat tensorFormat = m_inferenceEngine->getTensorFormat(layer.name.c_str());

std::error_code err;
err = getCVCoreChannelLayoutFromTensorRT(layer.layout, tensorFormat);
@@ -64,7 +63,7 @@ std::error_code TensorRTInferencer::getLayerInfo(LayerInfo& layer, std::string l
}

err = getCVCoreChannelTypeFromTensorRT(layer.dataType,
m_inferenceEngine->getBindingDataType(layer.index));
m_inferenceEngine->getTensorDataType(layer.name.c_str()));
layer.layerSize = getDataSize(layer.shape, layer.dataType);
if (err != make_error_code(ErrorCode::SUCCESS)) {
return ErrorCode::INVALID_ARGUMENT;
@@ -174,16 +173,15 @@ std::error_code TensorRTInferencer::convertModelToEngine(int32_t dla_core,
}
builderConfig->addOptimizationProfile(optimization_profile);

// Creates TensorRT Engine Plan
std::unique_ptr<nvinfer1::ICudaEngine> engine(
builder->buildEngineWithConfig(*network, *builderConfig));
if (!engine) {
// Creates TensorRT Model stream
std::unique_ptr<nvinfer1::IHostMemory> model_stream(
builder->buildSerializedNetwork(*network, *builderConfig));
if (!model_stream) {
GXF_LOG_ERROR("Failed to build TensorRT engine from model %s.", model_file);
return InferencerErrorCode::INVALID_ARGUMENT;
}

std::unique_ptr<nvinfer1::IHostMemory> model_stream(engine->serialize());
if (!model_stream || model_stream->size() == 0 || model_stream->data() == nullptr) {
if (model_stream->size() == 0 || model_stream->data() == nullptr) {
GXF_LOG_ERROR("Fail to serialize TensorRT Engine.");
return InferencerErrorCode::INVALID_ARGUMENT;
}
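The hunk above replaces the deprecated two-step build (`buildEngineWithConfig` followed by `ICudaEngine::serialize`) with the single `buildSerializedNetwork` call. A minimal sketch of the new pattern, assuming `builder`, `network`, `builderConfig`, and a `logger` are already configured as in the surrounding code (error handling abbreviated):

```cpp
// Build the serialized engine plan directly; no intermediate ICudaEngine
// is materialized, so the weights are not held twice in host memory.
std::unique_ptr<nvinfer1::IHostMemory> plan(
    builder->buildSerializedNetwork(*network, *builderConfig));
if (!plan || plan->size() == 0 || plan->data() == nullptr) {
  return InferencerErrorCode::INVALID_ARGUMENT;  // build or serialization failed
}

// The plan can be written to disk, or deserialized immediately with a runtime:
std::unique_ptr<nvinfer1::IRuntime> runtime(nvinfer1::createInferRuntime(logger));
std::unique_ptr<nvinfer1::ICudaEngine> engine(
    runtime->deserializeCudaEngine(plan->data(), plan->size()));
```

Building the serialized plan directly matches the intent of the deprecation: conversion paths that only need the plan bytes never pay for constructing a full engine object.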
@@ -284,13 +282,14 @@ TensorRTInferencer::TensorRTInferencer(const TensorRTInferenceParams& params)
}

m_hasImplicitBatch = m_inferenceEngine->hasImplicitBatchDimension();
m_bindingsCount = m_inferenceEngine->getNbBindings();
m_ioTensorsCount = m_inferenceEngine->getNbIOTensors();
if (!m_hasImplicitBatch) {
for (size_t i = 0; i < m_bindingsCount; i++) {
if (m_inferenceEngine->bindingIsInput(i)) {
nvinfer1::Dims dims_i(m_inferenceEngine->getBindingDimensions(i));
for (size_t i = 0; i < m_ioTensorsCount; i++) {
const char* name = m_inferenceEngine->getIOTensorName(i);
if (m_inferenceEngine->getTensorIOMode(name) == nvinfer1::TensorIOMode::kINPUT) {
nvinfer1::Dims dims_i(m_inferenceEngine->getTensorShape(name));
nvinfer1::Dims4 inputDims{1, dims_i.d[1], dims_i.d[2], dims_i.d[3]};
m_inferenceContext->setBindingDimensions(i, inputDims);
m_inferenceContext->setInputShape(name, inputDims);
}
}
}
Expand All @@ -299,7 +298,6 @@ TensorRTInferencer::TensorRTInferencer(const TensorRTInferenceParams& params)
if (err != make_error_code(ErrorCode::SUCCESS)) {
throw err;
}
m_buffers.resize(m_bindingsCount);
}

// Set input layer tensor
@@ -309,7 +307,8 @@ std::error_code TensorRTInferencer::setInput(const TensorBase& trtInputBuffer,
return ErrorCode::INVALID_ARGUMENT;
}
LayerInfo layer = m_modelInfo.inputLayers[inputLayerName];
m_buffers[layer.index] = trtInputBuffer.getData();
m_inferenceContext->setTensorAddress(inputLayerName.c_str(),
trtInputBuffer.getData());
return ErrorCode::SUCCESS;
}

@@ -320,7 +319,8 @@ std::error_code TensorRTInferencer::setOutput(TensorBase& trtOutputBuffer,
return ErrorCode::INVALID_ARGUMENT;
}
LayerInfo layer = m_modelInfo.outputLayers[outputLayerName];
m_buffers[layer.index] = trtOutputBuffer.getData();
m_inferenceContext->setTensorAddress(outputLayerName.c_str(),
trtOutputBuffer.getData());
return ErrorCode::SUCCESS;
}

@@ -334,18 +334,18 @@ ModelMetaData TensorRTInferencer::getModelMetaData() const {
std::error_code TensorRTInferencer::infer(size_t batchSize) {
bool err = true;
if (!m_hasImplicitBatch) {
size_t bindingsCount = m_inferenceEngine->getNbBindings();
for (size_t i = 0; i < bindingsCount; i++) {
if (m_inferenceEngine->bindingIsInput(i)) {
nvinfer1::Dims dims_i(m_inferenceEngine->getBindingDimensions(i));
nvinfer1::Dims4 inputDims{static_cast<int>(batchSize), dims_i.d[1],
dims_i.d[2], dims_i.d[3]};
m_inferenceContext->setBindingDimensions(i, inputDims);
size_t ioTensorsCount = m_inferenceEngine->getNbIOTensors();
for (size_t i = 0; i < ioTensorsCount; i++) {
const char* name = m_inferenceEngine->getIOTensorName(i);
if (m_inferenceEngine->getTensorIOMode(name) == nvinfer1::TensorIOMode::kINPUT) {
nvinfer1::Dims dims_i(m_inferenceEngine->getTensorShape(name));
nvinfer1::Dims4 inputDims{1, dims_i.d[1], dims_i.d[2], dims_i.d[3]};
m_inferenceContext->setInputShape(name, inputDims);
}
}
err = m_inferenceContext->enqueueV2(&m_buffers[0], m_cudaStream, nullptr);
err = m_inferenceContext->enqueueV3(m_cudaStream);
} else {
err = m_inferenceContext->enqueue(m_maxBatchSize, &m_buffers[0], m_cudaStream, nullptr);
return InferencerErrorCode::INVALID_ARGUMENT;
}
if (!err) {
return InferencerErrorCode::TENSORRT_INFERENCE_ERROR;
Expand All @@ -360,27 +360,14 @@ std::error_code TensorRTInferencer::setCudaStream(cudaStream_t cudaStream) {
}

std::error_code TensorRTInferencer::unregister(std::string layerName) {
size_t index;
if (m_modelInfo.outputLayers.find(layerName) != m_modelInfo.outputLayers.end()) {
index = m_modelInfo.outputLayers[layerName].index;
} else if (m_modelInfo.inputLayers.find(layerName) != m_modelInfo.inputLayers.end()) {
index = m_modelInfo.inputLayers[layerName].index;
} else {
return ErrorCode::INVALID_ARGUMENT;
}
m_buffers[index] = nullptr;
return ErrorCode::SUCCESS;
}

std::error_code TensorRTInferencer::unregister() {
for (size_t i = 0; i < m_buffers.size(); i++) {
m_buffers[i] = nullptr;
}
return ErrorCode::SUCCESS;
}

TensorRTInferencer::~TensorRTInferencer() {
m_buffers.clear();
}

} // namespace inferencer
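Taken together, the hunks above migrate this file from TensorRT's deprecated binding-index API (`getBindingIndex`, `bindingIsInput`, `enqueueV2` with a `void**` bindings array) to the name-based I/O tensor API available in TensorRT 8.5 and later. A condensed sketch of the new flow, assuming `engine`, `context`, and `stream` are already created and `deviceBuffers` is a hypothetical map from tensor name to device pointer:

```cpp
// Register each I/O tensor address on the execution context by name;
// this replaces the old index-ordered bindings vector entirely.
for (int32_t i = 0; i < engine->getNbIOTensors(); ++i) {
  const char* name = engine->getIOTensorName(i);
  if (engine->getTensorIOMode(name) == nvinfer1::TensorIOMode::kINPUT) {
    // Dynamic input dimensions must be resolved before enqueueing.
    nvinfer1::Dims dims = engine->getTensorShape(name);
    context->setInputShape(name, dims);
  }
  context->setTensorAddress(name, deviceBuffers.at(name));
}

// enqueueV3 takes only the stream; buffers were bound by name above.
bool ok = context->enqueueV3(stream);
```

Dropping the `m_buffers` member follows from this design: the execution context now owns the name-to-address mapping, so there is no index-ordered array for the inferencer to maintain or clear.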
@@ -1,19 +1,16 @@
// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
// Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// SPDX-License-Identifier: Apache-2.0
/*
* Copyright (c) 2021-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
* property and proprietary rights in and to this material, related
* documentation and any modifications thereto. Any use, reproduction,
* disclosure or distribution of this material and related documentation
* without an express license agreement from NVIDIA CORPORATION or
* its affiliates is strictly prohibited.
*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES
* SPDX-License-Identifier: LicenseRef-NvidiaProprietary
*/

#pragma once

@@ -77,9 +74,8 @@ class TensorRTInferencer : public IInferenceBackendClient {
nvinfer1::ICudaEngine* m_inferenceEngine;
std::unique_ptr<nvinfer1::ICudaEngine> m_ownedInferenceEngine;
std::unique_ptr<nvinfer1::IExecutionContext> m_inferenceContext;
size_t m_bindingsCount;
size_t m_ioTensorsCount;
ModelMetaData m_modelInfo;
std::vector<void*> m_buffers;
bool m_hasImplicitBatch;
std::vector<char> m_modelEngineStream;
size_t m_modelEngineStreamSize = 0;
29 changes: 13 additions & 16 deletions gxf_isaac_bi3d/gxf/gems/dnn_inferencer/inferencer/TensorRTUtils.h
@@ -1,19 +1,16 @@
// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
// Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// SPDX-License-Identifier: Apache-2.0
/*
* Copyright (c) 2021-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
* property and proprietary rights in and to this material, related
* documentation and any modifications thereto. Any use, reproduction,
* disclosure or distribution of this material and related documentation
* without an express license agreement from NVIDIA CORPORATION or
* its affiliates is strictly prohibited.
*
* SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES
* SPDX-License-Identifier: LicenseRef-NvidiaProprietary
*/
#pragma once

#include "NvInferRuntime.h"