Ports moveit #3676 and #3682 #3283

Merged: 5 commits merged into moveit:main on Feb 6, 2025

Conversation

@rr-mark (Contributor) commented Jan 30, 2025

Description

Checklist

  • Required by CI: Code is auto formatted using clang-format
  • Extend the tutorials / documentation reference
  • Document API changes relevant to the user in the MIGRATION.md notes
  • Create tests, which fail without this PR reference
  • Include a screenshot if changing a GUI
  • While waiting for someone to review your request, please help review another open pull request to support the maintainers

rr-mark and others added 3 commits January 30, 2025 13:24
* Use separate callback queue + spinner for ExecuteTrajectoryAction (#3676)

This allows parallel execution + planning.

Also required modifying updateSceneWithCurrentState() to allow skipping a scene update with a new robot state (from CurrentStateMonitor), if the planning scene is currently locked (due to planning).
Otherwise, the CurrentStateMonitor would block too.

Co-authored-by: Robert Haschke <[email protected]>
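
For illustration, a minimal ROS 2 sketch of the pattern this port applies (the function and variable names here are hypothetical, not the actual MoveIt code): the action's callbacks get their own callback group, served by a dedicated executor thread, so long-running execution callbacks no longer share a queue with planning callbacks.

```cpp
#include <rclcpp/rclcpp.hpp>
#include <memory>
#include <thread>

// Hypothetical sketch: run the execute-trajectory action's callbacks on a
// dedicated executor thread instead of the node's default callback queue.
void setupDedicatedSpinner(const rclcpp::Node::SharedPtr& node)
{
  // Create a callback group that is NOT picked up by the node's default executor.
  auto exec_group = node->create_callback_group(rclcpp::CallbackGroupType::MutuallyExclusive,
                                                /*automatically_add_to_executor_with_node=*/false);

  // A separate executor serves only this group, so execution cannot block planning.
  auto executor = std::make_shared<rclcpp::executors::SingleThreadedExecutor>();
  executor->add_callback_group(exec_group, node->get_node_base_interface());

  // In real code this thread would be joined on shutdown; detached here for brevity.
  std::thread([executor] { executor->spin(); }).detach();

  // The action server would then be created with `exec_group` so its goal,
  // cancel, and result callbacks all run on the dedicated thread.
}
```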
* PSM: simplify state_update_pending_ (#3682)

* Move update of state_update_pending_ to updateSceneWithCurrentState()

* Revert to try_lock

While there are a few other locks besides explicit user locks (getPlanningSceneServiceCallback(), collisionObjectCallback(), attachObjectCallback(), newPlanningSceneCallback(), and scenePublishingThread()), these occur rather seldom (scenePublishingThread() publishes at 2 Hz).

Hence, we balance a non-blocking CSM against occasionally missed PS updates and decide in favour of the CSM.
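
A hedged sketch of the behaviour described above (simplified, with illustrative names; not the verbatim MoveIt implementation): a state-driven update tries the lock and bails out instead of blocking.

```cpp
#include <mutex>
#include <shared_mutex>

// Illustrative sketch: skip a state-driven scene update rather than blocking
// when the planning scene is currently locked (e.g. while a planner holds it).
bool updateSceneWithCurrentState(std::shared_mutex& scene_update_mutex, bool skip_update_if_locked)
{
  std::unique_lock<std::shared_mutex> lock(scene_update_mutex, std::defer_lock);
  if (skip_update_if_locked)
  {
    if (!lock.try_lock())
      return false;  // scene busy: drop this update so the CSM stays responsive
  }
  else
  {
    lock.lock();  // blocking variant for callers that must not miss an update
  }
  // ... copy the latest CurrentStateMonitor state into the planning scene ...
  return true;
}
```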

* Don't block for a scene update from stateUpdateTimerCallback either

The timer callback and CSM's state update callbacks are served from the same callback queue, so blocking there would stall the CSM again.
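
Continuing the sketch above (reusing its illustrative updateSceneWithCurrentState and mutex), the timer path simply takes the same non-blocking branch:

```cpp
#include <shared_mutex>

std::shared_mutex scene_update_mutex_;  // same illustrative mutex as in the sketch above

// The timer callback and the CurrentStateMonitor's joint-state callbacks share
// a single-threaded callback queue; if this call blocked on the scene lock,
// every state update queued behind it would stall as well.
void stateUpdateTimerCallback()
{
  updateSceneWithCurrentState(scene_update_mutex_, /*skip_update_if_locked=*/true);
}
```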

* Further locking adaptations

Reading dt_state_update_ and last_robot_state_update_wall_time_ without a lock does not lead to logic errors, but at most to a skipped or redundant update on corrupted data. Alternatively, we could be on the safe side and turn both variables into std::atomic, but that would effectively mean a lock on every read.

Instead, only declare state_update_pending_ as an atomic, which is lock-free in this case.

Co-authored-by: Michael Görner <[email protected]>
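
In sketch form (member names taken from the commit message; the types are illustrative, not the verbatim MoveIt members), the final arrangement looks like:

```cpp
#include <atomic>

// Only the pending flag becomes atomic; the timing members stay plain because a
// stale or torn read costs at most one skipped or redundant update.
std::atomic<bool> state_update_pending_{ false };  // written by CSM callbacks, read by the timer

double dt_state_update_ = 0.0;                    // plain member: unlocked reads are tolerable
double last_robot_state_update_wall_time_ = 0.0;  // plain for the same reason (a wall-clock stamp)

// Mirrors the commit's note that the flag is lock-free here:
static_assert(std::atomic<bool>::is_always_lock_free,
              "state_update_pending_ is expected to be a lock-free flag");
```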
@rr-mark force-pushed the ports_moveit_#3676_#3682 branch from ea4d332 to 7a7fd55 on January 30, 2025 13:25
@codecov-commenter commented Jan 30, 2025

Codecov Report

Attention: Patch coverage is 88.57143% with 4 lines in your changes missing coverage. Please review.

Project coverage is 45.58%. Comparing base (fbdd8c5) to head (98126c0).
Report is 1 commit behind head on main.

Files with missing lines | Patch % | Lines
moveit_ros/planning/planning_scene_monitor/src/planning_scene_monitor.cpp | 83.34% | 4 Missing ⚠️


Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3283      +/-   ##
==========================================
- Coverage   45.60%   45.58%   -0.01%     
==========================================
  Files         716      716              
  Lines       62388    62375      -13     
  Branches     7547     7545       -2     
==========================================
- Hits        28446    28428      -18     
- Misses      33776    33779       +3     
- Partials      166      168       +2     


@sea-bass (Contributor) left a comment

LGTM!

@sea-bass added the backport-humble and backport-jazzy labels (Mergify labels that trigger PR backports to Humble and Jazzy) Feb 4, 2025
@rr-mark (Contributor, Author) commented Feb 4, 2025

Looks like another sporadic test failure

@sea-bass (Contributor) commented Feb 4, 2025

> Looks like another sporadic test failure

Yeah, it's pretty bad lately. I tried looking into this briefly over the weekend, but on my machine even the most basic Pilz unit test always exits with an error due to bad memory freeing of shared pointers.

@sea-bass sea-bass added this pull request to the merge queue Feb 6, 2025
Merged via the queue into moveit:main with commit ba35aaa Feb 6, 2025
8 of 9 checks passed
mergify bot pushed a commit that referenced this pull request Feb 6, 2025
* Use separate callback queue + spinner for ExecuteTrajectoryAction (#3676)
* PSM: simplify state_update_pending_ (#3682)
* Ports changes to ROS2.

(cherry picked from commit ba35aaa)

# Conflicts:
#	moveit_ros/move_group/src/default_capabilities/execute_trajectory_action_capability.cpp
#	moveit_ros/planning/planning_scene_monitor/include/moveit/planning_scene_monitor/planning_scene_monitor.hpp
#	moveit_ros/planning/planning_scene_monitor/src/planning_scene_monitor.cpp
mergify bot pushed a commit that referenced this pull request Feb 6, 2025
* Use separate callback queue + spinner for ExecuteTrajectoryAction (#3676)
* PSM: simplify state_update_pending_ (#3682)
* Ports changes to ROS2.

(cherry picked from commit ba35aaa)
sea-bass pushed a commit that referenced this pull request Feb 6, 2025
* Use separate callback queue + spinner for ExecuteTrajectoryAction (#3676)
* PSM: simplify state_update_pending_ (#3682)
* Ports changes to ROS2.

(cherry picked from commit ba35aaa)

Co-authored-by: Mark Johnson <[email protected]>
sea-bass pushed a commit that referenced this pull request Feb 6, 2025
* Ports moveit #3676 and #3682 (#3283)

* Use separate callback queue + spinner for ExecuteTrajectoryAction (#3676)
* PSM: simplify state_update_pending_ (#3682)
* Ports changes to ROS2.

(cherry picked from commit ba35aaa)

* Resolves merge conflicts. (#3322)

* Resolves merge conflicts.

* Undoes erroneous auto-format.

* Undoes erroneous auto-format.

---------

Co-authored-by: Mark Johnson <[email protected]>