Expressly destroy a node's objects before the node.
This seems to reduce hangs during test runs described in
ros2/build_farmer#248.

The handles corresponding to the destroyed objects *should* be destroyed
explicitly when self.handle.destroy() is called below. It seems, however,
that when running with Fast-RTPS it is possible to get into a state where
multiple threads are waiting on futexes and none can move forward. On the
rclpy side, this apparent deadlock shows up while clearing a node's list
of publishers or services (possibly others, though publishers and
services were the only ones observed).

I consider this patch a workaround rather than a fix. There may be a
race condition between the rcl/rmw layer and the rmw implementation that
is tripped by the haphazard timing of Python's garbage collector, or
there may be a logical problem with the handle destruction ordering in
rclpy that only Fast-RTPS is sensitive to.
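For context, a minimal sketch of the teardown path this change hardens,
assuming a typical rclpy script; the node and topic names below are
illustrative and not part of this commit:

import rclpy
from std_msgs.msg import String  # assumes std_msgs is available

rclpy.init()
node = rclpy.create_node('teardown_example')              # hypothetical node name
publisher = node.create_publisher(String, 'chatter', 10)  # hypothetical topic

# Explicit teardown: destroy_node() is where the publisher's handle should
# be released before the node's own handle, instead of leaving that
# ordering to whenever Python's garbage collector happens to run.
node.destroy_node()
rclpy.shutdown()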

Signed-off-by: Steven! Ragnarök <[email protected]>
nuclearsandwich committed Nov 7, 2019
1 parent e566f3e commit 30600a1
Showing 1 changed file with 12 additions and 6 deletions.
18 changes: 12 additions & 6 deletions rclpy/rclpy/node.py
@@ -1461,12 +1461,18 @@ def destroy_node(self) -> bool:
     # It will be destroyed with other publishers below.
     self._parameter_event_publisher = None

-    self.__publishers.clear()
-    self.__subscriptions.clear()
-    self.__clients.clear()
-    self.__services.clear()
-    self.__timers.clear()
-    self.__guards.clear()
+    while self.__publishers:
+        self.destroy_publisher(self.__publishers[0])
+    while self.__subscriptions:
+        self.destroy_subscription(self.__subscriptions[0])
+    while self.__clients:
+        self.destroy_client(self.__clients[0])
+    while self.__services:
+        self.destroy_service(self.__services[0])
+    while self.__timers:
+        self.destroy_timer(self.__timers[0])
+    while self.__guards:
+        self.destroy_guard_condition(self.__guards[0])
     self.handle.destroy()
     self._wake_executor()

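The while/drain form above (rather than a for loop over the same list)
matters because each destroy_* call removes the destroyed object from
the node's internal list, and iterating a list while removing from it
silently skips elements. A minimal standalone illustration of that
pitfall, plain Python rather than rclpy code:

# Removing elements from a list while iterating over it skips every
# other element.
items = ['a', 'b', 'c', 'd']
for item in items:
    items.remove(item)        # mutates the list mid-iteration
print(items)                  # ['b', 'd'] -- 'b' and 'd' were never visited

# Draining the list with a while loop visits every element exactly once.
items = ['a', 'b', 'c', 'd']
while items:
    items.remove(items[0])
print(items)                  # []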
