Users and caregivers need a mental model of the robot's future motion to decide how best to interact with it. For example:
Consider the case where there is food on the fork that the user doesn't want to eat, and they call a caregiver to remove it. There should be a clear indication of whether the robot will move soon (e.g., is it fully stationary, or only momentarily stationary because it is planning?) so the caregiver can decide whether it is safe to approach. See Atharva's comment here.
Consider the case where the robot is getting close enough to the user that they are getting worried, but not so close that they feel in imminent danger. At that point, they would like to know whether the robot will stop soon or keep moving, so they can decide whether to press the e-stop.
In both cases, the crucial point is that users should receive a visual indication of the robot's near-future motion.
One idea is the following:
On screens where the robot will not move, state that on the screen so users know it is safe to approach the robot.
On screens where the robot will move, show a progress bar so users know roughly how much longer they can expect the robot to move. A further improvement would be to render the progress bar in one color during planning time and another color during motion time, so users get a very clear sense of what proportion of the remaining time the robot will actually be moving; a sketch of this two-phase bar follows this list.
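To make the two-phase idea concrete, here is a minimal TypeScript sketch of a progress bar that changes color when the robot transitions from planning (stationary) to moving. All names here (`TwoPhaseProgressBar`, `PhaseEstimate`) and the styling are illustrative assumptions, not the web app's actual API; a real implementation would pull the time estimates from the robot rather than hard-coding them.

```typescript
// Minimal sketch of a two-phase progress bar. All names (TwoPhaseProgressBar,
// PhaseEstimate) and styling choices are illustrative assumptions, not the
// web app's actual API.

interface PhaseEstimate {
  planningSecs: number; // estimated time the robot will spend planning (stationary)
  motionSecs: number;   // estimated time the robot will spend physically moving
}

class TwoPhaseProgressBar {
  private fill: HTMLDivElement;
  private startMs = 0;
  private estimate: PhaseEstimate = { planningSecs: 0, motionSecs: 0 };

  constructor(container: HTMLElement) {
    const track = document.createElement("div");
    track.style.cssText =
      "width:100%;height:12px;background:#ddd;border-radius:6px;overflow:hidden";
    this.fill = document.createElement("div");
    this.fill.style.cssText = "height:100%;width:0%";
    track.appendChild(this.fill);
    container.appendChild(track);
  }

  /** Start the bar; call when the robot begins planning a motion. */
  begin(estimate: PhaseEstimate): void {
    this.estimate = estimate;
    this.startMs = performance.now();
    requestAnimationFrame(this.tick);
  }

  private tick = (): void => {
    const { planningSecs, motionSecs } = this.estimate;
    const totalSecs = Math.max(planningSecs + motionSecs, 1e-6);
    const elapsedSecs = (performance.now() - this.startMs) / 1000;
    const frac = Math.min(elapsedSecs / totalSecs, 1);
    // Blue while the (stationary) robot is planning, orange once it is moving,
    // so the color change marks the moment motion actually begins.
    this.fill.style.background =
      elapsedSecs < planningSecs ? "#2a7de1" : "#e8833a";
    this.fill.style.width = `${frac * 100}%`;
    if (frac < 1) requestAnimationFrame(this.tick);
  };
}

// Example usage (the 2s/5s estimates are made up; a real implementation
// would obtain them from the robot's planner):
// const bar = new TwoPhaseProgressBar(document.body);
// bar.begin({ planningSecs: 2, motionSecs: 5 });
```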
Here is a slide deck with some initial thoughts of mine. The comments on the slide deck include feedback for improvement from Amal @amalnanavati. Here is a JavaScript progress bar library that could be useful for implementation.
I currently don't have time to actively work on this issue, so if anyone on the web app team wants to assign it to themselves, please feel free to. Otherwise, I can assign it to myself once I am done with my current issues.