Revamping Bite Selection UI #17
As mentioned by Raida, if the final bite selection UI involves users scrolling, then important buttons like "Done Eating" should appear at both the top and the bottom so users don't have to scroll back. Ideally, though, the UI should be designed not to require scrolling in the first place.
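As a minimal sketch of the duplicated-button idea, the same control can simply be rendered above and below the scrollable content. This is illustrative only; the component and prop names below are hypothetical and not the app's actual code.

```tsx
// Hypothetical sketch: render the same "Done Eating" control at the top
// and bottom of a scrollable region, so it is reachable without scrolling
// back. All names here are illustrative, not the app's real components.
import React from "react";

export function ScrollableSelection({
  children,
  onDoneEating,
}: {
  children: React.ReactNode;
  onDoneEating: () => void;
}) {
  const doneButton = <button onClick={onDoneEating}>Done Eating</button>;
  return (
    <div>
      {doneButton /* visible before the user scrolls */}
      {children}
      {doneButton /* visible once the user reaches the end */}
    </div>
  );
}
```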
Here is a slidedeck with some initial thoughts of mine; Amal (@amalnanavati) left feedback for improvement in the comments on it. I currently don't have time to actively work on this issue, so if anyone on the web app team wants to take it, please feel free to assign it to yourself. Otherwise, I can assign it to myself once I am done with my current issues.
I actually think we should keep this open until we implement the revamped UI in the actual app (with the ROS dummy node(s)). The mock-ups for the user study are one crucial part, but I see this issue as also encompassing the final implementation.
Tyler indicated that he, and likely most users, would prefer Option B. As such, we will be working on issue #44, so I am closing this issue.
Currently, the bite selection UI shows buttons with the names of food items, and the user selects one of those buttons. This requires the perception system to be able to name the food items.
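For concreteness, here is a minimal sketch of what the current name-based UI amounts to, assuming a React web app; the component and prop names are hypothetical, not the app's actual code.

```tsx
// Hypothetical sketch of the current name-based bite selection UI:
// the perception system supplies food item names, and the user picks one.
// Component and prop names are illustrative only.
import React from "react";

interface BiteSelectionProps {
  // Food item names reported by the perception system, e.g. ["grape", "carrot"]
  foodItems: string[];
  // Called with the name of the item the user chose
  onSelect: (item: string) => void;
}

export function BiteSelection({ foodItems, onSelect }: BiteSelectionProps) {
  return (
    <div>
      {foodItems.map((item) => (
        <button key={item} onClick={() => onSelect(item)}>
          {item}
        </button>
      ))}
    </div>
  );
}
```

Note that this design breaks down whenever the perception system cannot confidently name an item, which is the motivation for the image-based alternatives below.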
Instead, for the MVP, we'd like to switch to a system where an image of the plate is displayed, with one of the following options for UI:
Part of this depends on perception capabilities (e.g., can the perception system take a single point from a user click and segment the object that the click was on?), part on robot capabilities (e.g., can the robot acquire a bite from an arbitrary bounding box the user places?), and part on user preferences (which of these UIs does the user prefer?). Therefore, we don't yet know which option we will go with. However, someone can still make progress by implementing all 3 UIs so we can later decide which works best for the technology stack and for users. Further, once all 3 UIs are implemented, we could potentially run a small user study to identify which one(s) users prefer.
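Whichever option we choose, the web app will need to translate a click on the plate image into coordinates the perception system can use. A minimal sketch of that piece, assuming a React front end, is below; the `onPointSelected` callback stands in for whatever ROS bridge call the app actually uses, and all names are illustrative.

```tsx
// Hypothetical sketch: convert a user click on the plate image into
// normalized (0-1) image coordinates that a point-based segmentation
// service could consume. Normalizing makes the result independent of
// the rendered image size. All names here are illustrative.
import React from "react";

interface PlateImageProps {
  imageUrl: string;
  // Receives normalized (x, y) in [0, 1]; forwarding to perception
  // (e.g., via a ROS bridge) is left to the caller.
  onPointSelected: (x: number, y: number) => void;
}

export function PlateImage({ imageUrl, onPointSelected }: PlateImageProps) {
  const handleClick = (e: React.MouseEvent<HTMLImageElement>) => {
    const rect = e.currentTarget.getBoundingClientRect();
    const x = (e.clientX - rect.left) / rect.width;
    const y = (e.clientY - rect.top) / rect.height;
    onPointSelected(x, y);
  };

  return <img src={imageUrl} alt="Plate" onClick={handleClick} />;
}
```

The bounding-box variant would differ only in capturing a drag (two corner points) instead of a single click, so most of this handling could be shared across the 3 UIs.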