<!DOCTYPE HTML>
<html lang="en-US">
<head>
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-DCC2X7SZBT"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'G-DCC2X7SZBT');
</script>
<title>fluent robotics lab</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<link rel="stylesheet" href="/style.css" >
<meta name="description" content="Fluent Robotics Lab | University of Michigan">
<meta name="keywords" content="robotics, human-robot interaction, multiagent systems, artificial intelligence">
<link rel="shortcut icon" type="image/png" href="images/favicon.png">
</head>
<body>
<a href="index"><h1 id="name"> <span id="the">the</span><span id="fluent">fluent</span><span id="robotics">robotics</span><span id="the">lab</span> </h1></a>
<div id="buttons">
<a href="research" class="link" style = "color : #FFCB05"> research </a>
<a href="people" class="link"> people </a>
<a href="publications" class="link"> publications </a>
<a href="videos" class="link"> videos </a>
</div>
<table style="padding-left: 50px; margin: 10px 10px 10px 10px;">
<tr>
<td style="padding-left: 0px;padding-top: 0px;max-width:720px;vertical-align:left; text-align: left;">
<p>Our goal is to enable robots to enhance productivity by fluently working with and around humans in dynamic, multiagent environments like manufacturing sites, warehouses, hospitals, and the home. These environments are complex: humans perform a variety of time-critical tasks, ranging from machine operation to inspection. To be truly helpful, robots need to account for human safety, comfort, and efficiency as they complete their own tasks. Our research contributes a variety of tools to approach this goal, including planning and prediction algorithms informed by mathematical insights and models of human behavior, intuitive human-AI interfaces, and highly dexterous robot hardware, all of which are evaluated in extensive experiments with human subjects. To this end, we work across the following thrusts.</p>
</td>
</tr>
<tr>
<td style="padding-left: 0px;padding-top: 0px;max-width:720px;vertical-align:left; text-align: left;">
<p><img class = "images" src="/tn/images/social-nav.png" alt="attributions" style = "align : top; float : left; margin-right:20px; margin-bottom:20px; border-radius: 0%"></p>
<p><small><b>Crowd navigation experiments. Left: instance from our experiments at UW, featuring Honda’s experimental ballbot, on the transfer of human motion prediction models to social robot navigation [<a href="pdfs/poddar2023pred2nav.pdf">IROS23</a>]. Middle: instance from our experiments evaluating our topology-aware MPC [<a href="pdfs/mavrogiannis2023winding.pdf">RAL23</a>]. Right: video from our crowd navigation user study at Cornell [<a href="pdfs/mavrogiannis2019effects.pdf">HRI19</a>][<a href="pdfs/mavrogiannis2022socialmomentum.pdf">THRI22</a>].</b></small></p>
</td>
</tr>
<tr>
<td style="padding-left: 0px;padding-top: 0px;max-width:720px;vertical-align:left; text-align: left;">
<h3><b>Social robot navigation</b></h3>
<p> The ability to navigate within human crowds is essential for robots completing important tasks like delivery in dynamic environments. Using insights from the social sciences, such as the "pedestrian bargain" [<a href="https://journals.sagepub.com/doi/10.1177/089124195024003004">WOL95</a>] and Gestalt theories, tools from low-dimensional topology [<a href="pdfs/mavrogiannis2019topology.pdf">IJRR19</a>][<a href="pdfs/mavrogiannis2021hamiltonian.pdf">IJRR21</a>], and technologies like machine learning [<a href="pdfs/mavrogiannis2017learningtopology.pdf">IROS17</a>][<a href="pdfs/roh2020multimodal.pdf">CoRL20</a>][<a href="pdfs/wang2021groupbased.pdf">CoRL21a</a>] and control [<a href="pdfs/mavrogiannis2018socialmomentum.pdf">HRI18</a>][<a href="pdfs/mavrogiannis2023winding.pdf">RAL23</a>], our work seeks to capture fundamental properties of multiagent interaction, like cooperation [<a href="pdfs/mavrogiannis2019topology.pdf">IJRR19</a>] and grouping [<a href="pdfs/wang2021groupbased.pdf">CoRL21a</a>], to guide prediction and planning for navigation in complex multiagent domains [<a href="pdfs/mavrogiannis2022analyzing.pdf">ICRA22</a>][<a href="pdfs/mavrogiannis2022implicit.pdf">WAFR22</a>]. Our algorithms have been deployed on multiple real robots [<a href="pdfs/mavrogiannis2022socialmomentum.pdf">THRI22</a>][<a href="https://arxiv.org/pdf/2109.05084.pdf">RAL23</a>], generating safe and efficient behavior that is positively perceived by humans. There is still a lot of work to be done to ensure safety, efficiency, and comfort within complex human environments, as we detail in our recent survey [<a href="pdfs/mavrogiannis2023corechallenges.pdf">THRI23</a>].</p>
</td>
</tr>
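<tr>
<td style="padding-left: 0px;padding-top: 0px;max-width:720px;vertical-align:left; text-align: left;">
<p><small>For readers curious about what such a topological feature looks like in practice, here is a minimal, illustrative sketch (not the implementation from any of the papers above): it computes the pairwise winding number of two planar trajectories with numpy, i.e., the accumulated relative bearing divided by 2π, whose sign and magnitude summarize which side one agent passes the other on.</small></p>
<pre><code>import numpy as np

def winding_number(traj_a, traj_b):
    """Signed number of revolutions of agent A around agent B.

    traj_a, traj_b: (T, 2) arrays of planar positions sampled at the same
    timestamps. The result is the accumulated change of the relative bearing
    divided by 2*pi; its sign encodes which side A passes B on.
    """
    rel = traj_a - traj_b                      # relative displacement over time
    theta = np.arctan2(rel[:, 1], rel[:, 0])   # relative bearing at each step
    dtheta = np.diff(theta)
    # wrap increments to [-pi, pi) so crossing the branch cut is not
    # mistaken for a full extra turn
    dtheta = (dtheta + np.pi) % (2.0 * np.pi) - np.pi
    return float(dtheta.sum() / (2.0 * np.pi))

# Toy example: A walks past a stationary B with a small lateral offset.
# The magnitude approaches 0.5 (half a revolution), and the sign flips if
# A passes on B's other side.
t = np.linspace(0.0, 1.0, 100)
a = np.stack([-10.0 + 20.0 * t, 0.5 * np.ones_like(t)], axis=1)
b = np.zeros_like(a)
print(round(winding_number(a, b), 2))   # approx. -0.48
</code></pre>
</td>
</tr>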
<tr>
<td style="padding-left: 0px;padding-top: 0px;max-width:720px;vertical-align:left; text-align: left;">
<p><img class = "images" src="/tn/images/braids.png" alt="attributions" style = "align : top; float : left; margin-right:20px; margin-bottom:20px; border-radius: 0%"></p>
<p><small><b>We employ topological braids to model complex multiagent behavior. On the left: a planning framework that uses braids to account for cooperative collision avoidance in discrete worlds [<a href="pdfs/mavrogiannis2019topology.pdf">IJRR19</a>]. On the right: braids can succinctly summarize complex real-world traffic [<a href="pdfs/mavrogiannis2022analyzing.pdf">ICRA22</a>].</b></small></p>
</td>
</tr>
<tr>
<td style="padding-left: 0px;padding-top: 0px;max-width:720px;vertical-align:left; text-align: left;">
<p><img class = "images" src="/tn/images/driving.png" alt="attributions" style = "align : top; float : left; margin-right:20px; margin-bottom:20px; border-radius: 0%"></p>
<p><small><b>Algorithmic frameworks leveraging topological representations for modeling multiagent dynamics. On the left: a planning framework that treats decentralized navigation as implicit communication of topological information, encoded as braids in traffic scenes [<a href="/pdfs/mavrogiannis2023abstracting.pdf">IJRR23</a>]. On the right: a prediction architecture that conditions trajectory reconstruction on the likelihood of topological modes, identified using winding numbers [<a href="pdfs/roh2020multimodal.pdf">CoRL20</a>].</b></small></p>
</td>
</tr>
<tr>
<td style="padding-left: 0px;padding-top: 0px;max-width:720px;vertical-align:left; text-align: left;">
<h3><b>Modeling humans to build interactive robots</b></h3>
<p>To be truly accepted, robots must abide by human expectations. Our work has leveraged insights from <b>psychology</b>, such as humans’ “obsession with goals” [<a href="https://pubmed.ncbi.nlm.nih.gov/17081489/">CSI06</a>] and the “presentation of self in everyday life” [<a href="https://books.google.com/books/about/The_Presentation_of_Self_in_Everyday_Lif.html?id=Sdt-cDkV8pQC">GOF59</a>], to formalize implicit communication [<a href="pdfs/knepper2017implicit.pdf">HRI17</a>] and produce <b>interpretable</b> motion that conveys the robot’s collision-avoidance strategy [<a href="pdfs/mavrogiannis2018socialmomentum.pdf">HRI18</a>] or how the robot makes decisions [<a href="pdfs/walker2021influencing.pdf">CoRL21b</a>]. When deployed in real-world environments, robots will often encounter failures they cannot recover from by themselves. On those occasions, robots can leverage help from bystanders to keep going [<a href="pdfs/nanavatiwalker2022wandering.pdf">HRI22</a>]. However, if robots are to keep receiving help in the long run, they need to moderate their requests; our planning framework reasons about contextual and individual factors when issuing requests for localization help [<a href="pdfs/nanavati2021modeling.pdf">RSS21</a>].</p>
</td>
</tr>
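<tr>
<td style="padding-left: 0px;padding-top: 0px;max-width:720px;vertical-align:left; text-align: left;">
<p><small>As a rough, hypothetical illustration of the trade-off behind moderating help requests (not the planning model from [RSS21]), the toy rule below asks for localization help only when the expected cost of staying lost outweighs the cost of interrupting a bystander, with that cost growing as recent requests accumulate.</small></p>
<pre><code>def should_ask_for_help(p_lost, ask_cost, recovery_cost, recent_requests,
                        annoyance_weight=1.0):
    """Toy decision rule: ask only when the expected cost of remaining
    mislocalized outweighs the (growing) social cost of another request.

    p_lost:          current belief that the robot is mislocalized (0 to 1)
    ask_cost:        baseline social/time cost of issuing one request
    recovery_cost:   expected cost of continuing while lost
    recent_requests: number of recent requests; repeated requests are
                     penalized so that bystanders are not overburdened
    """
    effective_ask_cost = ask_cost * (1.0 + annoyance_weight * recent_requests)
    return p_lost * recovery_cost > effective_ask_cost

# A fairly uncertain robot that has not asked anyone yet: worth asking.
print(should_ask_for_help(p_lost=0.6, ask_cost=1.0, recovery_cost=5.0,
                          recent_requests=0))   # True
# The same robot after three requests in a row: keep trying on its own.
print(should_ask_for_help(p_lost=0.6, ask_cost=1.0, recovery_cost=5.0,
                          recent_requests=3))   # False
</code></pre>
</td>
</tr>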
<tr>
<td style="padding-left: 0px;padding-top: 0px;max-width:720px;vertical-align:left; text-align: left;">
<p><img class = "images" src="/tn/images/walker_CoRL21_framework.png" alt="attributions" style = "align : top; float : left; margin-right:20px; margin-bottom:20px; border-radius: 0%"></p>
<p><small><b>Our active-learning framework for modeling a map of robot trajectories to likely behavioral attributions from a human observer [<a href="https://openreview.net/pdf?id=UIaodSPHNFN">CoRL21b</a>][<a href="https://attributions.nickwalker.us">project website</a>].</b></small></p>
</td>
</tr>
<tr>
<td style="padding-left: 0px;padding-top: 0px;max-width:720px;vertical-align:left; text-align: justify;">
<p><img class = "images" src="/tn/images/asking-for-help.png" alt="attributions" style = "align : top; float : left; margin-right:20px; margin-bottom:20px; border-radius: 0%"></p>
<p><small><b>Asking for help. On the left: planning under uncertainty to determine when to ask a user for localization help [<a href="pdfs/nanavati2021modeling.pdf">RSS21</a>]. On the right: a system that wanders real-world environments by leveraging human help [<a href="pdfs/nanavatiwalker2022wandering.pdf">HRI22</a>].</b></small></p>
</td>
</tr>
<tr>
<td style="padding-left: 0px;padding-top: 0px;max-width:720px;vertical-align:left; text-align: left;">
<h3><b>Dexterous robots that interact with their environment</b></h3>
<p>Many important tasks like assembly and delivery require dexterous robots capable of robustly interacting with their environment, autonomously or through human feedback. Our research has looked at the design of underactuated mechanisms [<a href="pdfs/zisimatos2014robothands.pdf">IROS14</a>], whose capabilities can be enhanced through braking technology [<a href="pdfs/kontoudis2015prosthetic.pdf">IROS15</a>][<a href="https://arxiv.org/pdf/2204.02460.pdf">preprint22</a>], as well as the development of grasp planning algorithms [<a href="pdfs/mavrogiannis2013sequential.pdf">ICRA13</a>][<a href="pdfs/mavrogiannis2014taskspecific.pdf">ICRA14</a>] and sensing technologies [<a href="pdfs/lancaster2022optical.pdf">IROS22</a>]. This research has been informed by work on understanding human dexterity [<a href="pdfs/ke2020teleop.pdf">IROS20</a>][<a href="pdfs/mavrogiannis2015anthropomorphism.pdf">IROS15</a>]. On many occasions, human situational awareness can augment robot capabilities, as we showed in complex tasks like chopstick manipulation [<a href="pdfs/ke2020teleop.pdf">IROS20</a>]. Humans have also developed sophisticated nonprehensile manipulation strategies, like pushing, to complete complex tasks in the real world. Inspired by them, we built a multirobot pushing system capable of rearranging cluttered workspaces [<a href="/pdfs/taliathareja2023pushr.pdf">IROS23</a>].</p>
</td>
</tr>
<tr>
<td style="padding-left: 0px;padding-top: 0px;max-width:720px;vertical-align:left; text-align: left;">
<p><img class = "images" src="/tn/images/dexterous.png" alt="attributions" style = "align : top; float : left; margin-right:20px; margin-bottom:20px; border-radius: 0%"></p>
<p><small><b>Human-inspired manipulation. Left: human situational awareness enables the completion of challenging tasks like chopstick manipulation through a teleoperation system [<a href="https://personalrobotics.cs.washington.edu/publications/ke2020teleop.pdf">IROS20</a>]. Right: a multirobot system that generates push-based manipulation plans to reconfigure cluttered workspaces [<a href="/pdfs/taliathareja2023pushr.pdf">IROS23</a>].</b></small></p>
</td>
</tr>
<tr>
<td style="padding-left: 0px;padding-top: 0px;max-width:720px;vertical-align:left; text-align: left;">
<p><img class = "images" src="/tn/images/hands.png" alt="attributions" style = "align : top; float : left; margin-right:20px; margin-bottom:20px; border-radius: 0%"></p>
<p><small><b>Enhancing dexterity via braking technology. Left: electrostatic braking empowers a 10-link robot to perform complex manipulation maneuvers [<a href="https://arxiv.org/pdf/2204.02460.pdf">preprint22</a>]. Right: A brake-equipped underactuated hand is capable of performing complex rolling tasks, leveraging electrostatic braking and proximity sensing [<a href="pdfs/lancaster2022optical.pdf">IROS22</a>].</b></small></p>
</td>
</tr>
<tr>
<td style="padding-left: 0px;padding-top: 0px;max-width:720px;vertical-align:left; text-align: left;">
<p>Common to all of our research is a unifying philosophy: We combine mathematical insights with data-driven techniques to introduce structure and interpretability into models of complex behavior. Using such models, we have transferred behaviors from simulation to the real world in several domains, including robot navigation in crowds, multirobot coordination, and manipulation. We strive to open-source our code and datasets on <a href="https://github.com/fluentrobotics">GitHub</a>.</p>
</td>
</tr>
</table>
<div id="text">
<hr style="width:100%;margin-left:0">
</div>
<div id="academicbuttons">
<a href="https://robotics.umich.edu/"><img src="images/MRobotics_informal_outlines_digital.svg" height="32" class="footer"/></a>
<a href="https://github.com/fluentrobotics"><img src="images/github_icon.png" alt="Github" height="32"></a>
<a href="http://www.twitter.com/fluentrobotics"> <img src="images/twitter_icon.png" alt="Twitter" height="32;"></a>
<a href="http://www.instagram.com/fluentrobotics"><img src="images/instagram_icon.png" alt="Instagram" height="32"></a>
<a href="https://maps.studentlife.umich.edu/building/ford-robotics-building"><img src="images/pin-icon.png" height="32" class="footer"/></a>
</div>
<div id="text">
© 2023-∞ by the Fluent Robotics Lab; source code adapted from <a href="https://github.com/leonidk/leonidk.github.io">here</a>.<br>
</div>
</body>
</html>