<!DOCTYPE html>
<!--
Plain-Academic by Vasilios Mavroudis
Released under the Simplified BSD License/FreeBSD (2-clause) License.
https://github.com/mavroudisv/plain-academic
-->
<html lang="en">
<head>
<title>Rohan Banerjee</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.0/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js"></script>
<link href='https://fonts.googleapis.com/css?family=Oswald:700' rel='stylesheet' type='text/css'>
</head>
<body>
<!-- Navigation -->
<nav class="navbar navbar-inverse navbar-static-top" role="navigation">
<div class="container">
<div class="navbar-header">
<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
</div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav">
<li><a href="index.html">Home</a></li>
<li><a href="#education">Education</a></li>
<li><a href="#publications">Publications</a></li>
<li><a href="#projects">Projects</a></li>
<li><a href="#teaching">Teaching</a></li>
<li><a href="docs/Banerjee_CV.pdf">CV</a></li>
</ul>
</div>
</div>
</nav>
<!-- Page Content -->
<div class="container">
<div class="row">
<!-- Entries Column -->
<div class="col-md-8" style="height: 150vh;">
<!-- Main Image -->
<img class="img-responsive" src="pics/profilepic.jpeg" width="300" alt="Portrait of Rohan Banerjee"><br>
<div style="margin-top:3%; text-align:justify;">
<p> I am a fourth-year PhD candidate in the Computer Science department at <a href="http://www.cs.cornell.edu">Cornell University</a>, advised by <a href="https://sites.google.com/site/tapomayukh">Prof. Tapomayukh Bhattacharjee</a>
and <a href="https://sdean.website/">Prof. Sarah Dean</a>. My research lies at the intersection of machine learning for sequential decision-making (especially reinforcement learning),
robotics, and probabilistic modeling and inference. Earlier in my PhD, I was a member of the <a href="http://cornell-asl.org/main/index.html">Autonomous Systems Lab</a>
under <a href="https://campbell.mae.cornell.edu">Prof. Mark Campbell</a>.</p>
<p>Previously, I was a Research Engineer in <a href="https://www.csail.mit.edu">CSAIL</a> at MIT, working in the
<a href="https://www.csail.mit.edu/research/distributed-robotics-laboratory">Distributed Robotics Laboratory</a>
under <a href="http://danielarus.csail.mit.edu">Prof. Daniela Rus</a>. My primary work was validating and testing autonomous vehicle algorithms
in the <a href="http://carla.org">CARLA</a> autonomous driving simulator, covering algorithms such as
vehicle navigation with sparse topological maps, dynamic obstacle avoidance, and visual end-to-end learning. I also supported projects
on the <a href="https://www.toyota-global.com/innovation/partner_robot/robot/">Toyota Human Support Robot (HSR)</a> platform
in the areas of natural language understanding and high-level task execution.
</p>
<p> Before that, I was an M.Eng. student in the Distributed Robotics Laboratory under the supervision of Prof. Daniela Rus.
My M.Eng. thesis focused on developing CARLA into a platform for validating autonomous driving
algorithms. I was also a SuperUROP researcher in the <a href="http://groups.csail.mit.edu/sls/">Spoken Language Systems</a> group at MIT under the supervision of <a href="https://people.csail.mit.edu/jrg/">Dr. Jim Glass</a>,
and a UROP researcher in the <a href="http://acl.mit.edu">Aerospace Controls Laboratory</a> under the supervision of Dr. Golnaz Habibi
and <a href="http://www.mit.edu/~jhow/">Prof. Jonathan How</a>.
</p>
</div>
</div>
<!-- Contact Info on the Sidebar -->
<div class="col-md-4">
<div style="font-family: 'Oswald', sans-serif; font-size: 32px;"><b>Rohan Banerjee</b></div><br>
<p><b>rbb242 [at] cornell [dot] edu</b></p>
<p>Department of Computer Science<br>
Cornell University<br>
<!-- Street<br> -->
<!-- City <br> -->
<!-- Country<br> -->
</p>
</div>
<!-- Links on the Sidebar -->
<div class="col-md-4" style="margin-top:2%">
<dl>
<dd><a href="https://scholar.google.com/citations?user=cxXPYo8AAAAJ&hl=en">Google Scholar</a></dd>
<dd><a href="https://www.linkedin.com/in/rohan-banerjee-26722444/">LinkedIn</a></dd>
<dd><a href="https://github.com/rohanb2018/">GitHub</a></dd>
<dd><a href="https://medium.com/@rohan.b.banerjee">Medium</a></dd>
<dd><a href="https://twitter.com/rohanbbanerjee">Twitter</a></dd>
</dl>
</div>
<!-- Education -->
<div class="col-md-8" style="height: 100vh;">
<h2 id="education">Education</h2>
<p>M.Eng., Electrical Engineering and Computer Science (MIT), 2019
<br/> Thesis title: <i>Development of a Simulation-Based Platform for Autonomous Vehicle Algorithm Validation </i>
</p>
<p>S.B., Electrical Engineering and Computer Science (MIT), 2018</p>
</div>
<!-- Publications -->
<div class="col-md-8" style="height: 100vh;">
<h2 id="publications">Publications</h2>
<strong>Preprints</strong>
<ul>
<li class="paper" words="add, your, keywords, here"><a href="https://arxiv.org/abs/2405.06908">To Ask or Not To Ask: Human-in-the-loop Contextual Bandits with Applications in Robot-Assisted Feeding</a>,
<b>Rohan Banerjee</b>, Rajat Kumar Jenamani*, Sidharth Vasudev*, Amal Nanavati, Katherine Dimitropoulou, Sarah Dean†, Tapomayukh Bhattacharjee†, arXiv preprint arXiv:2405.06908, 2024.
Under submission. <a href="https://emprise.cs.cornell.edu/hilbiteacquisition/">Project website.</a></li>
</ul>
<strong>Journal Papers</strong><br/>
<ul>
<li class="paper" words="add, your, keywords, here"><a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8957584">Learning Robust Control Policies for End-to-End Autonomous Driving From Data-Driven Simulation</a>, Alexander Amini, Igor Gilitschenski, Jacob Phillips, Julia Moseyko, <b>Rohan Banerjee</b>, Sertac Karaman, Daniela Rus,
IEEE Robotics
and Automation Letters, 2020.</li>
<li class="paper" words="add, your, keywords, here"><a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8936918">MapLite: Autonomous Intersection Navigation Without a Detailed Prior Map</a>, Teddy Ort, Krishna Murthy, <b>Rohan Banerjee</b>, Sai Krishna Gottipati, Dhaivat Bhatt, Igor Gilitschenski, Liam Paull, Daniela Rus,
IEEE Robotics
and Automation Letters, 2019. <b><a href="https://www.ieee-ras.org/publications/ra-l/ra-l-paper-awards">RA-L Best Paper Award.</a></b></li>
</ul>
<strong>Workshop Papers</strong><br/>
<ul>
<li class="paper" words="add, your, keywords, here"><a href="https://openreview.net/forum?id=qGvhRgvcIe">To Ask or Not To Ask: Robot-assisted Bite Acquisition with Human-in-the-loop Contextual Bandits</a>,
<b>Rohan Banerjee</b>, Sarah Dean, Tapomayukh Bhattacharjee,
First Workshop on Out-of-Distribution Generalization in Robotics at CoRL 2023, 2023.</li>
<li class="paper" words="add, your, keywords, here"><a href="https://arxiv.org/abs/2312.10557">Improving Environment Robustness of Deep Reinforcement Learning Approaches for Autonomous Racing Using Bayesian Optimization-based Curriculum Learning</a>,
<b>Rohan Banerjee*</b>, Prishita Ray*, Mark Campbell,
IROS Workshop on Learning Robot Super Autonomy, 2023.</li>
</ul>
<strong>Theses</strong>
<ul>
<li class="paper" words="add, your, keywords, here"><a href="https://dspace.mit.edu/handle/1721.1/123003">Development of a Simulation-Based Platform for Autonomous Vehicle Algorithm Validation</a>, <b>Rohan Banerjee</b>, M.Eng. Thesis, MIT, 2019.</li>
</ul>
<!-- <strong>Technical Reports</strong><br/>
<ul>
<li class="paper" words="add, your, keywords, here"><a href="#">Full title</a>. Details of the report</li>
</ul> -->
</div>
<!-- Projects -->
<div class="col-md-8" style="height: 150vh;">
<h2 id="projects">Projects</h2>
<ul>
<li>3D Point Cloud Clustering Using Small-Variance Asymptotics [6.882, Spring 2018]
<ul>
<li> Authors: Rohan Banerjee </li>
<li> Abstract: Robotic mapping and localization problems rely upon building an accurate model of the environment from available sensor data (mapping) and using landmarks in the environment to accurately determine the position of the robot (localization). Point clouds, which are collections of 3D points, are a common data format generated by 3D LIDAR sensors. To make point cloud data useful to mapping and localization algorithms, we need to characterize the structure of the point clouds in a meaningful, low-dimensional way.
As a first step towards characterizing point cloud structure, we apply small-variance asymptotics clustering algorithms to two Dirichlet Process models <!-- from [5] --> - a DP-GMM model, which is used to model point densities, and a DP-vMF-MM model, which is used to model surface densities. We then verify the robustness of inference for both models by measuring the sensitivity of inference to changes in the underlying number of clusters and in the noise associated with each cluster in toy datasets. </li>
<!-- include report -->
</ul>
</li>
<!-- 6.141: include link to website, final project video, come up with my own "abstract" -->
<li>Team Project: Robotics: Science and Systems I [6.141, Spring 2018] </li>
<li>Team Project: Detecting and Reducing Duplicate Posts Among StackExchange Users [6.867, Fall 2017]
<ul>
<li> Authors: Rohan Banerjee, Ryan Chung, Isaac Kontomah </li>
<li> Abstract: This paper seeks to classify duplicate questions in a QA forum. We use the StackExchange dataset, which comprises data from 12 different fields, including android development, english language, gaming, gis, mathematica, physics, programming, statistics, tex, unix, web design, and wordpress. We tested features drawn from the title, body, and author of each post to predict duplicates. Our best model, a neural network, classified test-set duplicate posts with a maximum accuracy of 86.95% across all subforums. We assessed the importance of adding reputation features, namely user reputation and post score, by comparing the performance of our system with and without these features. Using this tool, we hope to identify potential duplicate posts on StackExchange for moderators to assess, streamlining this QA platform. </li>
<!-- include report (if teammates are ok with it?) -->
</ul>
</li>
<li>Towards the Development of a Conversational Robotic System with Audio-Visual Localization Capabilities [SuperUROP, 2017]
<ul>
<li> Authors: Rohan Banerjee, Jim Glass </li>
<li> Abstract: Robotic systems that can interact with humans have the potential to fill an important niche in situations that are inherently time-consuming or tedious for humans, such as in healthcare. One component of the human-robot interaction problem involves robotic participation in human spoken conversation, where a robot would effectively respond to verbal instructions and non-verbal cues. In this study, we aim to demonstrate the feasibility of a static system that can localize a speaking subject using a combination of audio and visual localization techniques. We show that the Voice Activity Detector module has the potential for a high speech classification accuracy under certain conditions, while the visual and audio localization modules exhibit tradeoffs between accuracy and range that motivate sensor fusion techniques for producing a source estimate. These results lay the groundwork for the future development of a low-cost enhancement to the Baxter robotic research platform that performs speaker localization, which would allow the platform to engage in conversations with humans. </li>
<!-- SuperUROP project: include final paper -->
</ul>
</li>
</ul>
</div>
<!-- Teaching -->
<div class="col-md-8" style="height: 100vh;">
<h2 id="teaching">Teaching</h2>
<ul>
<li>
Spring 2021: TA for Foundations of Artificial Intelligence (CS 4700)
</li>
<li>
Fall 2020: TA for Introduction to Machine Learning (CS 4780/5780)
</li>
<li>
Spring 2018, Fall 2018: TA for Introduction to Probability (6.041/6.431)
</li>
</ul>
</div>
</div>
<!-- /.container -->
<!-- Other people may like it too! -->
<a style="color:#b5bec9;font-size:1.8em; float:right;" href="https://github.com/mavroudisv/plain-academic">Website Adapted from: Plain Academic</a>
</body>
</html>