% resume.tex
%
\documentclass[]{article}
\usepackage{fullpage}
\usepackage{verbatim}
\usepackage{enumitem}
\setlength{\topmargin}{-1cm}
\setlength{\footskip}{0cm}
\setlength{\textheight}{30cm}
\pagestyle{empty}
\raggedbottom
\raggedright
\setlength{\tabcolsep}{0in}
\begin{document}
\hrulefill
\begin{center}
\Huge{\textbf{David Vadas}}\\
\vspace{0.5cm}
\normalsize
%93 Bedford St, Newtown, NSW 2042, Australia\\
% Mobile: +61 417 650 418\\
Mobile: +44 7435 159 812\\
Email: [email protected]\\ %[email protected]\\
GitHub: https://github.com/dvadas
% Website: \texttt{http://www.cs.usyd.edu.au/$\sim$dvadas1}
\end{center}
\hrulefill
\\
% \section*{Profile}
% \hspace{0.5cm}I'm a programmer with more than 5 years experience working at high-frequency trading companies. I was first employed at Optiver, before being recruited to join
% the Statistical Arbitrage team at Susquehanna. On this very selective team, I contributed at first by working on connectivity and data processing, and later by
% building my own strategies. Our key advantage on the team was a technique for building high-frequency trading strategies that required no
% trader supervision. These statistically optimised strategies were very profitable, and allowed us time for continual improvement and to develop new ideas.
% While at Susquehanna, I developed a brand new strategy using machine learning techniques, which was profitable in our backtest environment.
% \hspace{0.5cm}
% I have a strong background in machine learning, with a PhD in Computational Linguistics and practical experience applying these methods at Susquehanna.
% % I can do more than just build theoretical models though,
% I'm also a capable programmer, I've been writing code since I was a kid, and I'm well-practiced at working in modern programming environments.
% This is a rare combination of skills, which I'm currently applying on a contract assignment at Google.
\section*{Employment History}
\setlength{\tabcolsep}{0.1cm}
\begin{table*}[h!]
\begin{tabular}{p{12.7cm}l}
\textbf{Google, as an Adecco contractor} & \textbf{Jan 2015 -- Present}\\
% \begin{itemize}[noitemsep,topsep=0pt]
% \item {Working in the Text-To-Speech team, developing a major new Google voice.}
% \item {Writing tools that work with BigTable data, in order to automate processes that were previously done manually.}
\vspace{0.05cm}
\hspace{0.5cm} I'm currently working in the Text-To-Speech (TTS) team, developing a major new voice for Google's production systems. This involves preparing data for the recording process, and ensuring the quality of lines synthesised by the voice. My work on this new voice has resulted in measurably improved performance on the lines that are most important to our users.

\hspace{0.5cm} A key achievement during my time here has been to automate many tasks that were previously done manually. The team has multiple data sources that need to be kept synchronised, and removing the repetitive human effort to do so is a great help for many people on the TTS team.

\hspace{0.5cm} I have gained a lot of experience working with Google technology in this position. I work extensively with BigTables, both programmatically and from the command line. I have written many unit tests with the gUnit framework; in one instance I reduced test time by so much that a ``long'' timeout could be removed.
% Finally, the many CLs I have submitted demonstrate that I can contribute at Google.
% \end{itemize}
& \\
\textbf{Susquehanna} & \textbf{Sep 2011 -- Oct 2014}\\
%\hspace{0.5cm}Writing software to facilitate exchange connectivity and process the data& \\
%\hspace{0.5cm}received. Implementing core components of a trading and backtesting system,& \\
%\hspace{0.5cm}as well as creating the statistical trading models themselves.
% \hspace{0.5cm}Creating statistical models for trading strategies, via machine learning on& \\
% \hspace{0.5cm}very large data sets. This involves generating key indicators from market data& \\
% \hspace{0.5cm}and significant analysis in a backtesting framework.
% \multicolumn{2}{l}
% \begin{itemize}[noitemsep,topsep=0pt]
% \item {Built a brand new trading strategy using machine learning techniques and a large cluster. The model was profitable in a backtest environment.}
% \item {Implemented core parts of the real-time trading and backtesting system.}
% % \item {Wrote networking code for market connectivity.}
% \end{itemize}
\vspace{0.05cm}
\hspace{0.5cm} I worked on a specialist high-frequency trading desk that ran strategies without full-time trader supervision. During my time there I developed and optimised a brand new trading strategy that was profitable in a backtest environment. My strategy also provided many insights that were applied in other trading models.

\hspace{0.5cm} I also worked on the trading and backtesting infrastructure, writing key improvements to the core system. This complex C++ code was used throughout the group and by all of our strategies. I developed connectivity code to receive tick data from the exchange, and wrote a data processing system to transform the full-book tick data into key predictors that were used by the trading strategies.
% }
& \\
\textbf{University of Sydney and Capital Markets CRC} & \textbf{Nov 2010 -- Aug 2011}\\
% \begin{itemize}[noitemsep,topsep=0pt]
% \item {Worked on automatically generating content pages from newspaper articles.}
% \item {Designed and implemented the database linking articles to people and places. Facilitated its use in back-end processing and for front-end display.}
% \item {Designed and implemented the database for an entity linking system.}
% \item {Lectured a Natural Language Processing course and supervised students.}
% \end{itemize}
% \hspace{0.5cm}Building systems for analysing newspaper text as part of an industry partnership& \\
% \hspace{0.5cm}with Fairfax Digital. Other duties included lecturing and supervising students.
\vspace{0.05cm}
\hspace{0.5cm} I worked on an entity linking system that automatically generated content pages from newspaper articles. My key contribution in this position was to design and implement a new database schema to store all the information extracted from our corpus. The system has now been deployed by Fairfax Media, one of the largest media companies in Australia.

\hspace{0.5cm} I lectured a Natural Language Processing course and supervised students.
& \\
\textbf{Optiver} & \textbf{Apr 2008 -- Nov 2010}\\
% \hspace{0.5cm}Development work on multiple components of a high-frequency trading system.&\\
% \hspace{0.5cm}This includes writing software for monitoring speed and success, interpreting&\\
% \hspace{0.5cm}market protocols and an auto-trading tool.
% \begin{itemize}[noitemsep,topsep=0pt]
% \item {Designed and built trade analysis tool for measuring speed and success.}
% \item {Managed a data capture and analysis system distributed across multiple geographic locations that processed huge volumes of data.}
% \item {Implemented a specialised high-frequency trading strategy.}
% \end{itemize}
\vspace{0.05cm}
\hspace{0.5cm} I managed a data capture and analysis system, distributed across multiple geographic locations, that processed huge volumes of data. The system provided sub-microsecond timing information on our trading strategies, letting the company know where it was most valuable to spend programmer effort.

\hspace{0.5cm} I also worked on a specialised high-frequency trading strategy, increasing the profit that it made.
& \\
\textbf{University of Sydney} & \textbf{2003 -- 2006}\\
% \begin{itemize}[noitemsep,topsep=0pt]
% \item {Tutoring for many programming courses, from high school students through to honours and masters students.}
% \end{itemize}
% \hspace{0.5cm}Tutoring Python programming to high school students.
\vspace{0.05cm}
\hspace{0.5cm} I tutored many programming courses, teaching students ranging from high school level through to honours and masters.
& \\
%\textbf{Teacher Training Python Workshop} & \textbf{2004} \\
%\hspace{0.5cm}Tutoring Python programming to high school teachers. & \\
% \textbf{Academic Staff, University of Sydney} & \textbf{2003 -- 2005} \\
% \begin{itemize}[noitemsep,topsep=0pt]
% \item{Tutoring a number of classes, with students from 1$^{st}$ and 2$^{nd}$ Year, through to honours and masters students.}
% \end{itemize}
% \hspace{0.5cm}Tutoring a number of classes, with students from 1$^{st}$ and 2$^{nd}$ Year, & \\
% \hspace{0.5cm}through to honours and masters students. & \\
% & \\
\end{tabular}
\end{table*}
\newpage
% \vspace{-3.5cm}
\section*{Education}
\setlength{\tabcolsep}{0.1cm}
\begin{table*}[h!]
\begin{tabular}{p{12.7cm}l}
\textbf{The University of Sydney} & \\
\textbf{PhD in Natural Language Processing} & \textbf{Mar 2005 -- Apr 2008}\\
\hspace{0.5cm}Thesis:~\textit{Statistical Parsing of Noun Phrase Structure} & \\
% \hspace{0.5cm}Greater annotation and analysis of noun phrase structure allows for better & \\
% \hspace{0.5cm}performance in parsing and other Natural Language Processing systems. & \\
%\vspace{0.2cm}
\textbf{Bachelor of Information Technology (Honours)} & \textbf{Mar 2001 -- Nov 2004}\\
\hspace{0.5cm}First Class Honours in Computer Science -- Grade: 88/100 (\textsc{wam}: 80\%) & \\
% \hspace{0.5cm}Majors in Software Development, Principles of Computer Science, and & \\
% \hspace{0.5cm}Networks and Systems. & \\
\hspace{0.5cm}Majors in Software Development and Principles of Computer Science. & \\
%\vspace{0.2cm}
%\textbf{Honours Research Project} & \\
%\hspace{0.5cm}\textit{POS Tagging Unknown Words using an Unannotated Corpora and} & \\
%\hspace{0.5cm}\textit{Maximum Entropy} & \\
%\hspace{0.5cm}This involved applying information from a very large corpora using & \\
%\hspace{0.5cm}real-valued features in a log-linear model. & \\
%\vspace{0.5cm}
\end{tabular}
\end{table*}
\vspace{-0.5cm}
\section*{Technical Skills}
\begin{itemize}%[noitemsep,topsep=0pt]
\item {Languages: C++ (expert), Python (expert), C (proficient).}
\item {Extensive practical experience working with large-scale data, including applying machine learning techniques, analysis with numpy and scipy, and writing code to run on distributed systems.}
\item {Demonstrated ability with Google systems such as BigTable, gUnit, and Google's C++ library.}
% \item {Proficient in writing low-level and highly optimised code, as well as designing high-level architechture for complex systems.}
\item {Considerable experience using Linux utilities, e.g.\ awk, to speed up prototyping work.}
\item {Knowledge of how to design efficient database schemas and write complex SQL queries.}
\item {Domain-specific experience in high-frequency trading and natural language processing.}
\item {Excellent communication skills from lecturing, writing technical papers, and working with teammates.}
% \item {Able to be productive with many development tools: vim, Visual Studio, git, Subversion, Perforce.}
% automated testing (using Boost Test)
%\item {Operating Systems: Linux/Unix and Windows.}
% \item {Expertise in:}
% \begin{itemize}
% \item {machine learning with large datasets;}
% \item {high frequency trading;}
% \item {writing low-level and highly optimised code;}
% \item {and practical tasks from the natural language processing field.}
% \end{itemize}
\end{itemize}
% \begin{comment}
\section*{Publications}
\textbf{David Vadas} and James R. Curran\\
\textit{Parsing Noun Phrases in the Penn Treebank.}
In Computational Linguistics, 37(4), pages 753--809.
December 2011.
\vbox{}
\textbf{David Vadas} and James R. Curran\\
\textit{Parsing Noun Phrase Structure with {CCG}.}
In Proceedings of the 46th Annual Meeting of the Association for Computational
Linguistics: Human Language Technologies (ACL-08: HLT).
Columbus, OH, USA, June 15--20 2008.
\vbox{}
\textbf{David Vadas} and James R. Curran\\
\textit{Parsing Internal Noun Phrase Structure with Collins' Models.}
In Proceedings of the Australasian Language Technology
Workshop (ALTW-07), pages 109--116. Melbourne, Australia, December 10--11 2007.
\vbox{}
\textbf{David Vadas} and James R. Curran\\
\textit{Large-Scale Supervised Models for Noun Phrase Bracketing.}
In Proceedings of the 10th Conference of the Pacific Association for
Computational Linguistics (PACLING-2007), pages 104--112.
Melbourne, Australia, September 19--21 2007.
\vbox{}
\textbf{David Vadas} and James R. Curran\\
\textit{Adding Noun Phrase Structure to the Penn Treebank.} In
Proceedings of the 45th Annual Meeting of the Association for Computational
Linguistics (ACL-07), pages 240--247. Prague, Czech Republic, June 23--30 2007. \\
\vbox{}
James R. Curran, Stephen Clark, and \textbf{David Vadas}\\
\textit{Multi-Tagging for Lexicalized-Grammar Parsing.} In
Proceedings of the Joint Conference of the International Committee on
Computational Linguistics and the Association for Computational Linguistics
(COLING/ACL-06), pages 697--704. Sydney, Australia, July 17--21 2006. \\
\vbox{}
\textbf{David Vadas} and James R. Curran\\
\textit{Tagging Unknown Words with Raw Text Features.} In Proceedings of the
Australasian Language Technology Workshop (ALTW-05), pages 32--39.
Sydney, Australia, December 10--11 2005. \\
\vbox{}
\textbf{David Vadas} and James R. Curran\\
\textit{Programming With Unrestricted Natural Language.} In Proceedings of the
Australasian Language Technology Workshop (ALTW-05), pages 191--199.
Sydney, Australia, December 10--11 2005. \\
% \end{comment}
\begin{comment}
\section*{Research Experience}
\setlength{\tabcolsep}{0.1cm}
\begin{table*}[h!]
\begin{tabular}[h!]{p{13.5cm}l}
\textbf{The University of Sydney} & \\
Research Assistant, School of I.T. & Nov 2004 -- Jan 2005 \\
\hspace{0.5cm}Developed Intelligent Tutoring Systems (ITS) & \\
Vacation Scholar, School of I.T. (Information Visualisation Group) & Nov 2003 -- Mar 2004 \\
\hspace{0.5cm}Implemented process tree visualisation software & \\
& \\
\end{tabular}
\end{table*}
\section*{Awards \& Achievements}
\begin{table*}[h!]
\begin{tabular}[h]{p{13.5cm}ll}
Awarded William and Catherine McIlrath Scholarship & & 2007 \\
Awarded Australian Bicentennial Scholarship & & 2007 \\
Recipient of the Australian Postgraduate Award (APA) & & 2005 -- 2008 \\
Nominated for Soprano Prize (Best Honours Thesis) & &2004\\
Recipient of Information Visualisation Group Vacation Scholarship, School of I.T. & &2003 \\
Placement on 3$^{rd}$ Year Honour Roll, School of I.T. & &2003 \\
Placement on 2$^{nd}$ Year High Honour Roll, School of I.T. & &2002 \\
Placement on 1$^{st}$ Year Honour Roll, School of I.T. & &2001 \\
\end{tabular}
\end{table*}
\section*{Referees}
\begin{table*}[h!]
\begin{tabular}[]{p{1cm}lp{4cm}l}
&Dr James Curran & & Associate Professor Judy Kay\\
&School of I.T. & & School of I.T.\\
&University of Sydney & & University of Sydney\\
&Ph: (02) 9036 6037 & & Ph: (02) 9351 4502\\
\end{tabular}
\end{table*}
\end{comment}
\end{document}