
RECOMMEND/VOTE Papers #11

Open
sxjscience opened this issue Feb 16, 2017 · 4 comments

Comments

@sxjscience
Member

How to recommend

We can recommend papers for further discussion under this issue. Include a link to the paper, the conference name, and other related information (e.g., the abstract, a brief description, links to sample code, or online demonstrations).

Please include only one topic per comment. For example, if you propose to discuss "paper X", which is heavily based on "paper Y", and you believe both should be read together (possibly over multiple weeks), create a single comment for both. If you propose two unrelated papers, please create two comments.

For example, the following Markdown format could be used:

[**PAPER-NAME**](PAPER-LINK) (AUTHORS)
(CONFERENCE/JOURNAL)
_ABSTRACT_ 

How to vote

Please vote using the "Thumbs up" emoji 👍. Some papers will be marked as Discussed; please vote only for papers that have not yet been discussed.

@sxjscience
Member Author

See #3 for more examples of how to recommend.

@peterzcc

peterzcc commented Feb 16, 2017

Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic
https://openreview.net/forum?id=SJ3rcZcxl&noteId=SJ3rcZcxl
(Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine)
ICLR 2017
Abstract: Model-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is the high sample complexity of such methods. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches, while TD-style methods, such as off-policy actor-critic and Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample efficient and stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation. We show that conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE), and improves stability over deep deterministic policy gradient (DDPG), the state-of-the-art on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control environments.
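For a quick feel for the control-variate idea the abstract describes, here is a minimal NumPy sketch. Hedged: this is a 1-D toy, not the paper's algorithm, and the names `R`, `Q_w`, `theta` are purely illustrative. It contrasts a plain score-function gradient estimate with one that subtracts the first-order Taylor expansion of an imperfect critic and adds back its analytic expectation, which keeps the estimate unbiased while shrinking its variance.

```python
# Toy sketch of the control-variate idea behind Q-Prop, NOT the paper's full
# algorithm: a 1-D Gaussian "policy" N(theta, sigma^2), a quadratic "reward"
# R(a), and a deliberately imperfect critic Q_w whose first-order Taylor
# expansion around the policy mean serves as a control variate.
import numpy as np

rng = np.random.default_rng(0)

theta, sigma = 0.5, 1.0          # policy: a ~ N(theta, sigma^2)
a_star = 2.0                     # reward peak (illustrative)

def R(a):                        # true reward
    return -(a - a_star) ** 2

def Q_w(a):                      # imperfect critic (wrong scale on purpose)
    return -0.8 * (a - a_star) ** 2

def dQ_w(a):                     # analytic critic derivative dQ_w/da
    return -1.6 * (a - a_star)

def grad_estimates(n_samples):
    a = rng.normal(theta, sigma, size=n_samples)
    score = (a - theta) / sigma ** 2               # d/dtheta log pi(a)
    # Plain score-function (REINFORCE-style) estimator.
    vanilla = np.mean(score * R(a))
    # Control variate: first-order Taylor expansion of Q_w around theta.
    # Its score-weighted expectation has the closed form dQ_w(theta) for a
    # Gaussian policy, which is added back so the estimator stays unbiased.
    q_bar = Q_w(theta) + dQ_w(theta) * (a - theta)
    qprop_like = np.mean(score * (R(a) - q_bar)) + dQ_w(theta)
    return vanilla, qprop_like

true_grad = -2.0 * (theta - a_star)                # d/dtheta E[R(a)], closed form
ests = np.array([grad_estimates(100) for _ in range(2000)])
print("true gradient      :", true_grad)
print("vanilla   mean/var :", ests[:, 0].mean(), ests[:, 0].var())
print("with CV   mean/var :", ests[:, 1].mean(), ests[:, 1].var())
```

Running it, both estimators agree with the closed-form gradient in expectation, while the control-variate version has markedly lower variance.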

@ckyeungac
Collaborator

Bounded Off-Policy Evaluation with Missing Data for Course Recommendation and Curriculum Design
http://medianetlab.ee.ucla.edu/papers/LoggedStudents.pdf
(William Hoiles and Mihaela van der Schaar)
ICML 2016
Abstract: Successfully recommending personalized course schedules is a difficult problem given the diversity of students' knowledge, learning behaviour, and goals. This paper presents personalized course recommendation and curriculum design algorithms that exploit logged student data. The algorithms are based on the regression estimator for contextual multi-armed bandits with a penalized variance term. Guarantees on the predictive performance of the algorithms are provided using empirical Bernstein bounds. We also provide guidelines for including expert domain knowledge into the recommendations. Using undergraduate engineering logged data from a post-secondary institution we illustrate the performance of these algorithms.
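The guarantees mentioned above are stated via empirical Bernstein bounds. As a rough illustration, here is a small Python sketch of the general-purpose empirical Bernstein inequality (Maurer & Pontil, 2009) that guarantees of this flavour typically build on; the paper's own bound for its penalized-variance regression estimator differs in its details, and the `outcomes` data below are made up.

```python
# Hedged sketch of the generic empirical Bernstein bound, not the paper's
# exact result: for i.i.d. observations in [0, 1], with probability at least
# 1 - delta the true mean exceeds the empirical mean by at most the margin
# returned below.
import math

def empirical_bernstein_margin(samples, delta):
    """Upper bound on E[X] - mean(samples), holding w.p. >= 1 - delta, X in [0, 1]."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)   # unbiased sample variance
    log_term = math.log(2.0 / delta)
    return math.sqrt(2.0 * var * log_term / n) + 7.0 * log_term / (3.0 * (n - 1))

# Hypothetical example: 200 logged pass/fail outcomes for one course "arm".
outcomes = [1] * 140 + [0] * 60
margin = empirical_bernstein_margin(outcomes, delta=0.05)
print("empirical mean :", sum(outcomes) / len(outcomes))
print("95% margin     :", margin)
```

The variance-dependent first term is what motivates penalizing high-variance estimates when ranking actions from logged data.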

@sxjscience
Member Author

sxjscience commented Apr 11, 2017

Failures of Deep Learning
https://arxiv.org/pdf/1703.07950.pdf
(Shai Shalev-Shwartz, Ohad Shamir, and Shaked Shammah)
arXiv 2017
Abstract: In recent years, Deep Learning has become the go-to solution for a broad range of applications, often outperforming state-of-the-art. However, it is important, for both theoreticians and practitioners, to gain a deeper understanding of the difficulties and limitations associated with common approaches and algorithms. We describe four families of problems for which some of the commonly used existing algorithms fail or suffer significant difficulty. We illustrate the failures through practical experiments, and provide theoretical insights explaining their source, and how they might be remedied.

See also:
https://simons.berkeley.edu/sites/default/files/docs/6455/berkeley2017.pdf
https://simons.berkeley.edu/talks/shai-shalev-shwartz-2017-3-28
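As a very rough taste of the kind of optimization failure the paper studies (hedged: this is a generic toy, not a reproduction of any experiment in the paper), the NumPy sketch below compares gradient descent on a single sigmoid unit from a gentle initialization with the same descent from a heavily saturated one, where the loss is large but the gradient is nearly flat.

```python
# Generic toy: a saturated ("flat") activation starves gradient descent of
# signal, so the loss barely moves even though the fit is poor.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Target: a hard step at x = 0 on uniform 1-D inputs.
x = rng.uniform(-1.0, 1.0, size=512)
y = (x > 0).astype(float)

def run_gd(w, b, lr=0.5, steps=200):
    """Fit sigmoid(w*x + b) with squared loss; return initial and final loss."""
    losses = []
    for _ in range(steps):
        p = sigmoid(w * x + b)
        err = p - y
        losses.append(np.mean(err ** 2))
        dz = 2.0 * err * p * (1.0 - p)     # gradient through the sigmoid
        w -= lr * np.mean(dz * x)
        b -= lr * np.mean(dz)
    return losses[0], losses[-1]

# Gentle slope: the unit is not saturated and learning makes progress.
# Steep, shifted slope: predictions are wrong on a quarter of the data, yet
# almost every point sits on a flat part of the sigmoid, so gradients are
# tiny and the loss barely changes.
for w0, b0 in ((1.0, 0.5), (50.0, 25.0)):
    first, last = run_gd(w0, b0)
    print(f"init (w={w0}, b={b0}): loss {first:.3f} -> {last:.3f}")
```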
