# Advanced Topics in Deep Learning

The seminar language will be English (even if all participants are native German speakers) to practice presentation skills in English. The first meeting takes place on April 28th.

For projects, have a look at our open projects site.

## Recommended Background Knowledge

- Deep learning lab course
- Machine learning or statistical pattern recognition
- Reinforcement learning

## Procedure

- Link to enroll through HISinOne
- Link to ILIAS course to submit summaries, questions and reports

We will meet weekly (Friday, 14:00-16:00, in building 106, room SR 00-007) to discuss research papers from the list of available papers below. Every week, one student presents a paper and leads the following discussion. All other participants read the paper and, by the Monday before the meeting, submit a summary, an answer to a given question, and 1-3 questions of their own. The presenter will have access to the questions and is expected to take them into account during the presentation. All participants will discuss the paper, its merits, and its limitations. This discussion will, in part, be guided by the questions submitted by the participants. At the end of the week, the presenter hands in a two-page report on the paper.

The final grade takes the oral presentation, the written report, the quality of summaries and questions submitted, and class participation into account.

Besides the seminar topic itself, you will practice several skills that are essential in academia and beyond:

- reading and understanding research papers
- assessing their strengths and weaknesses
- presenting orally in front of your peers
- discussing with your peers
- summarizing, at a high level, research with which you are not intimately familiar

### What to put into the final report?

In a nutshell, we think of this report as a detailed summary of the paper you presented that also covers points that would come up in a research discussion about the paper. (We say "paper" here, even though you are not restricted to writing only about the paper you present.)

For example, alongside the detailed summary, the report should at some point address questions such as the following:

- What is the paper's main contribution and why is it important?
- How does it relate to other techniques in the literature?
- What are strong and what are weak points about the paper?
- What would be interesting follow-up work? Any possible improvements in the methods? Any further interesting applications?
- Is the code/data available online? Does it run off-the-shelf? If not, what problems are there with running it? (You should only put a limited amount of time into this; no more than a full work day.)

### Formatting and length of the final report

Final reports have to be typeset in LaTeX (sorry, but you were warned). We will use the formatting guidelines and electronic templates from the AI conference IJCAI. Reports don't have to be long (you already wrote all the paper summaries); 2 pages in IJCAI style are appropriate. Do not go beyond 4 pages - you might not be able to include everything you would like to include, but that is common in academic writing.
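As a rough sketch of what this means in practice, a report skeleton might look like the following. The style-file name (`ijcai17.sty`) and the section headings are assumptions for illustration; use whatever files and structure the IJCAI template package you download actually provides.

```latex
\documentclass{article}
% Assumes the IJCAI conference style file; the exact file name
% (e.g. ijcai17.sty) depends on the template edition you download.
\usepackage{ijcai17}
\usepackage{times}

\title{Report on: Playing Atari with Deep Reinforcement Learning}
\author{Your Name \\ University of Freiburg \\ your.name@cs.uni-freiburg.de}

\begin{document}
\maketitle

\begin{abstract}
One-paragraph overview of the paper and your assessment of it.
\end{abstract}

\section{Summary}
% Detailed summary of the paper's main contribution.

\section{Relation to Other Work}

\section{Strengths and Weaknesses}

\section{Possible Follow-up Work}

\bibliographystyle{named}
\bibliography{report} % assumes a report.bib file with your references
\end{document}
```

The section headings above simply mirror the discussion questions listed earlier; organize the report however best fits your paper, as long as those points are covered.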

## List of available papers

### RNNs

- “Deep Learning” (Book); Chapter: “10 Sequence Modeling: Recurrent and Recursive Nets”, 47 pages, presented by 2 students
- “Hybrid computing using a neural network with dynamic external memory” by A. Graves et al

### Deep-RL

- “Reinforcement learning of motor skills with policy gradients” by J. Peters and S. Schaal
- “Playing Atari with Deep Reinforcement Learning” by V. Mnih et al
- “Mastering the game of Go with deep neural networks and tree search” by David Silver et al
- “Deterministic Policy Gradient Algorithms” by D. Silver et al
- “Continuous Control with Deep Reinforcement Learning” by T. Lillicrap et al
- “Asynchronous Methods for Deep Reinforcement Learning” by V. Mnih et al

### Learning to learn

- "Learning to learn by gradient descent by gradient descent" by M. Andrychowicz
- "Learning to Optimize" by K. Li and J. Malik
- “Learning to Learn for Global Optimization of Black Box Functions” by Y. Chen et al

### Unsupervised methods

- “Auto-Encoding Variational Bayes” by D. Kingma and M. Welling
- “Generative Adversarial Nets” by I. Goodfellow et al

### AutoML

- “Neural Architecture Search with Reinforcement Learning” by B. Zoph and Q. Le
- “Bayesian Optimization with Robust Bayesian Neural Networks” by J. Springenberg et al

For questions, please email us: sfalkner@cs.uni-freiburg.de, lindauer@cs.uni-freiburg.de, fh@cs.uni-freiburg.de