Deep Structured Learning (IST, Fall 2019)


Structured prediction is a machine learning framework that deals with structured, highly interdependent output variables, with applications in natural language processing, computer vision, computational biology, and signal processing. In the last five years, several applications in these areas have achieved new breakthroughs by replacing traditional feature-based linear models with more powerful deep learning models based on neural networks, which are capable of learning internal representations.

In this course, we will describe methods, models, and algorithms for structured prediction, ranging from "shallow" linear models (hidden Markov models, conditional random fields, structured support vector machines) to modern deep learning models (convolutional networks, recurrent neural networks, attention mechanisms, etc.), passing through shallow and deep methods related to reinforcement learning. Representation learning will also be discussed (PCA, auto-encoders, and various deep generative models). The theoretical concepts taught in this course are complemented by a strong practical component: students work on group projects in which they solve practical problems using deep learning software (e.g., PyTorch, TensorFlow, DyNet).
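To give a flavor of the "shallow" structured models above, here is a minimal sketch (not course material) of Viterbi decoding for a toy hidden Markov model: given a sequence of observations, it finds the single most likely sequence of hidden states. The weather/activity model and all probabilities below are made-up illustrative assumptions.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state sequence for `obs`."""
    # best[t][s] = max log-probability of any state path ending in s at time t
    best = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]  # back[t][s] = best predecessor of state s at time t
    for t in range(1, len(obs)):
        best.append({})
        back.append({})
        for s in states:
            # Choose the predecessor maximizing path score into state s.
            prev, score = max(
                ((p, best[t - 1][p] + math.log(trans_p[p][s])) for p in states),
                key=lambda x: x[1],
            )
            best[t][s] = score + math.log(emit_p[s][obs[t]])
            back[t][s] = prev
    # Backtrack from the best final state.
    last = max(states, key=lambda s: best[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Toy model (hypothetical numbers): hidden weather, observed activities.
states = ["Rainy", "Sunny"]
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

print(viterbi(["walk", "shop", "clean"], states, start_p, trans_p, emit_p))
# → ['Sunny', 'Rainy', 'Rainy']
```

The key point, which recurs throughout the course, is that the output is a whole structure (a state sequence) scored jointly, not a set of independent per-position labels; dynamic programming makes the exponential search over sequences tractable.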

Course Information


Project Examples

The course project is an opportunity for you to explore an interesting problem on a real-world dataset. You can either choose one of our suggested projects or pick your own topic (the latter is encouraged). Either way, discuss your project with the TAs/instructors to get feedback on your ideas.

Team: Projects can be done by teams of 2-4 students. You may use Piazza to find potential teammates.

Milestones: There are three deliverables:

All reports should be in NeurIPS format. There will be a class presentation and (tentatively) a poster session, where you can present your work to peers, instructors, and other community members who stop by.

See here for a list of project ideas.

Recommended Bibliography


Date | Topic | Optional Reading | Notes
Sep 18 | Introduction and Course Description | Goodfellow et al., Ch. 1-5; Murphy, Ch. 1-2 |
Sep 23, 27 | Linear Classifiers | Murphy, Ch. 3, 6, 8-9, 14 | HW1 is out! Skeleton code.
Sep 30, Oct 4 | Feedforward Neural Networks | Goodfellow et al., Ch. 6 |
Oct 7 | Representation Learning and Convolutional Neural Networks | Goodfellow et al., Ch. 9, 14-15 |
Oct 11 (room E5!) | Neural Network Toolkits (Gonçalo Correia) | Goodfellow et al., Ch. 7-8 | HW1 is due.
Oct 14 (room Q4.6!) | Representation Learning and Convolutional Neural Networks (cont'd) | Goodfellow et al., Ch. 9, 14-15 | HW2 is out! Skeleton code.
Oct 18, 21 | Linear Sequence Models | Smith, Ch. 3-4; Murphy, Ch. 17, 19 | Project proposal is due.
Oct 25, 28 | Recurrent Neural Networks | Goodfellow et al., Ch. 10 | HW2 is due (Nov 1). HW3 is out!
Nov 4, 8 | Probabilistic Graphical Models | Murphy, Ch. 10, 19-22; Goodfellow et al., Ch. 16; David MacKay's book, Ch. 16, 25-26; Eric Xing's CMU lecture; Stefano Ermon's notes on variable elimination |
Nov 11, 15 | Sequence-to-Sequence Learning | Sutskever et al.; Bahdanau et al.; Vaswani et al. |
Nov 22 (room V1.10!) | Attention Mechanisms and Neural Memories (Ben Peters) | Learning with Sparse Latent Structure | HW3 is due. HW4 is out!
Nov 25, 29 | Deep Reinforcement Learning (Francisco Melo) | Game of Taxi | Midterm report is due.
Dec 2, 6 | Deep Generative Models | Goodfellow et al., Ch. 20; Murphy, Ch. 28; NeurIPS16 tutorial on GANs; Kingma and Welling, 2014 |
Jan 6 | | | Final report is due.
Jan 10, 13 | Final Projects | |