Deep Structured Learning (IST, Fall 2019)
Structured prediction is a machine learning framework that deals with structured and highly interdependent output variables, with applications in natural language processing, computer vision, computational biology, and signal processing. In the last five years, several applications in these areas have achieved new breakthroughs by replacing traditional feature-based linear models with more powerful deep learning models based on neural networks, which are capable of learning internal representations.
In this course, we will describe methods, models, and algorithms for structured prediction, ranging from "shallow" linear models (hidden Markov models, conditional random fields, structured support vector machines) to modern deep learning models (convolutional networks, recurrent neural networks, attention mechanisms, etc.), passing through shallow and deep methods akin to reinforcement learning. Representation learning will also be discussed (PCA, auto-encoders, and various deep generative models). The theoretical concepts taught in this course will be complemented by a strong practical component: students will work on group projects, solving practical problems with software suitable for deep learning (e.g., PyTorch, TensorFlow, DyNet).
- Instructors: André Martins and Vlad Niculae
- Schedule: Classes are held on Mondays, 9:30-11:00, and Fridays, 15:00-16:30, in Room LT2 (North Tower, 4th floor).
- Communication: piazza.com/tecnico.ulisboa.pt/fall2019/pdeecdsl
- Grading: homework assignments (60%) and a final project (40%)
The course project is an opportunity for you to explore an interesting problem using a real-world dataset. You can either choose one of our suggested projects or pick your own topic (the latter is encouraged). Either way, discuss your project with the TAs/instructors to get feedback on your ideas.
Team: Projects can be done in teams of 2-4 students. You may use Piazza to find potential teammates.
Milestones: There are three deliverables:
- Proposal: A 1-page description of the project. Do not forget to include a title, the team members, and a short description of the problem, methodology, data, and evaluation metrics. Due on 18/10.
- Midway report: Introduction, related work, details of the proposed method, and preliminary results if available (4-5 pages). Due on 25/11.
- Final report: A full report written as a conference paper, including all of the above in full detail, completed experiments and results, and conclusions and future work (8 pages excluding references). Due on 6/1.
All reports should be in NeurIPS format. There will be a class presentation and (tentatively) a poster session, where you can present your work to your peers, instructors, and other community members who stop by.
See here for a list of project ideas.
- Deep Learning. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. MIT Press, 2016.
- Machine Learning: A Probabilistic Perspective. Kevin P. Murphy. MIT Press, 2012.
- Linguistic Structure Prediction. Noah A. Smith. Morgan & Claypool Synthesis Lectures on Human Language Technologies, 2011.
| Date | Topic | Readings | Announcements |
|---|---|---|---|
| Sep 18 | Introduction and Course Description | Goodfellow et al., Ch. 1-5; Murphy, Ch. 1-2 | |
| Sep 23, 27 | Linear Classifiers | Murphy, Ch. 3, 6, 8-9, 14 | HW1 is out! Skeleton code. |
| Sep 30, Oct 4 | Feedforward Neural Networks | Goodfellow et al., Ch. 6 | |
| Oct 7 | Representation Learning and Convolutional Neural Networks | Goodfellow et al., Ch. 9, 14-15 | |
| Oct 11 (room E5!) | Neural Network Toolkits (Gonçalo Correia) | Goodfellow et al., Ch. 7-8 | HW1 is due. |
| Oct 14 (room Q4.6!) | Representation Learning and Convolutional Neural Networks (cont'd) | Goodfellow et al., Ch. 9, 14-15 | HW2 is out! Skeleton code. |
| Oct 18, 21 | Linear Sequence Models | Smith, Ch. 3-4; Murphy, Ch. 17, 19 | Project proposal is due. |
| Oct 25, 28 | Recurrent Neural Networks | Goodfellow et al., Ch. 10 | HW2 is due (Nov 1). HW3 is out! |
| Nov 4, 8 | Probabilistic Graphical Models | Murphy, Ch. 10, 19-22; Goodfellow et al., Ch. 16; David MacKay's book, Ch. 16, 25-26; Eric Xing's CMU lecture; Stefano Ermon's notes on variable elimination | |
| Nov 11, 15 | Sequence-to-Sequence Learning | Sutskever et al.; Bahdanau et al.; Vaswani et al. | |
| Nov 22 (room V1.10!) | Attention Mechanisms and Neural Memories | Learning with Sparse Latent Structure | HW3 is due. HW4 is out! |
| Nov 25, 29 | Reinforcement Learning | | Midway report is due. |
| Dec 2, 6 | Deep Generative Models | Goodfellow et al., Ch. 20; Murphy, Ch. 28; NeurIPS16 tutorial on GANs; Kingma and Welling, 2014 | |
| Jan 6 | | | Final report is due. |
| Jan 10, 13 | Final Projects | | |