Deep Structured Learning (IST, Fall 2020)
Summary
Structured prediction is a framework in machine learning that deals with structured and highly interdependent output variables, with applications in natural language processing, computer vision, computational biology, and signal processing. In the last five years, several applications in these areas have achieved new breakthroughs by replacing traditional feature-based linear models with more powerful deep learning models based on neural networks, which are capable of learning internal representations.
In this course, we will describe methods, models, and algorithms for structured prediction, ranging from "shallow" linear models (hidden Markov models, conditional random fields, structured support vector machines) to modern deep learning models (convolutional networks, recurrent neural networks, attention mechanisms, etc.), as well as shallow and deep methods akin to reinforcement learning. Representation learning will also be discussed (PCA, auto-encoders, and various deep generative models). The theoretical concepts taught in this course will be complemented by a strong practical component: students work on group projects in which they solve practical problems using software suitable for deep learning (e.g., PyTorch, TensorFlow, DyNet).
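As a small taste of the kind of model covered in the course, below is a minimal PyTorch sketch of a bidirectional LSTM sequence tagger, one common neural building block for structured prediction over sequences. All names, dimensions, and the toy input are illustrative assumptions, not course material.

```python
# Minimal illustrative sketch (assumed setup, not course code):
# a BiLSTM that scores a tag for every token in a sentence.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, word_ids):
        # word_ids: (batch, seq_len) -> tag scores: (batch, seq_len, num_tags)
        states, _ = self.lstm(self.embed(word_ids))
        return self.out(states)

# Toy usage: score two random 5-token "sentences" over 10 tags.
model = BiLSTMTagger(vocab_size=1000, num_tags=10)
scores = model(torch.randint(0, 1000, (2, 5)))
print(scores.shape)  # torch.Size([2, 5, 10])
```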
Course Information
- Instructor: André Martins
- TA: Marcos Treviso
- Schedule: Classes are held remotely on Wednesdays, 14:00-17:00 (Zoom link provided on Piazza)
- Communication: piazza.com/tecnico.ulisboa.pt/fall2020/pdeecdsl
Grading
- Homework assignments (60%)
- Final project (40%)
Project Examples
The course project is an opportunity for you to explore an interesting problem using a real-world dataset. You can either choose one of our suggested projects or pick your own topic (the latter is encouraged). In either case, please discuss your project with the TA/instructor to get feedback on your ideas.
Team: Projects can be done by a team of 2-4 students. You may use Piazza to find potential teammates.
Milestones: There are 3 deliverables:
- Proposal: A 1-page description of the project. Do not forget to include a title, the team members, and a short description of the problem, methodology, data, and evaluation metrics. Due on 21/10.
- Midway report: Introduction, related work, details of the proposed method, and preliminary results if available (4-5 pages). Due on 25/11.
- Final report: A full report written as a conference paper, including all of the above in full detail, complete experiments and results, conclusions, and future work (8 pages excluding references). Due on 8/1.
All reports should be in NeurIPS format. There will be a class presentation and (tentatively) a poster session, where you can present your work to your peers, the instructors, and other community members who stop by.
See here for a list of project ideas.
Recommended Bibliography
- Deep Learning. Ian Goodfellow and Yoshua Bengio and Aaron Courville. MIT Press, 2016.
- Machine Learning: A Probabilistic Perspective. Kevin P. Murphy. MIT Press, 2012.
- Introduction to Natural Language Processing. Jacob Eisenstein. MIT Press, 2019.
- Linguistic Structure Prediction. Noah A. Smith. Morgan & Claypool Synthesis Lectures on Human Language Technologies, 2011.