Primer: Mechanisms for generalized learning across tasks and environments

MIT Media Lab, Massachusetts Institute of Technology

Current approaches to machine learning often involve tuning an algorithm to perform well on a specific task, and as such do not represent a general method for learning that transfers across scenarios. This talk covers a range of techniques for addressing this problem: multi-task learning, transfer learning, intrinsic motivation in reinforcement learning, and learning from human preferences. I show how multi-task learning can account for a large degree of heterogeneity between individuals and improve performance in predicting mental health outcomes. Transfer learning can be used to combine training on existing data with reinforcement learning, both reducing catastrophic forgetting and improving drug discovery algorithms. Finally, I argue that social learning is an important intrinsic motivator, and show how it can be applied both in multi-agent systems and to learn from implicit human preferences.
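The multi-task idea mentioned above is often realized through hard parameter sharing: a representation trained on all tasks jointly, with a small task-specific head per task. The sketch below is a minimal, hypothetical illustration of that structure (synthetic data, toy dimensions, plain gradient descent), not the model used in the talk's mental health work.

```python
import numpy as np

# Minimal sketch of hard parameter sharing for multi-task learning.
# All data, dimensions, and learning rates here are hypothetical.
rng = np.random.default_rng(0)

n, d, h = 200, 5, 3
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, h))
Z = np.tanh(X @ W_true)          # shared latent structure in the data
y1 = Z @ rng.normal(size=h)      # target for task 1
y2 = Z @ rng.normal(size=h)      # target for task 2 (related, not identical)

# Shared representation W, plus one linear "head" per task.
W = rng.normal(size=(d, h)) * 0.1
v1 = np.zeros(h)
v2 = np.zeros(h)
lr = 0.05

def losses():
    H = np.tanh(X @ W)
    return np.mean((H @ v1 - y1) ** 2), np.mean((H @ v2 - y2) ** 2)

l1_init, l2_init = losses()

for _ in range(500):
    H = np.tanh(X @ W)           # shared features for both tasks
    e1 = H @ v1 - y1
    e2 = H @ v2 - y2
    # Gradients for the task-specific heads.
    g_v1 = 2 * H.T @ e1 / n
    g_v2 = 2 * H.T @ e2 / n
    # The shared weights receive gradient signal from BOTH tasks.
    dH = (2 * np.outer(e1, v1) + 2 * np.outer(e2, v2)) / n
    g_W = X.T @ (dH * (1 - H ** 2))
    v1 -= lr * g_v1
    v2 -= lr * g_v2
    W -= lr * g_W

l1_final, l2_final = losses()
```

Because the shared weights `W` are updated with gradients from both tasks, each task acts as a regularizer for the other; with heterogeneous individuals, task-specific heads can absorb per-group differences while the shared layer captures common structure.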
