07 Nov, 2019
18:30 talk 1
19:30 short break
19:45 talk 2
Talk 1: Olaf de Leeuw
Reinforcement Learning is a category of Machine Learning which, in contrast to supervised learning, uses no labeled data. It is all about sequential decision making, possibly in a complex environment with many options. There is no known outcome upfront to match your predictions against, but there is a puzzle to solve or a game to win. The foundations of reinforcement learning are built on probability theory, especially the Bellman equation.
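As a small illustration of the Bellman equation mentioned above, here is a sketch of value iteration on a made-up "corridor" MDP. The toy environment, constants, and function names are illustrative assumptions, not taken from the talk:

```python
# A minimal sketch of the Bellman optimality update on a tiny
# deterministic "corridor" MDP: states 0..3, where reaching state 3
# (the goal) yields reward 1. All of this is an illustrative toy,
# not the speaker's actual example.

GAMMA = 0.9          # discount factor
N_STATES = 4         # states 0, 1, 2, 3; state 3 is terminal

def step(state, action):
    """Deterministic transition: action -1 = left, +1 = right."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 and state != N_STATES - 1 else 0.0
    return nxt, reward

def value_iteration(n_sweeps=50):
    """Repeatedly apply V(s) <- max_a [ r(s,a) + gamma * V(s') ]."""
    V = [0.0] * N_STATES
    for _ in range(n_sweeps):
        for s in range(N_STATES - 1):        # terminal state keeps V = 0
            V[s] = max(
                r + GAMMA * V[nxt]
                for nxt, r in (step(s, a) for a in (-1, +1))
            )
    return V

V = value_iteration()
# Values decay by a factor gamma per step away from the goal:
# V = [0.81, 0.9, 1.0, 0.0]
```

The expected value of each state drops by a factor of gamma per step from the goal, which is exactly the structure the Bellman equation encodes.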
I explain these mathematical concepts using an easy introductory example of a reinforcement learning algorithm. Once we have finished this example, we will immediately continue with a real-world case I'm working on at a Dutch electricity network operator. There I'm trying to develop a reinforcement learning algorithm to support engineers in designing the optimal electricity network for new neighborhoods. I will show the challenges I'm facing and discuss a limitation of the easy example from my introduction.
Mathematical and Machine Learning concepts in my talk are: Bellman equation, expected value, explore-exploit, learning rate and gradient descent.
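Several of the listed concepts (expected value, explore-exploit, learning rate) come together in an epsilon-greedy agent. A minimal sketch on a made-up three-armed bandit; the arm means and all constants are illustrative assumptions, not the talk's content:

```python
# A minimal sketch of the explore-exploit trade-off: an epsilon-greedy
# agent estimating the expected value of each arm of a toy bandit.
import random

random.seed(0)

TRUE_MEANS = [0.2, 0.5, 0.8]   # true expected reward per arm (unknown to agent)
EPSILON = 0.1                  # probability of exploring a random arm
ALPHA = 0.1                    # learning rate for the value estimates

def pull(arm):
    """Bernoulli reward with the arm's true mean."""
    return 1.0 if random.random() < TRUE_MEANS[arm] else 0.0

def run(n_steps=5000):
    Q = [0.0] * len(TRUE_MEANS)          # estimated expected value per arm
    for _ in range(n_steps):
        if random.random() < EPSILON:
            arm = random.randrange(len(Q))           # explore
        else:
            arm = max(range(len(Q)), key=Q.__getitem__)  # exploit
        reward = pull(arm)
        Q[arm] += ALPHA * (reward - Q[arm])          # learning-rate update
    return Q

Q = run()
```

After enough steps the estimate for the best arm approaches its true expected value of 0.8, and the agent mostly exploits it while occasionally exploring the others.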
The takeaway is a notebook with the introductory example.
Speaker: Olaf de Leeuw
Talk 2: Transfer Learning for language tasks
Today, more and more applications of Natural Language Processing turn up. Think of recommender engines based on texts, or live chatbots, but also classic tasks like classifying spam in e-mails. Neural networks allow us to train complex models for these tasks, which is great! However, these models require a lot of data for training. Obviously, there is plenty of text to be found on the internet, but what if you have a more specific task in mind (as you usually do) and your data set is not that big? Well, in that case (among others), transfer learning may help you out. In this talk, I will use one of the projects I have worked on to demonstrate how transfer learning works. I will show you how to take a pre-trained model and use it to train your own model, all on natural language data with a limited data set. Yet we get a reasonable model! After this talk, you will be able to apply the same principles.
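The core idea can be sketched in a few lines: reuse a frozen, pre-trained representation and train only a small head on your own limited data. The "pretrained" embeddings and the tiny sentiment data set below are made-up stand-ins, not the speaker's actual project or model:

```python
# A minimal sketch of transfer learning for text: frozen pre-trained
# embeddings plus a small trainable classifier head. Everything here
# (vocabulary, embeddings, labels) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained embedding table (normally loaded from a
# published word-vector file); it stays frozen during training.
VOCAB = ["good", "great", "bad", "awful", "movie", "plot"]
EMB = rng.normal(size=(len(VOCAB), 8))

def featurize(text):
    """Average the frozen embeddings of known words (bag-of-vectors)."""
    idx = [VOCAB.index(w) for w in text.split() if w in VOCAB]
    return EMB[idx].mean(axis=0)

# A deliberately tiny labeled set: 1 = positive, 0 = negative.
texts = ["good movie", "great plot", "bad movie", "awful plot"]
labels = np.array([1.0, 1.0, 0.0, 0.0])
X = np.stack([featurize(t) for t in texts])

# Train only a logistic-regression head on the frozen features.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.5 * (X.T @ (p - labels)) / len(labels)   # gradient descent
    b -= 0.5 * (p - labels).mean()

preds = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
```

Because the heavy lifting sits in the reused representation, only a handful of parameters need to be fit, which is what makes a small data set workable.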
Speaker: Maarten van Duren