Recurrent neural networks: How computers learn to read


Fabian Gringel

Applications of Natural Language Processing (NLP) such as semantic search (Google), automated text translation (e.g. DeepL) or text classification (e.g. email spam filters) have become an integral part of our everyday lives. In many areas of NLP, decisive progress rests on the development and study of a class of artificial neural networks that is particularly well suited to the sequential structure of natural language: recurrent neural networks, or RNNs for short.
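To give a feel for what "suited to sequential structure" means, here is a minimal sketch of the core RNN recurrence: a hidden state is updated token by token, so that at each step it summarizes everything read so far. The dimensions, parameter names and random toy inputs below are illustrative assumptions, not taken from the webinar.

```python
import numpy as np

rng = np.random.default_rng(0)

input_size, hidden_size, seq_len = 8, 16, 5

# Parameters are shared across all time steps
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden
b_h = np.zeros(hidden_size)

# A toy input sequence, e.g. embeddings of 5 tokens
xs = rng.normal(size=(seq_len, input_size))

h = np.zeros(hidden_size)  # initial hidden state
for x_t in xs:
    # The hidden state carries information from all previous tokens forward,
    # which is what makes RNNs a natural fit for text.
    h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)

print(h.shape)  # (16,) -- a fixed-size summary of the whole sequence
```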

The webinar will give an introduction to how RNNs work and illustrate their use in an example project from the field of legal tech. It will conclude with an outlook on the future relevance of RNNs alongside alternative approaches such as BERT and convolutional neural networks.
