Digital public administration: intuitive online access through AI


The following article describes how AI can help to establish digital public administration services. It begins with a fundamental problem that AI can solve here: authorities often use a language that differs markedly from everyday speech. Using the example of business registrations and the AI model BERT, a possible solution is explained and ideas for further areas of application are outlined.

Introduction 

Digital public administration must be low-threshold

The digitisation of public administration is gaining momentum in Germany. Uniform access to government services, as provided for in the new Online Access Act (Onlinezugangsgesetz), will be an important milestone in this process.


A central concern of the service standard developed for this purpose is that all citizens can use digital services simply and intuitively. Easy operation of digital products depends not only on an intuitive user interface, but also on simple communication between user and service provider. Many government services, for example, require applicants to use terms from administrative jargon that differ considerably from everyday language.


Moreover, with digital applications, consulting the competent authority is considerably more time-consuming than during an on-site visit. A language barrier can therefore quickly arise, leading to frustration on the part of applicants and, in the worst case, to a lack of acceptance of the digital service.

AI as a language bridge between everyday and administrative language

AI-based algorithms can remedy this by building a language bridge between everyday and administrative language: citizens formulate their concerns or questions in familiar words, and the AI translates them into the appropriate technical terms. In this way, even complex application procedures can be handled digitally in a user-friendly way.


An example of the successful use of AI as a language bridge in the administration is the industry code determination during the business registration process.

Application example: Business registration

Finding the right industry is difficult

An industry code must be given with every application for a business registration. This code describes the economic sector of the trade; the underlying classification of economic activities into a total of 839 codes is maintained by the Federal Statistical Office (destatis). However, assigning the correct code to one's own trade can be difficult for various reasons:

  • The language used in the destatis descriptions is very different from everyday speech (e.g. "motor vehicles with a total weight of less than 3.5 t" instead of "car").
  • Classification into subclasses is very fine-grained and the correct assignment is therefore sometimes unclear.

AI saves tedious research

To support trade offices, dida and a cooperation partner have therefore developed the AI behind a chatbot interface in which a description of one's own trade can be entered, whereupon suggestions for relevant industry codes, together with their descriptions, are provided (see picture). For this purpose, an AI algorithm compares the entered free text with the descriptions and examples for individual industry codes provided by destatis and determines the categories with the highest match. This reduces the need for consultation with the authorities during the application process, and with it the workload for citizens; at the same time, the burden on the authorities is reduced. Since the algorithm only provides suggestions and does not make a final selection, full control over the process remains with the human user.
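Structurally, such a suggestion step is a ranking over the code catalogue. The sketch below mirrors only that structure: a toy word-overlap score stands in for the AI model, and the catalogue excerpt is invented for illustration.

```python
import heapq

# Hypothetical excerpt of the code catalogue (entries invented for illustration):
CATALOGUE = {
    "13.30.0": "finishing of textiles",
    "47.11.0": "retail sale in non-specialised stores",
    "49.41.0": "freight transport by road",
}

def word_overlap(query: str, description: str) -> int:
    """Toy relevance score: number of words the query shares with a description.
    The real system uses an AI-based score here instead."""
    return len(set(query.lower().split()) & set(description.lower().split()))

def suggest_codes(query: str, catalogue: dict, k: int = 2) -> list:
    """Score every code description against the query and return the k best
    codes as suggestions; the final choice remains with the human user."""
    scored = [(word_overlap(query, desc), code) for code, desc in catalogue.items()]
    return [code for _, code in heapq.nlargest(k, scored)]

print(suggest_codes("transport of freight", CATALOGUE))  # → ['49.41.0', '13.30.0']
```

Because only a ranked shortlist is returned, the applicant always sees several candidate codes and makes the final selection.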

A versatile technology in the background: BERT

The construction of such a language bridge is made possible by an AI architecture specialized in language processing: the neural network BERT (Bidirectional Encoder Representations from Transformers), developed by Google.


What makes BERT special is its ability to capture the meaning behind different words and to take their context into account when computing its results. For the industry code example this means, among other things, that BERT automatically assigns the search term "printing of fabric bags" to the matching industry code "finishing of textiles", although neither "fabric bag" nor "printing" appears in the corresponding description text. Other text processing approaches struggle with such an assignment. Classic full-text search, for instance, requires time-consuming manual maintenance of synonym lists to achieve a comparable result. More advanced search methods in widespread use today weight individual words statistically (e.g. TF-IDF); such methods power the search on Wikipedia, for example, but even they can neither recognise synonymous words and expressions on their own nor take the context of search terms into account. Both are important prerequisites for overcoming language barriers.
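The limitation of purely word-based methods such as TF-IDF can be illustrated in a few lines. In this sketch (the descriptions are shortened and the second one is invented), a hand-rolled TF-IDF search gives the correct description "finishing of textiles" a score of zero for the query "printing of fabric bags", because no meaningful word is shared, while a superficially similar but wrong description wins; BERT, by contrast, matches via meaning.

```python
import math
from collections import Counter

# Two candidate code descriptions (shortened / invented for the sketch):
docs = ["finishing of textiles", "printing of newspapers"]
query = "printing of fabric bags"

doc_words = [d.split() for d in docs]
n_docs = len(docs)
df = Counter(w for words in doc_words for w in set(words))  # document frequency

def tfidf(words):
    """Term frequency weighted by smoothed inverse document frequency."""
    tf = Counter(words)
    return {w: tf[w] * math.log((1 + n_docs) / (1 + df[w])) for w in tf}

def cosine(u, v):
    norm = lambda vec: math.sqrt(sum(x * x for x in vec.values()))
    dot = sum(u[w] * v[w] for w in set(u) & set(v))
    return dot / (norm(u) * norm(v)) if norm(u) and norm(v) else 0.0

q = tfidf(query.split())
scores = [cosine(q, tfidf(words)) for words in doc_words]
# The correct description scores 0.0; the wrong one scores higher
# because it happens to share the surface word "printing".
print(dict(zip(docs, scores)))
```

The word "of" is shared by both descriptions and therefore receives an IDF weight of zero, so the only signal left is exact keyword overlap; this is precisely the failure mode a contextual model avoids.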


In order for BERT to develop its special kind of language skills, it is trained in a two-step procedure.

Building up an understanding of language

First, the model learns a kind of general understanding of a particular language; large, freely available data sets such as Wikipedia articles or book collections can be used for this purpose. BERT learns according to the maxim that a word is characterised by the company it keeps, because the meaning of a word is strongly shaped by its context (a principle known as the distributional hypothesis). From these large text collections, BERT can thus learn to recognise semantically similar words and expressions by considering their context. This is a relatively complex process that takes a long time and expensive resources: even on very powerful hardware specialized for this task, the calculations take several days. The advantage is that this part of the training only needs to be carried out once per language: once it has been completed for a language, all approaches to solving concrete problems in that language can build on the resulting model. Since such pre-trained models are freely available, this complex step normally does not have to be performed in-house.
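The distributional hypothesis can be made tangible with a miniature co-occurrence model. This is only a toy stand-in, not how BERT is actually implemented: words that never appear together, such as "car" and "automobile" below, still end up with similar vectors because they occur in the same contexts.

```python
import math
from collections import Counter, defaultdict

# A tiny toy corpus; BERT's pre-training uses billions of words instead.
corpus = [
    "the car drives on the road",
    "the automobile drives on the road",
    "the cat sleeps on the sofa",
]

# Count, for each word, which other words occur within a +/-2 word window.
window = 2
cooc = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                cooc[w][words[j]] += 1

def cosine(u, v):
    norm = lambda c: math.sqrt(sum(x * x for x in c.values()))
    dot = sum(u[w] * v[w] for w in set(u) & set(v))
    return dot / (norm(u) * norm(v))

# "car" and "automobile" never co-occur, yet share identical contexts:
print(cosine(cooc["car"], cooc["automobile"]) > cosine(cooc["car"], cooc["cat"]))  # → True
```

BERT replaces these simple co-occurrence counts with learned contextual representations, but the underlying idea, inferring meaning from surrounding words, is the same.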

Specialization on a concrete task

In the second step, the model is specialized for a specific problem. For this purpose, problem-specific annotated data sets are required: in the case of the application described above, this was a large data set of historical trade registrations with their corresponding industry codes. The model was then trained to assign the correct industry code to individual trade registrations.
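Structurally, this specialization step is supervised classification: descriptions in, industry codes out. The following toy sketch mirrors only that structure, with an invented miniature data set and a simple bag-of-words centroid per code standing in for the fine-tuned BERT model.

```python
from collections import Counter

# Hypothetical miniature training set: historical trade descriptions with
# their annotated industry codes (texts and codes invented for the sketch).
train = [
    ("repair of cars and motor vehicles", "45.20.0"),
    ("workshop for motor vehicles", "45.20.0"),
    ("sale of bread and cakes", "47.24.0"),
    ("bakery selling bread", "47.24.0"),
]

# "Training": build one bag-of-words centroid per code. A fine-tuned BERT
# learns far richer representations; only the input/output shape is the same.
centroids = {}
for text, code in train:
    centroids.setdefault(code, Counter()).update(text.split())

def classify(text: str) -> str:
    """Assign the code whose centroid overlaps most with the description."""
    words = set(text.split())
    return max(centroids, key=lambda c: sum(centroids[c][w] for w in words))

print(classify("repair shop for vehicles"))  # → 45.20.0
```

The real model is trained end-to-end on the annotated registrations, but the interface is identical: free text goes in, an industry code comes out.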


This specialization step has to be repeated for each problem, but the training is much faster than the first step and requires fewer resources, because BERT benefits greatly from the general "language understanding" acquired in the first step. This makes BERT a very versatile model that can be applied to many different language processing problems.

Limitations and alternatives

However, there are also factors which may lead to the conclusion that BERT is not the optimal model and that other machine learning models or methods are better suited to solve a particular problem:

  • Data set size: As the second training step of BERT is also based on deep learning methods, a certain minimum amount of annotated data is required. If only very little annotated data is available, it may be better to use methods that do not require annotated training data (e.g. TF-IDF, word2vec, fastText) or rule-based algorithms (e.g. Boolean search, fuzzy matching).
  • Number of documents to be searched: BERT needs a certain amount of time to perform its calculations for each individual document. If the number of documents or database entries to be searched becomes very large, it may be useful to combine BERT with another machine learning method: a much faster but less accurate method makes a pre-selection, whose results are then refined by BERT. This is especially worthwhile when the speed of the algorithm plays an important role.
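The pre-selection idea from the second point can be sketched as a two-stage search. Here a cheap word-overlap filter plays the fast stage, and a toy Jaccard similarity stands in for the expensive BERT re-scoring; both scorers are placeholders, not the real system's models.

```python
import heapq

def fast_score(query: str, doc: str) -> int:
    """Stage 1: cheap pre-filter, here just a shared-word count."""
    return len(set(query.split()) & set(doc.split()))

def accurate_score(query: str, doc: str) -> float:
    """Stage 2: placeholder for the expensive model (BERT in the real
    system); a toy Jaccard similarity stands in here."""
    q, d = set(query.split()), set(doc.split())
    return len(q & d) / len(q | d)

def search(query: str, documents: list, k_candidates: int = 100, k_results: int = 3):
    # Run the cheap scorer over the whole collection ...
    candidates = heapq.nlargest(k_candidates, documents,
                                key=lambda d: fast_score(query, d))
    # ... and the expensive scorer only on the shortlist.
    return heapq.nlargest(k_results, candidates,
                          key=lambda d: accurate_score(query, d))

docs = ["finishing of textiles", "freight transport by road", "retail sale of food"]
print(search("transport of freight by road", docs, k_candidates=2, k_results=1))
# → ['freight transport by road']
```

The expensive model thus only ever sees `k_candidates` documents instead of the whole collection, which is what makes the combination fast enough for large databases.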

Many areas of digital administration hold great potential for AI

The application possibilities of AI as a language bridge are of course not limited to business registration. Other fields in which support of public administration services by AI is conceivable include:

  • Search for information
  • Answering questions
  • Extraction of information from application forms

If you have any questions about the article or about AI in administration, please contact us!