Blog


Legislating AI, what makes it difficult? - A closer look at the AI Act


Liel Glaser


One of the biggest challenges in working on machine learning and AI is the speed at which the field is developing. New articles are published daily, and almost every week a new model emerges that surpasses existing ones. It is difficult to predict where the next big innovation will arise and how it will be applied. The EU faced this challenge in crafting the AI Act: how do you write a useful law that can address the misuse of technologies that do not yet exist? To address this, the EU decided to adopt a comprehensive definition of AI, focusing on a technology-agnostic description: “‘AI system’ means a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” (AI Act, Article 3(1)) In parts, this could also apply to systems already in use that are not generally considered machine learning or AI; it should, however, cover all AI systems. Wide coverage is important here, since AI is already used in many different fields. AI likely affects all European citizens already, and since we cannot put the stochastic genie back in the lamp, we need to find ways to consider our wishes carefully. Disclaimer: This article is the result of one ML scientist taking a deep dive into EU regulations and trying to interpret and understand them. It should not be taken as legal advice, and all errors are definitely my own.

XAI: Nothing certain (with a probability score)


Yana Chekan


The “X” that has recently appeared in front of the familiar AI abbreviation is not intended to revolutionize the field, nor does it stand for anything specific beyond its literal meaning. Rather, it emphasizes the critical aspect of explainability. eXplainable Artificial Intelligence (XAI) designs methods to tackle some of the field's longstanding issues while introducing its own set of intriguing research questions. Read on to find out why XAI is worth drawing your attention (vector ;)) to.

Computer Vision


Early Classification of Crop Fields through Satellite Image Time Series


Tiago Sanona


In a fast-paced and ever-changing global economy, classifying crop fields via remote sensing only at the end of a growth cycle does not provide the immediate insight that decision makers need. To address this problem, we developed a model that allows continuous classification of crop fields at any point in time and improves its predictions as more data becomes available. In practice, we developed a single model capable of predicting which crops are growing at any point in time based on satellite data. The data available at inference time could be a few images from the beginning of the year or a full time series covering a complete growing cycle. This exceeds the capabilities of current deep learning solutions, which either only offer predictions at the end of the growing cycle or have to use multiple models specialized to return results at pre-specified points in time. This article details the key changes we made to the model described in a previous blog post, “Classification of Crop Fields through Satellite Image Time Series”, to extend its functionality and improve its performance. The results presented in this article are based on a research paper recently published by dida. For more detailed information about this topic and other experiments on this model, please check out the original manuscript: “Early Crop Classification via Multi-Modal Satellite Data Fusion and Temporal Attention”.
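To make the idea of time-incremental predictions concrete, here is a minimal, hypothetical PyTorch sketch (toy dimensions and names, not dida's actual architecture; date encodings and the multi-modal fusion from the paper are omitted): a causally masked encoder emits a class prediction after every acquisition, so the same network serves both early-season and end-of-season classification.

```python
import torch
import torch.nn as nn

class EarlyCropClassifier(nn.Module):
    """Toy sketch: one prediction per timestep from a causally masked encoder."""

    def __init__(self, n_bands=10, d_model=64, n_classes=8):
        super().__init__()
        self.embed = nn.Linear(n_bands, d_model)   # per-acquisition spectral embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)  # classification head per timestep

    def forward(self, x):
        # x: (batch, timesteps, bands); the causal mask restricts each
        # prediction to images acquired up to that point in the season.
        t = x.size(1)
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
        h = self.encoder(self.embed(x), mask=causal)
        return self.head(h)                        # (batch, timesteps, n_classes)

model = EarlyCropClassifier()
series = torch.randn(2, 12, 10)        # e.g. 12 acquisitions, 10 spectral bands
logits = model(series)
print(logits[:, 3].softmax(-1))        # prediction using only the first 4 images
```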

Leveraging Machine Learning for Environmental Protection


Edit Szügyi


Machine Learning has been solving complex problems for decades. Just think of how computer vision methods can reliably predict life-threatening diseases, how self-driving cars are on their way to revolutionizing traffic safety, or how automatic translation lets us talk to just about anyone on the planet. The power of Machine Learning has been embraced by many branches of industry and science. There are some areas, however, where the potential of Machine Learning is harder to see and less utilized. One of these is environmental protection. Protecting the natural environment is one of the biggest challenges our generation faces, with pressing issues such as climate change, plastic pollution and resource depletion. Let us look at how Machine Learning has been, and can be, used as a tool in environmental protection.


Fairness in Machine Learning


Cornelius Braun


In a previous blog post, we explained the plenitude of human biases that are often present in real-world data sets. Since practitioners may be forced to work with biased data, it is crucial to know ways in which the fairness of model decisions can nevertheless be guaranteed. Thus, in this post, I explain the most important ideas around fairness in machine learning (ML). This includes a short summary of the main metrics for measuring the fairness of your model's decisions and an overview of tools that can help you guarantee or improve your model's fairness.
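As a concrete taste of such metrics, here is a small self-contained sketch (with made-up toy data) computing two common quantities discussed in the post: the demographic parity difference and the equalized odds gaps between two groups.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gaps(y_true, y_pred, group):
    """Largest gaps in true-positive and false-positive rates between groups."""
    tprs, fprs = [], []
    for g in np.unique(group):
        m = group == g
        tprs.append(y_pred[m & (y_true == 1)].mean())  # true positive rate
        fprs.append(y_pred[m & (y_true == 0)].mean())  # false positive rate
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

# Toy example: binary predictions for two demographic groups A and B
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(demographic_parity_diff(y_pred, group))    # 0.25
print(equalized_odds_gaps(y_true, y_pred, group))  # approx. (0.67, 0.33)
```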

Introductions


LLM strategies part 1: Possibilities of implementing Large Language Models in your organization


David Berscheid


Large Language Models (LLMs) are a highly discussed topic in current strategy meetings of organizations across all industries. This article is the first of two parts providing some guidelines for organizations to determine their LLM strategy. It will help you identify the strategy with the most benefits while finding ways to resolve the associated complexities. For more content on LLMs, see our LLM hub.

How ChatGPT is fine-tuned using Reinforcement Learning


Thanh Long Phan


At the end of 2022, OpenAI released ChatGPT (a Transformer-based language model) to the public. Although based on the already widely discussed GPT-3, it launched an unprecedented boom in generative AI. It is capable of generating human-like text and has a wide range of applications, including language translation, language modeling, and generating text for applications such as chatbots. Feel free to also read our introduction to LLMs. ChatGPT seems so powerful that many people consider it a substantial step towards artificial general intelligence. The main reason for the recent successes of language models such as ChatGPT lies in their size (in terms of trainable parameters). But making language models bigger does not inherently make them better at following a user's intent; a bigger model can also become more toxic and more likely to "hallucinate". To mitigate these issues and to align models with user intentions more generally, one option is to apply Reinforcement Learning. In this blog post, we present an overview of the training process of ChatGPT and take a closer look at the use of Reinforcement Learning in language modeling. Also interesting: our aggregated collection of LLM content.
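To make the alignment objective tangible, here is a deliberately tiny, self-contained sketch of the idea behind RLHF-style fine-tuning. A single softmax distribution stands in for the language model and random numbers replace a trained reward model, so this illustrates only the objective (reward maximization under a KL penalty), not OpenAI's actual PPO pipeline.

```python
import torch
import torch.nn.functional as F

vocab = 6                                    # toy vocabulary size
ref_logits = torch.randn(vocab)              # frozen "pretrained" reference model
policy_logits = ref_logits.clone().requires_grad_(True)
reward = torch.randn(vocab)                  # stand-in for a reward model's scores
beta = 0.1                                   # strength of the KL penalty
opt = torch.optim.Adam([policy_logits], lr=0.1)

for _ in range(200):
    log_p = F.log_softmax(policy_logits, dim=-1)
    p = log_p.exp()
    expected_reward = (p * reward).sum()
    # KL(policy || reference) keeps the tuned model close to the pretrained one
    kl = (p * (log_p - F.log_softmax(ref_logits, dim=-1))).sum()
    loss = -(expected_reward - beta * kl)    # maximize reward minus KL penalty
    opt.zero_grad()
    loss.backward()
    opt.step()

# probability mass has shifted towards high-reward tokens
print(F.softmax(policy_logits, dim=-1).detach())
```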

Natural Language Processing


Extend the knowledge of your Large Language Model with RAG


Thanh Long Phan, Fabian Dechent


Large Language Models (LLMs) have rapidly gained popularity in natural language tasks due to their remarkable human-like ability to understand and generate text. Amidst great advances, there are still challenges to be solved on the way to building perfectly reliable assistants. LLMs are known to make up answers, often producing text that adheres to the expected style but lacks accuracy or factual grounding. Generated words and phrases are chosen because they are likely to follow the previous text, where the likelihood is adjusted to fit the training corpus as closely as possible. This gives rise to the possibility that a piece of information is outdated, if the corpus is not updated and the model retrained, or simply factually incorrect, while the generated words still sound plausible and match the required genre. The core problem is that the LLM does not know what it does not know. In addition, even if a piece of information is correct, it is hard to track its source in order to enable fact-checking. In this article, we introduce Retrieval-Augmented Generation (RAG) as a method that addresses both problems and thus aims to enhance the reliability and accuracy of information generated by LLMs.
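In essence, RAG retrieves documents relevant to a query and injects them into the model's context. Here is a minimal sketch of that loop, assuming the OpenAI Python client and cosine similarity over embeddings (the model names, documents, and prompt format are illustrative choices, not part of the post):

```python
import numpy as np
from openai import OpenAI  # any LLM provider with embeddings would work

client = OpenAI()
documents = [
    "dida is a machine learning company based in Berlin.",
    "RAG retrieves documents and passes them to the model as context.",
    "Satellite image time series can be used for crop classification.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(documents)

def answer(question, k=2):
    q = embed([question])[0]
    # cosine similarity between the question and every document
    scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    context = "\n".join(documents[i] for i in np.argsort(scores)[-k:])
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": f"Answer using this context:\n{context}"},
                  {"role": "user", "content": question}],
    )
    return chat.choices[0].message.content

print(answer("What does RAG do?"))
```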


Extracting information from technical drawings


Frank Weilandt (PhD)


Did you ever need to combine data about an object from two different sources, say, images and text? We often face such challenges in our work at dida. Here we present an example from the realm of technical drawings. Such drawings are used in many fields by specialists to share information. They follow very specific guidelines so that every specialist can understand what is depicted. Normally, technical drawings come in formats that allow indexing, such as svg, html, dwg or dwf, but many, especially older ones, exist only in image formats (jpeg, png, bmp, etc.), for example from book scans. Drawings of this kind are hard to access automatically, which makes using them difficult and time-consuming. Automatic detection tools could therefore be used to facilitate the search. In this blog post, we demonstrate how both traditional and deep-learning-based computer vision techniques can be applied to extract information from exploded-view drawings. We assume that such a drawing is given together with some textual information for each object in the drawing, and that the objects can be identified by numbers connected to them. Here is a rather simple example of such a drawing: an electric drill machine. There are three key components in each drawing: the numbers, the objects and the auxiliary lines. The auxiliary lines connect the objects to the numbers. The task at hand is to find all objects of a certain kind/class over a large number of drawings, e.g. the socket with number 653 in the image above appears in several drawings and even in drawings from other manufacturers. This is a typical classification task, but with a caveat: since additional information for each object is accessible through the numbers, we first need to assign each number on the image to the corresponding object. Next, we describe how this auxiliary task can be solved using traditional computer vision techniques.
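As a rough illustration of that number-to-object assignment, here is a hypothetical OpenCV sketch (the file name, thresholds, and OCR-derived number boxes are all made-up placeholders): auxiliary lines are detected with a probabilistic Hough transform, and each line's endpoints are matched to the nearest number box on one side and object contour on the other.

```python
import cv2
import numpy as np

img = cv2.imread("drawing.png", cv2.IMREAD_GRAYSCALE)
binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Thin, straight auxiliary lines show up well in a probabilistic Hough transform.
lines = cv2.HoughLinesP(binary, 1, np.pi / 180, threshold=60,
                        minLineLength=40, maxLineGap=5)

# Assume number positions come from an OCR step (e.g. Tesseract) and object
# candidates from contour detection; the number list here is a placeholder.
number_boxes = [("653", (120, 40, 30, 20))]   # (text, (x, y, w, h))
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

def nearest(point, rects):
    """Index of the rectangle whose center is closest to the given point."""
    centers = [(x + w / 2, y + h / 2) for (x, y, w, h) in rects]
    dists = [np.hypot(point[0] - cx, point[1] - cy) for cx, cy in centers]
    return int(np.argmin(dists))

for line in lines if lines is not None else []:
    x1, y1, x2, y2 = line[0]
    # one endpoint should sit near a number, the other near an object
    i = nearest((x1, y1), [box for _, box in number_boxes])
    j = nearest((x2, y2), [cv2.boundingRect(c) for c in contours])
    print(f"number {number_boxes[i][0]} -> object contour {j}")
```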

21 questions we ask our clients: Starting a successful ML project


Emilius Richter


Automating processes using machine learning (ML) algorithms can increase the efficiency of a system beyond human capacity, and the approach is thus becoming more and more popular in many industries. But between an idea and a well-defined project, there are several points that need to be considered in order to properly assess the economic potential and technical complexity of the project. Especially for companies like dida that offer custom workflow automation software, a well-prepared project helps to quickly assess the feasibility and the overall technical complexity of the project goals, which in turn makes it possible to deliver software that fulfills the client's requirements. In this article, we discuss which topics should be considered in advance and why the questions we ask are important for starting a successful ML software project.

Remote Sensing


The best (Python) tools for remote sensing


Emilius Richter


An estimated 906 Earth observation satellites are currently in orbit, providing science and industry with many terabytes of data every day. The satellites operate with both radar and optical sensors and cover different spectral ranges with varying spectral, spatial, and temporal resolutions. Due to this broad spectrum of geospatial data, it is possible to find new applications for remote sensing methods in many industrial and governmental institutions. On our website, you can find some projects in which we have successfully used satellite data, as well as possible use cases of remote sensing methods for various industries. Well-known satellite systems and programs include Sentinel-1 (radar) and Sentinel-2 (optical) from ESA, Landsat (optical) from NASA, TerraSAR-X and TanDEM-X (both radar) from DLR, and PlanetScope (optical) from Planet. There are basically two types of geospatial data: raster data and vector data.

Raster data

Raster data are a grid of regularly spaced pixels, where each pixel is associated with a geographic location, and are represented as a matrix. The pixel values depend on the type of information that is stored, e.g. brightness values for digital images or temperature values for thermal images. The size of the pixels also determines the spatial resolution of the raster. Geospatial raster data are thus used to represent satellite imagery. Raster images usually contain several bands or channels, e.g. a red, a green, and a blue channel. In satellite data, there are also often infrared and/or ultraviolet bands.

Vector data

Vector data represent geographic features on the earth's surface, such as cities, country borders, roads, bodies of water, property rights, etc. Such features are represented by one or more connected vertices, where a vertex defines a position in space by x-, y- and z-values. A single vertex is a point, multiple connected vertices form a line, and at least three connected, closed vertices form a polygon. The x-, y-, and z-values are always related to a coordinate reference system (CRS) that is stored in vector files as meta information. The most common file formats for vector data are GeoJSON, KML, and Shapefile.

In order to process and analyze these data, various tools are required. In the following, I present the tools we at dida have had the best experience with and which are regularly used in our remote sensing projects, grouped into the following sections:

Requesting satellite data: EOBrowser, Sentinelsat, Sentinelhub
Processing raster data: Rasterio, Pyproj, SNAP, pyroSAR, Rioxarray
Processing vector data: Shapely, python-geojson, geojson.io, Geopandas, Fiona
Providing geospatial data: QGIS, GeoServer, Leafmap
Processing meteorological satellite data: Wetterdienst, Wradlib
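As a quick impression of how these two data types are handled in Python, here is a minimal sketch using Rasterio and Geopandas (the file names are hypothetical placeholders):

```python
import rasterio
import geopandas as gpd

# Raster: read the first band as a numpy array plus georeferencing metadata.
with rasterio.open("scene.tif") as src:       # hypothetical file
    red = src.read(1)
    crs = src.crs                             # coordinate reference system
    print(red.shape, crs, src.transform)

# Vector: read features into a GeoDataFrame and reproject to the raster CRS,
# so that e.g. field polygons and pixels line up in the same coordinates.
fields = gpd.read_file("fields.geojson")      # hypothetical file
fields = fields.to_crs(crs)
print(fields.geometry.head())
```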

Software Development


Managing layered requirements with pip-tools


Augusto Stoffel (PhD)


When building Python applications for production, it's good practice to pin all dependency versions, a process also known as “freezing the requirements”. This makes deployments reproducible and predictable. (For libraries and user applications, the needs are quite different: there, one should support a large range of versions for each dependency in order to reduce the potential for conflicts.) In this post, we explain how to manage a layered requirements setup without forgoing the improved conflict resolution algorithm recently introduced in pip. We provide a Makefile that you can use right away in any of your projects!
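For orientation, here is roughly what such a layered setup looks like with pip-tools (the file names follow common convention; the post's Makefile automates the compile steps shown in the comments):

```
# requirements.in -- direct runtime dependencies, intentionally unpinned
requests

# dev-requirements.in -- development layer, constrained by the compiled base
-c requirements.txt
pytest

# Compile each layer into a fully pinned file:
#   pip-compile requirements.in        # writes requirements.txt
#   pip-compile dev-requirements.in    # writes dev-requirements.txt
```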

Project proposals - the first step to a successful ML project


Emilius Richter


Many machine learning (ML) projects are doomed to fail. This can be due to various reasons, which often occur in combination. To avoid failure, all involved stakeholders need to understand the technical and organizational requirements of the project. Besides all the preliminary discussions that define the project, it is important to summarize the project-relevant information in a comprehensive proposal. It should cover the technical and organizational requirements, possible problem areas and technical restrictions. In this article, I describe the most important modules of a machine learning project proposal. For a software provider like dida, the project proposal is the first step towards meeting the needs of the customer.



Theory & Algorithms


Deep Learning vs Machine Learning: What is the difference?


Serdar Palaoglu


In the realm of artificial intelligence, two fundamental concepts, Machine Learning and Deep Learning, have emerged as key components in the advancement of computer-based learning systems. Machine Learning serves as a foundational principle whereby computers gain the ability to learn from data without explicit programming. Deep Learning, an evolution within the Machine Learning framework, utilizes artificial neural networks inspired by the human brain for complex data analysis. This article explores both domains, elucidating their differences, practical applications, and significance in artificial intelligence.
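To make the distinction concrete, here is a small sketch (toy dataset and arbitrary hyperparameters, with scikit-learn's small MLP merely standing in for a true deep network): a linear model and a neural network are trained on the same nonlinear problem, and only the latter learns the curved decision boundary.

```python
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Two interleaved half-moons: not separable by a straight line.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

classical = LogisticRegression().fit(X_tr, y_tr)          # linear decision boundary
neural = MLPClassifier(hidden_layer_sizes=(32, 32),
                       max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("logistic regression:", classical.score(X_te, y_te))  # limited by linearity
print("neural network:     ", neural.score(X_te, y_te))     # learns the nonlinearity
```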
