
Defect detection in manufacturing

AI-supported optical defect detection for semiconductor laser production

Case Study
Computer Vision

General

Challenges

The data is divided into two broad categories: facet images, showing a cross section of the semiconductor, and p-side images, showing it from above (see the example images below). Each of these poses a different set of challenges. In particular, the p-side data has a complex etched structure that makes conventional computer vision difficult, while in the facet data defects can vary significantly in size and colour.

Example of laser diode facet inspection with different types of defects.

Example of p-side inspection with different severity of defects (FBH image).

General

Solution

We use a separate model for each class of data, both based on convolutional neural networks. These models detect and classify the defects.

The output of the models is passed to a rule-based system that determines the severity of the defects based on their class and location within the image. The final result is returned both as an image with the defect regions highlighted and as a CSV file containing the classes and locations of the defects.
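As a rough illustration of that output format, the sketch below writes a list of detected defects to a CSV file. The field names and example values are placeholders, not the actual schema used in the project.

```python
import csv
from dataclasses import dataclass

# Hypothetical defect record; the field names are illustrative, not dida's schema.
@dataclass
class Defect:
    cls: str       # predicted defect class
    x: int         # bounding-box top-left x (pixels)
    y: int         # bounding-box top-left y (pixels)
    w: int         # bounding-box width
    h: int         # bounding-box height
    severity: str  # assigned by the rule-based step

def write_defect_csv(defects: list[Defect], path: str) -> None:
    """Write one row per detected defect, as in the CSV output described above."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["class", "x", "y", "width", "height", "severity"])
        for d in defects:
            writer.writerow([d.cls, d.x, d.y, d.w, d.h, d.severity])

# Example usage with made-up detections:
write_defect_csv(
    [Defect("scratch", 120, 40, 35, 8, "critical"),
     Defect("particle", 480, 310, 5, 5, "minor")],
    "defects.csv",
)
```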

Technical

Challenges

A number of difficulties set this project apart from other computer vision projects:

  • Bimodal data: We have two disjoint datasets, p-side and facet, which share some but not all possible defect types.

  • Scale of defects: Defects can range from a few pixels to a significant proportion of an image that is thousands of pixels across.

  • Background structure: The p-side in particular has an elaborate etched structure that makes classical computer vision techniques such as line detection and thresholding difficult. The structure can change between examples, so removing it with a manually programmed pipeline is not practical.

  • Background colour changes: The colour and brightness of the image depend on the material used. Future images may differ from any example in the training set.

  • Location-based severity: The severity of a defect depends on where it is situated. A defect that would make a sample unusable if it lay in the active zone may be negligible if it is away from the emitter.

  • Deployment: The finished software has to be deployed in a number of diverse environments, including integration with systems such as LabVIEW.

Technical

Solution

The project is still ongoing, so the solution is not yet finalised, but a general overview can be given here. Each subset of the data will use its own model, a convolutional neural network implemented in PyTorch.
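The exact network architecture is not specified here, so the following sketch simply adapts a standard torchvision detection model (Faster R-CNN) to a hypothetical number of defect classes; the models actually used in the project may differ.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 5  # hypothetical: background + four defect classes

def build_defect_detector(num_classes: int = NUM_CLASSES) -> torch.nn.Module:
    # Start from a Faster R-CNN detector pre-trained on COCO ...
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # ... and replace the box classification head with one sized for our defect classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_defect_detector()
model.eval()
with torch.no_grad():
    # One dummy 3-channel image tensor with values in [0, 1].
    predictions = model([torch.rand(3, 512, 512)])
# predictions[0] contains "boxes", "labels" and "scores" tensors.
```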

Extensive data augmentation is required to ensure that the models are robust to changes in the input distribution, as the same models are likely to be applied to new materials. This includes geometric transformations (rotations and flips) and colour transformations.
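A possible augmentation pipeline along these lines could look as follows; the specific transforms and parameter values are illustrative, not the ones used in the project.

```python
import torchvision.transforms as T

# Illustrative augmentation pipeline: geometric and colour transformations
# as described above, with placeholder parameters.
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),
    T.RandomRotation(degrees=90),      # geometric: random rotation up to 90 degrees
    T.ColorJitter(brightness=0.3,      # colour: simulate materials whose brightness
                  contrast=0.3,        # and hue differ from the training examples
                  saturation=0.2,
                  hue=0.05),
    T.ToTensor(),
])

# Applied to a PIL image during training, e.g.: tensor = augment(pil_image)
```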

Once the model has been applied to an image, a set of rule-based criteria is applied. These determine the severity of each defect and whether the sample as a whole is usable.
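A minimal sketch of such a rule-based step is shown below; the zone coordinates, class names and decisions are placeholders invented for illustration, not the project's real criteria.

```python
# Hypothetical active zone as (x_min, y_min, x_max, y_max) in pixels.
ACTIVE_ZONE = (200, 0, 600, 120)

def in_zone(box, zone):
    """True if the defect bounding box overlaps the given zone."""
    x1, y1, x2, y2 = box
    zx1, zy1, zx2, zy2 = zone
    return x1 < zx2 and x2 > zx1 and y1 < zy2 and y2 > zy1

def severity(defect_class: str, box: tuple) -> str:
    """Assign a severity from defect class and location, as described above."""
    if defect_class == "crack":
        return "reject"          # some classes are treated as critical anywhere
    if in_zone(box, ACTIVE_ZONE):
        return "reject"          # defects touching the active zone are critical
    return "minor"               # defects away from the emitter can be neglected

def sample_usable(defects) -> bool:
    """The sample passes only if no defect was rated 'reject'."""
    return all(severity(cls, box) != "reject" for cls, box in defects)
```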
