AI and drones. This unique combination can make waves in your hangar, helping you work even more efficiently. That's not the future, but the practice Mainblades is working towards today. Hard to picture? Let us explain.
What is computer vision?
Computer Vision (CV) is an application of Artificial Intelligence (AI) that trains computers to identify, interpret and track objects in photos and videos. The technology is primarily driven by recognizing patterns that repeat across a given set of data. In Mainblades' case, that means identifying different kinds of damage on an aircraft's fuselage. To do that reliably, algorithms need to be trained by feeding them large quantities of images with labelled objects. This allows the algorithm to establish a unique profile for each object. Repeated often enough, the algorithm eventually becomes able to identify those objects in unlabelled images.
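The "unique profile" idea can be sketched in a few lines of Python. Here each image is reduced to a hand-crafted feature vector (the feature names and numbers are invented for illustration), the profile of each object class is simply the average of its labelled examples, and an unlabelled image is assigned to the nearest profile. Real detectors learn far richer features from the pixels themselves, so this is a minimal sketch of the principle, not Mainblades' implementation:

```python
from collections import defaultdict

# Toy "images": invented feature vectors (e.g. contrast, edge density,
# discoloration) with a damage label, standing in for a real labelled dataset.
labelled_images = [
    ((0.9, 0.2, 0.8), "dent"),
    ((0.8, 0.3, 0.9), "dent"),
    ((0.2, 0.9, 0.1), "scratch"),
    ((0.3, 0.8, 0.2), "scratch"),
]

def train_profiles(samples):
    """Build one 'profile' (mean feature vector) per labelled object class."""
    sums = defaultdict(lambda: [0.0, 0.0, 0.0])
    counts = defaultdict(int)
    for features, label in samples:
        counts[label] += 1
        for i, value in enumerate(features):
            sums[label][i] += value
    return {label: tuple(v / counts[label] for v in vec)
            for label, vec in sums.items()}

def classify(profiles, features):
    """Assign an unlabelled image to the class with the nearest profile."""
    def distance(label):
        return sum((a - b) ** 2 for a, b in zip(profiles[label], features))
    return min(profiles, key=distance)

profiles = train_profiles(labelled_images)
```

With more labelled examples per class, the profiles become more representative and the classification more reliable, which is exactly why large labelled datasets matter.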
How does Mainblades’ computer vision work?
Despite the recent hype about advances in AI, machine learning (ML) algorithms still rely heavily on human experience and interaction. Therefore, we at Mainblades centre our algorithms around collaboration with our airline and MRO partners. Keeping humans in the loop allows us to continuously improve the algorithms with less data and to react quickly to new situations and use cases. This inclusion of human feedback is called Active Learning, and it is the core of Mainblades' R&D philosophy for machine learning.
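One round of such a human-in-the-loop cycle can be sketched as follows. The function names, the confidence threshold of 0.8 and the call signatures are illustrative assumptions, not Mainblades' API; the point is simply that confident predictions pass through while uncertain ones are routed to a human expert, and everything becomes training data for the next model version:

```python
def active_learning_round(model_predict, unlabelled, expert_review, threshold=0.8):
    """One human-in-the-loop iteration.

    model_predict(item) -> (label, confidence); expert_review(item) -> label.
    Uncertain predictions are deferred to the expert; every reviewed item
    becomes training data for the next model version.
    """
    new_training_data = []
    for item in unlabelled:
        label, confidence = model_predict(item)
        if confidence < threshold:          # model is unsure: ask the human
            label = expert_review(item)
        new_training_data.append((item, label))
    return new_training_data
```

Because the expert only sees the uncertain cases, the model improves with far fewer manual labels than labelling everything from scratch would require.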
A 3-step procedure
End users such as airlines' engineering workforces directly contribute to improving the accuracy of the ML models. This is a multi-step procedure. In the first step, predictions are pre-generated with the help of the AI algorithm. This is possible because an initial dataset of 15,000 images showing damage on aircraft was labelled with the help of an in-house developed labelling interface. Labelling here concretely means adding information to the images about what they contain. A label consists of a list of so-called bounding boxes (highlighted orange in the picture above). Each bounding box is a rectangle that encloses an object, together with the object's name. Based on these labels, the models learn to make predictions and tell users about the content of the pictures.
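A label of this kind can be represented as plain data. The field names and pixel values below are invented for illustration; common labelling formats (and Mainblades' in-house interface) differ in detail but carry the same information, a rectangle plus an object name per box:

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    x: int       # top-left corner, in pixels
    y: int
    width: int
    height: int
    label: str   # the object's name, e.g. "dent"

# One labelled image; the filename and coordinates are made up.
annotation = {
    "image": "fuselage_0001.jpg",
    "boxes": [
        BoundingBox(x=412, y=118, width=64, height=40, label="dent"),
        BoundingBox(x=90, y=305, width=120, height=18, label="scratch"),
    ],
}

def labels_in(ann):
    """List the object names an annotation says the image contains."""
    return [box.label for box in ann["boxes"]]
```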
In the second step, the aircraft inspectors get involved. Once a full drone inspection has been performed, the operator has immediate access to the suggested bounding boxes and detected damages via the Flight iPad application. All of this happens in the background while the drone is conducting the inspection. Users then review the predictions to find potential mistakes. This review is then used to evaluate the model performance and to generate new data for improving the next model version. The review ensures that the final decision is always made by a human operator with expert knowledge.
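In essence, the review step compares each suggested label against the operator's verdict and keeps score. A minimal sketch, assuming a reviewer callback and a simple acceptance-rate metric (real evaluation would also account for missed damages and box overlap):

```python
def review_predictions(predictions, reviewer):
    """Have an operator confirm or correct each predicted label.

    predictions: list of (item, predicted_label);
    reviewer(item, label) -> the operator's final label.
    Returns the corrected data plus the fraction of predictions accepted.
    """
    corrected = []
    accepted = 0
    for item, predicted in predictions:
        final = reviewer(item, predicted)   # human makes the final call
        accepted += (final == predicted)
        corrected.append((item, final))
    acceptance_rate = accepted / len(predictions) if predictions else 0.0
    return corrected, acceptance_rate
```

The corrected list is exactly the "new data" mentioned above, and the acceptance rate is one simple way to track model performance across versions.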
The third and last step is verification by our in-house data science team. Verification is necessary to ensure the machine learning models are fed not only correct, but also valid and consistent data. The verified annotations are in turn fed back to the training algorithm to return an improved model. Over time, the machine learning keeps improving with the feedback from the reviews, which decreases the time needed to perform the review.
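Verification can be thought of as a set of consistency checks an annotation must pass before it is allowed back into training. The checks below (known label, box inside the image, non-degenerate rectangle) are illustrative examples of such rules, not Mainblades' actual validation pipeline:

```python
KNOWN_LABELS = {"dent", "scratch", "corrosion", "lightning strike"}

def is_valid_annotation(box, image_width, image_height):
    """Reject annotations that would pollute the training set:
    unknown labels, boxes outside the image, or zero-area boxes."""
    x, y, w, h, label = box
    return (
        label in KNOWN_LABELS
        and w > 0 and h > 0
        and x >= 0 and y >= 0
        and x + w <= image_width
        and y + h <= image_height
    )
```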
Current state and way forward
Since 2020, our teams have been building the company workflows and policies as well as the technical infrastructure to collect, process and validate large datasets to make the interplay of AI and drones a reality. In the meantime, our infrastructure has significantly matured thanks to the support of partner airlines and MROs. With the Mainblades Flight App, engineers can now see pictures of the entire aircraft, review damages and compare the data with existing datasets. When our AI-based algorithm has detected damages, it provides immediate, actionable insights. The inspection report indicates the type of damage, its size, as well as its location according to the station-and-stringer format from Boeing and Airbus repair manuals. Our models have been trained to recognize the most common damages specified in the OEMs' repair manuals, such as dents, scratches, corrosion and lightning strikes of up to 2 mm in size.
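A single report finding as described above might carry data along these lines. All field names and values here are invented to illustrate the structure (damage type, size, and a station/stringer location); they are not Mainblades' actual report schema:

```python
def format_finding(finding):
    """Render one inspection finding as a human-readable report line."""
    loc = finding["location"]
    return (f"{finding['damage_type']} ({finding['size_mm']} mm) "
            f"at {loc['station']} / {loc['stringer']}")

# Illustrative example only; station and stringer values are made up.
example_finding = {
    "damage_type": "lightning strike",
    "size_mm": 1.8,
    "location": {"station": "STA 540", "stringer": "S-22L"},
}
```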
AI and drones – Opportunities for airlines and MROs
By using a combination of currently available and well-understood AI techniques within a flexible architecture, and fast iteration cycles, Mainblades is already able to reach a high degree of automated damage detection for use cases such as lightning strike checks and general visual inspections (GVIs). However, to realize the full potential of machine learning, trust needs to be established in the development, deployment and actual use of AI. That's where you come into play. If you are looking to partner with us to see where automated drone inspections with ML-enabled damage detection can take you and your business, we propose the following course of action:
1. Communicate to us what you are looking for. For example: what damage types do you want to recognize? Have you already worked with AI/ML techniques before? Do you have a large amount of data but no way of extracting value out of it?
2. During a trial period, let us conduct inspection flights in your hangar. We will collect and process data and make sure we have all the examples we need to finetune our algorithms to your application and use case. We will also look at existing data and devise a strategy together of how to put that data to use.
3. With the data from the trial period, we will generate a report on the accuracy and performance of our algorithms on your data. This can serve as the basis for a Go/No-Go decision for further cooperation to finetune the interplay of AI and drones.
4. If you choose to continue with us, you can use the detection algorithms and we will continue to improve them with the new data from continued inspections as well as your feedback.
Get in touch with our team today!