Why drones and AI can't quickly find missing flood victims

AI-assisted search and rescue is not more accurate than people, but it is much faster.

Recent successes in applying computer vision and machine learning to drone imagery – for rapidly assessing building and road damage after hurricanes, or tracking shifting fire lines – suggest that artificial intelligence could be valuable in the search for missing persons after a flood.

Machine learning systems typically take less than one second to scan a high-resolution drone image, versus one to three minutes for a person. Moreover, drones often produce more images than humans can possibly review in the critical first hours of a search, when survivors may still be alive.

Unfortunately, today's AI systems are not up to the task.

We are robotics researchers who study the use of drones in disasters. Our experience searching for victims of flooding and numerous other events shows that current implementations of AI fall short.

However, this technology can still play a role in finding flood victims. The key is AI-human collaboration.

Drones have become standard equipment for first responders, but floods pose a unique challenge. Eric Smalley, CC BY-ND

AI potential

Searching for flood victims is a form of wilderness search and rescue that presents unique challenges. The goal for machine learning scientists is to rank which images contain signs of victims and to indicate where in those images search-and-rescue personnel should look. If a responder sees signs of a victim, they pass the GPS location from the image to search teams in the field to check.

The ranking is done by a classifier, an algorithm that learns to identify similar instances of objects – cats, cars, trees – from training data in order to recognize those objects in new images. In a search-and-rescue context, for example, a classifier could spot signs of human activity, such as garbage or a backpack, to pass to wilderness search-and-rescue teams, or even identify the missing person themselves.
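To illustrate the idea of a classifier in miniature (this is not the authors' system), here is a nearest-centroid sketch: it averages the feature vectors of labeled training examples, then labels a new example by whichever class average is closest. The two-dimensional features and class names are invented for illustration.

```python
import math

def train(examples):
    """Compute one average feature vector (centroid) per class label."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Return the label whose centroid is nearest to the features."""
    return min(centroids,
               key=lambda label: math.dist(features, centroids[label]))

# Invented 2-D features, e.g. (color contrast, edge straightness).
training = [
    ((0.9, 0.8), "human-made"), ((0.8, 0.9), "human-made"),
    ((0.2, 0.1), "vegetation"), ((0.1, 0.2), "vegetation"),
]
centroids = train(training)
print(classify(centroids, (0.85, 0.7)))  # -> human-made
```

Real classifiers for drone imagery are deep neural networks trained on millions of labeled photos, but the principle is the same: learn from examples, then score new images.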

A classifier is necessary because of the sheer volume of imagery that drones can produce. For example, one 20-minute flight can generate over 800 high-resolution images. With 10 flights – a modest number – there would be over 8,000 images. If a responder spends only 10 seconds looking at each image, it would take over 22 hours of effort. Even when the task is divided among a group of "squinters," people tend to miss areas of images and suffer cognitive fatigue.
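The arithmetic behind that workload estimate is simple, using the figures above:

```python
# Back-of-the-envelope estimate of the manual image-triage workload.
IMAGES_PER_FLIGHT = 800      # high-resolution images per 20-minute flight
FLIGHTS = 10                 # a modest number of flights
SECONDS_PER_IMAGE = 10       # time a responder spends on each image

total_images = IMAGES_PER_FLIGHT * FLIGHTS             # 8,000 images
total_hours = total_images * SECONDS_PER_IMAGE / 3600  # seconds -> hours

print(f"{total_images} images, about {total_hours:.1f} hours of effort")
# 8000 images, about 22.2 hours of effort
```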

The ideal solution is an AI system that scans the entire set of images, prioritizes those with the strongest signs of victims, and highlights the area of each image for a responder to inspect. It could also decide whether a location deserves special attention from search-and-rescue crews.
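Assuming each image comes back from a classifier with a confidence score, the prioritization step described above amounts to a sort, with a threshold deciding which locations get flagged for crews. The filenames, scores, and cutoff here are invented for illustration:

```python
# Hypothetical (image, confidence) pairs from a classifier pass.
scored = [("img_0412.jpg", 0.91), ("img_0007.jpg", 0.12),
          ("img_0355.jpg", 0.78), ("img_0200.jpg", 0.45)]

FLAG_THRESHOLD = 0.75  # assumed cutoff for alerting field teams

# Review queue: strongest candidates first.
queue = sorted(scored, key=lambda pair: pair[1], reverse=True)
flagged = [name for name, score in queue if score >= FLAG_THRESHOLD]
print(flagged)  # ['img_0412.jpg', 'img_0355.jpg']
```

Everything below the threshold still sits in the queue for human squinters; the ordering just ensures the strongest candidates are seen first.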

Where AI falls short

While this seems like a perfect opportunity for computer vision and machine learning, modern systems have a high error rate. If a system is programmed to overestimate the number of candidates in hopes of not missing any victims, it will likely produce too many false positives. That would mean overloading the squinters or, worse, the search-and-rescue teams, who would have to slog through debris and muck to check out each candidate.
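To see why even a modest false-positive rate is crushing at this scale, consider a hypothetical detector run over the 8,000 images from the earlier example. The 5% error rate and 30-minute field-check time are assumptions for illustration only:

```python
TOTAL_IMAGES = 8000
FALSE_POSITIVE_RATE = 0.05    # assumed: 5% of images wrongly flagged
MINUTES_PER_FIELD_CHECK = 30  # assumed time to check one site on foot

false_alarms = int(TOTAL_IMAGES * FALSE_POSITIVE_RATE)
field_hours = false_alarms * MINUTES_PER_FIELD_CHECK / 60
print(false_alarms, "false candidates,", field_hours, "hours of field checks")
# 400 false candidates, 200.0 hours of field checks
```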

Developing computer vision and machine learning systems to find flood victims is difficult for three reasons.

One is that while existing computer vision systems are certainly capable of identifying people visible in aerial imagery, the visual indicators of a flood victim are often very different from those of a lost hiker or fugitive. Flood victims are frequently obscured, camouflaged, entangled in debris or submerged in water. These visual challenges increase the likelihood that existing classifiers will miss victims.

Second, machine learning requires training data, but there are no datasets of aerial imagery in which people are tangled in debris, covered in mud and not in normal postures. This lack of data also increases the likelihood of classification errors.

Third, many of the drone images searchers capture are oblique views rather than straight down. This means the GPS location of a candidate area is not the same as the GPS location of the drone. It is possible to compute the area's GPS location if the drone's altitude and camera angle are known, but unfortunately those attributes are rarely recorded. An imprecise GPS location means teams have to spend extra time searching.
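The geometry is straightforward when altitude and camera angle are actually logged: on flat ground, a camera tilted θ degrees from straight down sees the image center at a horizontal distance of altitude × tan(θ) from the point beneath the drone. A minimal sketch, assuming flat terrain and invented numbers:

```python
import math

def ground_offset(altitude_m, tilt_deg):
    """Horizontal distance from the point directly below the drone
    to the center of the camera's view, assuming flat terrain.
    tilt_deg is measured from nadir (0 = looking straight down)."""
    return altitude_m * math.tan(math.radians(tilt_deg))

# Example: 60 m altitude, camera tilted 45 degrees from nadir.
print(round(ground_offset(60, 45), 1))  # 60.0 meters from the drone
# At a 30-degree tilt the offset shrinks.
print(round(ground_offset(60, 30), 1))  # 34.6 meters
```

Without the altitude and tilt, that offset – tens of meters in this example – becomes pure guesswork for the field teams.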

How AI can help

Fortunately, with humans and AI working together, search-and-rescue teams can successfully use existing systems to help narrow down and prioritize images for further inspection.

In the case of flooding, human remains may be tangled among vegetation and debris. A system could therefore identify piles of debris large enough to contain remains. A common search strategy is to record the GPS locations where flotsam has gathered, because victims may be part of those same deposits.
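The strategy described above can be sketched as a simple size filter over detected debris piles: keep only those whose estimated footprint is large enough to conceal remains, and hand their GPS fixes to field teams. The coordinates, footprint areas, and threshold are all invented for illustration.

```python
# Hypothetical detections: (latitude, longitude, footprint in square meters).
detections = [
    (29.702, -98.120, 0.4),  # small clump of vegetation
    (29.705, -98.118, 3.5),  # large pile of flotsam
    (29.708, -98.115, 2.1),  # medium pile
]

MIN_AREA_M2 = 1.5  # assumed: smallest pile that could conceal remains

# GPS fixes worth passing to field teams, ordered as detected.
targets = [(lat, lon) for lat, lon, area in detections if area >= MIN_AREA_M2]
print(targets)  # [(29.705, -98.118), (29.708, -98.115)]
```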

A machine learning algorithm has identified piles of debris large enough to contain bodies in this aerial view of flooding. Center for Robot-Assisted Search and Rescue and University of Maryland

An AI classifier could also find debris commonly associated with remains, such as unnatural colors or construction debris with straight lines or 90-degree corners. Responders already look for these signs as they systematically walk riverbanks and flood plains, but a classifier could help prioritize areas in the first hours and days, when survivors may still be alive, and could later confirm that teams have not missed any areas of interest.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Robin R. Murphy, Texas A&M University, and Thomas Manzini, Texas A&M University.


Robin R. Murphy receives funding from the National Science Foundation. She is affiliated with the Center for Robot-Assisted Search and Rescue.

Thomas Manzini is affiliated with the Center for Robot-Assisted Search and Rescue (CRASAR), and his work is funded by the National Science Foundation AI Institute for Societal Decision Making (AI-SDM).
