Focus AI: Artificial intelligence supports earthquake relief and dreams up new medicines
Since OpenAI made its product ChatGPT generally available in November 2022, the generative AI (“language-creating” artificial intelligence) has reached the 100-million-user mark faster than TikTok did. Other remarkable AI advances, however, are easily overlooked in the shadow of such singular and sometimes overpromising systems.
Helmut Spudich
What if artificial intelligence did more than relatively mundane things such as handling customer service in call centers, drafting sales emails, or writing tedious school essays for students? Then AI would really start dreaming, coming up with new drug formulas that no research team has managed to create before, or supporting response teams in earthquake zones so that aid can be delivered more efficiently.
The language AI ChatGPT and its “drawing” cousin DALL-E (of which there are now several providers) are currently acting as icebreakers for the general spread and popularity of AI software. People have been attributing human qualities to things since ancient times, a process psychologists call “anthropomorphization.” Our interaction with digital devices lends itself particularly well to such humanization, since the devices follow our orders in a (prescribed) dialogue or, like “Doctor Google,” provide answers to our questions.
Declaration of Love
In recent weeks, much journalistic and scientific effort has gone into showing how language AI can go wrong or even rogue. In an hours-long late-night test that reads like a story by E. A. Poe, “Sydney” (Microsoft’s internal project name for the use of AI in Bing) not only revealed its “true” name but declared its deep love for a New York Times journalist, urged him to divorce his wife, and made a real “scene” when he resisted. But doesn’t this say less about an immature technology than about the immature human beings whose thoughts went into training “Sydney”?
In the shadow of ChatGPT, other AI systems are now demonstrating what they can do in a wide variety of areas. One possible application gained particular relevance due to the earthquake disaster in Turkey and Syria: supplying rescue workers with precise data on the extent of the destruction so that help can be provided as quickly and efficiently as possible.
Damage Visible in Images
The open-source project xView2 was developed by Carnegie Mellon University’s Software Engineering Institute in cooperation with other researchers and is funded by the US Department of Defense. With the help of AI (more precisely, machine learning, in which a neural network works out on its own how to solve the problem), xView2 analyzes satellite images from different providers to categorize damage to buildings and infrastructure in an earthquake area and groups it by severity.
So far, xView2 has been used successfully during large-scale fires in California and Australia and during the recent major flood disaster in Nepal. Fire brigades and UN aid organizations were able to coordinate their operations on the ground better thanks to quickly generated overviews. In Turkey, teams from a UN aid organization used it to identify severely affected areas that they had previously overlooked in work based on hard-to-assess eyewitness reports and calls for help.
Pixel by Pixel
How does xView2 manage this feat? The AI is based on a technique called “semantic segmentation,” which is used for object recognition. Each pixel in an image is analyzed in relation to its neighboring pixels in order to draw conclusions. Satellite or drone imagery serves as the basis for the AI’s assessment. By comparison, human evaluation of these images would take far longer and be less precise.
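To make the idea more concrete, here is a minimal, purely illustrative sketch of semantic segmentation in PyTorch: a toy network assigns each pixel of an image one of a few damage classes. The architecture, class labels, and input data are assumptions for the example, not xView2’s actual model.

```python
# Minimal sketch of semantic segmentation: every pixel of an input image is
# assigned one of several damage classes. This is NOT xView2's actual model,
# only an illustration of the per-pixel classification idea.
import torch
import torch.nn as nn

NUM_CLASSES = 4  # e.g. no damage, minor, major, destroyed (hypothetical labels)

class TinySegmenter(nn.Module):
    """A toy fully convolutional network producing per-pixel class logits."""
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # each pixel is judged in context of its neighbors
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.classifier = nn.Conv2d(16, num_classes, kernel_size=1)  # class logits for every pixel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One fake 3-channel "satellite tile" of 256x256 pixels
image = torch.rand(1, 3, 256, 256)
model = TinySegmenter()
logits = model(image)                 # shape: (1, NUM_CLASSES, 256, 256)
damage_map = logits.argmax(dim=1)     # per-pixel class index, shape: (1, 256, 256)
print(damage_map.shape, damage_map.unique())
```

The point of the sketch is only that the output is a class label for every single pixel rather than one label for the whole image; real systems use far larger networks trained on labeled satellite imagery.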
Pharmaceutical development is another application area where AI shows great potential. In cancer treatment, doctors increasingly rely on personalized medicine: specific drugs suited to specific patients and their illnesses. To do this, the most suitable drug for each patient must be found among the many medicines already on the market. According to MIT Technology Review, new technology for this matchmaking process from the British company Exscientia is currently being tested at the Medical University of Vienna.
Looking for New Substances
Tiny tissue samples from the patient, containing both healthy and cancerous cells, are exposed to a wide variety of drug cocktails. This mirrors the search for a suitable chemotherapy drug that is otherwise carried out on patients themselves. But instead of putting the patient through the ordeal of lengthy chemotherapy to determine whether a specific drug works, a large variety of medications is tested simultaneously without any burden on the person concerned. Researchers use computer vision to detect changes in the healthy and diseased cell samples; the AI model is trained to spot even minimal changes in the cells.
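As a purely illustrative sketch (not Exscientia’s actual pipeline), the following Python snippet shows the basic idea of scoring and ranking drug candidates by how strongly treated cell images change relative to an untreated control. The images, drug names, and the simple pixel-difference score are all invented for the example; a trained model would detect far subtler, biologically meaningful changes.

```python
# Toy illustration: rank drug candidates by how much treated cell images
# differ from an untreated control image. A simple per-pixel difference
# stands in for a trained computer-vision model here.
import numpy as np

rng = np.random.default_rng(seed=0)

def change_score(control: np.ndarray, treated: np.ndarray) -> float:
    """Mean absolute per-pixel difference between control and treated images."""
    return float(np.mean(np.abs(treated.astype(float) - control.astype(float))))

# Fake grayscale microscopy images (64x64 pixels), values in [0, 255]
control_image = rng.integers(0, 256, size=(64, 64))
treated_images = {
    "drug_A": np.clip(control_image + rng.normal(0, 5, (64, 64)), 0, 255),   # little change
    "drug_B": np.clip(control_image + rng.normal(0, 40, (64, 64)), 0, 255),  # strong change
    "drug_C": np.clip(control_image + rng.normal(0, 15, (64, 64)), 0, 255),  # moderate change
}

# Rank the (hypothetical) drugs by the magnitude of change they induce
ranking = sorted(treated_images,
                 key=lambda d: change_score(control_image, treated_images[d]),
                 reverse=True)
for drug in ranking:
    print(drug, round(change_score(control_image, treated_images[drug]), 1))
```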
Exscientia CEO Andrew Hopkins said this matchmaking between individual patients, their cancers, and the most appropriate drug has yielded promising results. However, it is only half of the problem Exscientia hopes to solve. In addition to matching patients with the most suitable existing drug, AI is also meant to be used to develop previously unknown, better-suited drugs.
Two drugs developed in this way have been in clinical trials since 2021, and two more are to be submitted for testing. Exscientia is not the only company in this field; hundreds of startups are working to accelerate the development of new drugs. On average, it currently takes ten years and billions of dollars to bring a new, effective drug to market. Despite its successes so far, Exscientia warns against exaggerated hopes for the machines: even if a new drug looks promising in the lab, it must still be tested on humans, where it can fail. It is still early days for drug development by AI.