How Artificial Intelligence Can Help Fight the Next Pandemic

By Jonas Ryberg, Chief Globalization Officer

COVID-19 is not ending anytime soon. It may very well be just beginning. The virus continues to mutate and cause infection spikes, and as of this writing, 30 percent of the world’s population remains unvaccinated. (In Africa, the vaccination rate was 35.92 per 100 people as of 2022, according to Statista, compared to 195.8 per 100 in the European Union.) Moreover, there will be more pandemics. Fighting COVID-19 and the next pandemic will require even more global cooperation, and the application of technology such as artificial intelligence (AI) is essential to that fight. That’s one of the main takeaways from the annual meeting of the World Economic Forum known as Davos.

At Davos, the world’s leaders from multiple sectors (ranging from politics to science) share insights into solutions to the world’s problems. And speakers have had a full plate of problems to talk about, ranging from economic uncertainty to global warming. The pandemic remains very much on their minds – both its long-term risks to our health and its economic consequences. And yet, there is plenty of reason to believe that the world is turning a corner.

Pratap Khedkar, chief executive officer of ZS, discussed the role of AI in making remarkable breakthroughs for healthcare. He said that in the pharmaceutical industry, AI-driven automation of therapeutic compound screening has accelerated the ability to develop medicines. Rather than humans conducting a few hundred or a few thousand lab assays to discover potential new medicines, researchers are conducting millions by simulating chemistry with computers, identifying more compounds that could pass the regulatory process.

This kind of advancement can make the response to a pandemic and the development of vaccines for future mutations happen faster and at a broader scale.

Alice Gast, president of Imperial College London, said that one reason Moderna quickly developed a COVID-19 vaccine is that it was able to use AI to speed up development. AI algorithms and robotic automation helped Moderna move from manually producing around 30 mRNAs (a molecule fundamental to the vaccine) each month to being able to produce around 1,000 a month.

These breakthroughs are compelling, and the world is safer right now as a result. And yet, we can do better.

I believe something else needs to happen for AI to make a difference in fighting both the COVID-19 pandemic and future pandemics: the application of AI must be mindful.

Mindful AI means developing AI-based products that put the needs of people first. Mindful AI considers especially the emotional wants and needs of all people for whom an AI product is designed – not just a privileged few. When businesses practice mindful AI, they develop AI-based products that are more relevant and useful to all the people they serve. Those products include medicines and vaccines developed with the use of AI to collect, process, and synthesize vast amounts of data faster and more effectively than human beings can on their own.

But to be mindful, anything developed with AI must be human-centered and inclusive; otherwise, large segments of the world’s population will be left out when healthcare treatments are devised using AI.

It’s no secret that the development of vaccines and medical treatments can be hampered by bias – such as when vaccines and drugs are developed without taking into account their use by people of color, people who live in different climates (such as Africa), and other variables. (For example, the environment where one lives may have a major effect on drug metabolism and disposition.)

Unfortunately, AI can actually make bias even worse. Why? Because when companies use AI to do crucial tasks such as testing data at scale, the AI needs to be trained to know what to test and how to get better at the job faster than human beings can on their own. Who trains AI? People. And people possess bias.

Being mindful begins with data collection or generation, curation, and annotation to ensure that potential biases are addressed before the data is used to train AI algorithms. Ironically, people are also needed to keep bias from creeping into AI. Organizations developing vaccines and treatments need to draw on a diverse pool of contributors who act as checks and balances on one another when they collaborate on the development of healthcare solutions fueled by AI.
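To make the curation step concrete, here is a minimal sketch of one such pre-training check: scanning a dataset for demographic groups that fall below a minimum share before it is used to train a model. The field names, group values, and threshold are all hypothetical illustrations, not a description of any specific organization’s pipeline.

```python
from collections import Counter

def flag_underrepresented(records, group_key, min_share=0.10):
    """Return the groups whose share of the dataset falls below min_share,
    so curators can gather more data before training begins."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return sorted(group for group, n in counts.items() if n / total < min_share)

# Toy trial-participant records (hypothetical regions and proportions).
participants = (
    [{"region": "Europe"}] * 70
    + [{"region": "North America"}] * 25
    + [{"region": "Africa"}] * 5
)

print(flag_underrepresented(participants, "region"))  # → ['Africa']
```

A real curation pipeline would run checks like this across many attributes at once (geography, age, sex, comorbidities) and feed the flagged gaps back to data-collection teams rather than simply printing them.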

Businesses are learning how to do this in other fields. For example, at Pactera EDGE, we help businesses develop products that use AI technologies such as voice assistants by tapping into a diverse and skilled global workforce that comes from backgrounds as diverse as our global society. Our community of collaborators encompasses different ethnicities, age groups, genders, education levels, socio-economic backgrounds, and locations among many other elements of a plural world.

Checks and balances are only part of the solution to making AI mindful. For instance, mindful AI needs a single data collection and curation platform to train AI-based applications. A single platform shared by a diverse team delivers benefits such as ensuring that data sampling criteria and training data preparation reflect existing social and cultural paradigms. A single platform also helps ensure that algorithm tuning treats de-biasing and impact assessment as core requirements. Doing so requires being mindful of the data’s DNA and being purposeful about the intended outcomes.

Fortunately, Mindful AI is not just a concept; it’s also a process, with tools such as the Mindful AI Canvas helping product design teams conceive of solutions with people at the center from the start.

By putting people at the center and being inclusive, the world’s leaders will be far more effective fighting both the COVID-19 pandemic and the next one. And AI will prove its value once again to society. Contact Pactera EDGE to discuss how we can help.

Photo by Ani Kolleshi on Unsplash