How to Make AI More Inclusive from the Farms to the Fields

By Ahmer Inam, Chief AI Officer

Artificial intelligence (AI) is on the cusp of becoming democratic, inclusive, and useful to people living in under-served places in some very exciting ways. That’s a major takeaway from an insightful McKinsey interview with Microsoft Chief Technology Officer Kevin Scott. We believe an approach we call Mindful AI can help AI realize its potential to be more inclusive and human-centered, too.

Details and Implications

McKinsey’s James Manyika interviewed Kevin to discuss concepts related to Kevin’s recently published book, Reprogramming the American Dream: From Rural America to Silicon Valley – Making AI Serve Us All. The book draws on Kevin’s personal experiences to show how AI can become more inclusive by helping people who live in under-served areas ranging from rural towns to working-class communities.

For instance, as reported in The Wall Street Journal, Microsoft’s FarmBeats program uses AI to improve farming. With a machine learning tool that uses weather and hydrology data, Microsoft can help farmers understand the optimal time to put seeds in the ground. In India, 3,000 participating farmers have seen their crop yields increase by as much as 30%, and Microsoft is testing these tools in Washington state.
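The FarmBeats description above amounts to scoring candidate sowing dates against forecast conditions and recommending the best one. As a rough illustration only — the features, thresholds, and function names below are invented for this sketch, not Microsoft’s actual model — the idea could look like this:

```python
# Hypothetical sketch of FarmBeats-style planting advice: score each
# forecast day on soil moisture and temperature, then recommend the
# highest-scoring day. All thresholds here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DailyForecast:
    day: int              # day offset from today
    soil_moisture: float  # fraction of field capacity, 0.0-1.0
    temp_c: float         # mean air temperature in Celsius

def planting_score(f: DailyForecast) -> float:
    """Higher is better: favor moist-but-not-saturated soil and warm days."""
    moisture_score = 1.0 - abs(f.soil_moisture - 0.6)   # ideal ~60% capacity (assumed)
    temp_score = 1.0 - abs(f.temp_c - 22.0) / 22.0      # ideal ~22 C (assumed)
    return max(moisture_score, 0.0) + max(temp_score, 0.0)

def best_planting_day(forecasts: list[DailyForecast]) -> int:
    """Return the day offset with the highest planting score."""
    return max(forecasts, key=planting_score).day

forecasts = [
    DailyForecast(day=1, soil_moisture=0.30, temp_c=15.0),
    DailyForecast(day=2, soil_moisture=0.55, temp_c=21.0),
    DailyForecast(day=3, soil_moisture=0.90, temp_c=28.0),
]
print(best_planting_day(forecasts))  # prints 2: moist but not saturated, and warm
```

The real system combines far richer weather and hydrology data, but the shape of the recommendation — rank candidate dates by predicted growing conditions — is the same.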

The podcast interview also covered related topics such as:

  • The democratization of technology access. The rise of self-supervised AI models and open-source tools might democratize AI development. Kevin said that the proliferation of open-source software, cloud computing, and tutorial content on sites such as YouTube conceivably makes it possible for “a motivated high school student” to solve in one weekend complex problems that took Kevin six months 14 years ago, even with a graduate degree in computer science. “All the indicators point to the fact that they’re going to become further democratized over the next handful of years,” he said.
  • The need to guard against the misapplication of AI. For instance, he discussed how bias can creep into the development of AI-based applications (a topic we have blogged about). “Bias in data is something we’ve talked a lot about as a community over the past handful of years, and we now have tools that can detect when a data set has fundamental biases in it,” he said. “We’re using GANs (generative adversarial networks), which are another type of neural network, to generate synthetic data to compensate for those biases, so that we can train on unbiased data sets even though we may not have representative data that is naturally occurring in the data set to help us train the model in the way that we want.”
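Kevin points to GAN-generated synthetic data as the remedy; a full GAN is beyond a short example, but the first step he mentions — detecting when a data set has a skewed group distribution — and the simplest compensating fix, rebalancing by oversampling, can be sketched as follows. All function names, field names, and the tolerance threshold are hypothetical:

```python
# Hypothetical sketch: detect group imbalance in a data set, then
# rebalance by oversampling the under-represented groups. A GAN would
# synthesize genuinely new records; oversampling is the simpler stand-in.

import random
from collections import Counter

def group_shares(records, key):
    """Fraction of records belonging to each group under `key`."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def is_biased(records, key, tolerance=0.2):
    """Flag the data set if any group's share strays far from uniform."""
    shares = group_shares(records, key)
    ideal = 1.0 / len(shares)
    return any(abs(s - ideal) > tolerance for s in shares.values())

def rebalance(records, key, rng=random.Random(0)):
    """Oversample minority groups until every group matches the largest."""
    by_group = {}
    for r in records:
        by_group.setdefault(r[key], []).append(r)
    target = max(len(rows) for rows in by_group.values())
    balanced = []
    for rows in by_group.values():
        balanced.extend(rows)
        balanced.extend(rng.choices(rows, k=target - len(rows)))
    return balanced

skewed = [{"group": "A"}] * 8 + [{"group": "B"}] * 2
print(is_biased(skewed, "group"))                # True: 80/20 split
print(is_biased(rebalance(skewed, "group"), "group"))  # False: now 50/50
```

The detection step is the part the quote emphasizes: you cannot compensate for a bias you have not measured.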

At Pactera EDGE, we applaud Kevin’s commitment to make AI more inclusive and democratic, and we agree with the need to guard against pitfalls such as AI bias. We believe in a world where AI can and should serve under-represented populations in places ranging from rural Virginia to the plains of Tanzania. To do that, we’ve developed an approach called Mindful AI. We’re using Mindful AI to help businesses develop AI solutions that are more valuable because they are relevant and useful to the people they serve. Mindful AI consists of three components:

  • Human-centered: end-to-end, human-in-the-loop integration across the AI solution development lifecycle, from concept, discovery, data collection, model testing, and training to scaling. We have access to hundreds of thousands of crowdsourced resources worldwide, and our team collectively speaks more than 200 languages on our platform, annotating and curating data for clients’ AI training needs. The result is lovable experiences and products with measurable outcomes.
  • Responsible: ensuring that AI systems are free of bias and grounded in ethics, and being mindful of how, why, and where data is created and of its ethical impact on downstream AI systems. We make AI technology more inclusive by working with under-represented communities to build the diversity our data programs need — in age, gender, geography, ethnicity, culture, and language — helping our customers build more inclusive AI-based products and reduce bias. We also frequently reach out to local organizations that represent various under-represented communities.
  • Trustworthy: being transparent and explainable about how the AI model is trained, how it works, and why it recommends particular outcomes. Our expertise with AI localization makes it possible for our clients to make their AI applications more inclusive and personalized, respecting critical nuances in local language and user experiences that can make or break the credibility of an AI solution from one country to the next. For example, we design our applications for personalized and localized contexts, including languages, dialects, and accents in voice-based applications. That way, an app brings the same level of voice experience sophistication to every language, from English to under-represented languages.
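One concrete piece of the localization work described above is resolving a user’s locale to the closest available voice model, falling back from a specific dialect to the base language rather than failing outright. A minimal sketch, assuming a hypothetical model registry keyed by BCP-47-style tags (all model names here are invented):

```python
# Hypothetical sketch: locale-aware voice model selection with graceful
# fallback, so an app can serve dialects it knows and still degrade
# sensibly for languages it covers only at the base level.

def pick_voice_model(available: dict, locale: str) -> str:
    """Resolve a tag like 'en-IN' to the closest available voice model:
    exact dialect match, then base language, then a multilingual default."""
    if locale in available:
        return available[locale]
    base = locale.split("-")[0]
    if base in available:
        return available[base]
    return available["default"]

models = {
    "en-US": "voice_en_us_v3",
    "en-IN": "voice_en_in_v2",
    "sw":    "voice_swahili_v1",       # Swahili: base-language model only
    "default": "voice_multilingual_v1",
}
print(pick_voice_model(models, "en-IN"))  # exact dialect match
print(pick_voice_model(models, "sw-TZ"))  # falls back to base language
print(pick_voice_model(models, "yo-NG"))  # falls back to the default
```

The fallback chain is the design choice that matters: an under-represented language with only a base model still gets served, instead of being routed to an English default.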

By designing an AI application mindfully from the start, a business sets itself up to build AI that is more effective and inclusive, helping people no matter where they live. Tools such as the Mindful AI Canvas exist to help businesses begin that journey.


AI can potentially burst free from high tech to serve far-flung areas of the world. The democratization of AI learning tools is an important step, and we believe Mindful AI can provide essential help. Contact us to learn more.

For Further Insight

“Why Fighting AI Bias Requires Mindful AI”

“How AI Localization Differs from Traditional Localization”

“AI’s Next Personalization Frontier: Data Readiness”

“Introducing the Mindful AI Canvas”