A Look At AI Benefits And Risks In Global Development Efforts
May 5, 2025
This article was originally published in Law360. The opinions expressed in this article are the authors' alone and are not those of Winston & Strawn or its clients.
Artificial intelligence is transforming nearly every aspect of society — from healthcare and education to justice and governance.
As the role of AI grows, it has the potential to enhance inclusivity and bridge gaps in access to essential services, particularly in low-income countries and underserved communities. It can improve medical diagnostics, deliver low-cost, high-quality education and simplify the complexities of the legal system.
But AI also poses risks. Without equitable access and ethical guardrails, it could reinforce biases that marginalize vulnerable populations and deepen the digital divide.
Ethical Benefits of AI and Its Role in Achieving Sustainable Development Goals
Among the many societal benefits that AI can provide, it can play a transformative role in achieving the United Nations' Sustainable Development Goals, which the organization adopted in 2015 as part of its 2030 Agenda for Sustainable Development.
These goals serve as a blueprint for creating a more equitable, prosperous and sustainable world. When designed and implemented responsibly, AI can mitigate biases affecting marginalized communities and improve access to essential services by making these services more efficient and affordable.
Healthcare
In healthcare, AI can enhance access to equitable medical diagnostics, particularly for historically underdiagnosed groups. For example, MIT developed an AI-powered risk model, called Mirai, capable of predicting a patient's likelihood of developing breast cancer years before it appears.
Importantly, Mirai has performed consistently across countries and across both white and Black populations.[1] This is a significant step toward inclusivity in medicine, especially since Black women have a 40% higher breast cancer death rate than white women despite having a lower incidence rate of breast cancer. This disparity is due, in part, to less access to high-quality and timely cancer prevention and early detection resources.
Other AI-powered advances in healthcare include: (1) AI-powered chatbots and virtual assistants, which can help individuals assess their own symptoms and recommend next steps without the need for in-person doctor visits; (2) AI-powered remote diagnostic tools, which can analyze medical images and detect diseases without requiring patients to visit specialists or more advanced clinics; and (3) AI-powered drones, which can deliver medications and vaccines to hard-to-reach locations.
For example, during the COVID-19 pandemic, the government of Ghana used autonomous drones to deliver coronavirus vaccines to remote areas.[2]
Education
In education, AI-driven platforms and tools can provide high-quality, low-cost educational resources to millions of individuals and improve accessibility for marginalized groups. For example, AI-powered adaptive learning platforms and tutoring systems tailor lessons and feedback to users based on their performance, preferences and needs.
These platforms also provide free or low-cost remote learning tools and broad access to knowledge. For example, Khan Academy is a nonprofit online educational organization with a mission to provide a free, world-class education to anyone, anywhere. Through AI, Khan Academy personalizes learning experiences by providing students with a virtual tutor that can answer questions, offer feedback and engage in discussions.[3]
AI-powered tools can also support students with special needs and bridge language barriers. A few practical examples of these uses are text-reading and image description tools, language translation tools, subtitling and transcription tools, and customized learning tools for neurodivergent students.
Justice
In the justice system, AI can play an increasingly important role in assisting users — citizens, lawyers and judges alike — with their specific legal needs. For example, AI can improve access to justice for underserved communities by simplifying and streamlining cumbersome and costly legal processes.
AI-powered legal chatbots and AI-generated plain-language legal explanation tools make laws and policies more understandable, helping individuals understand their rights and navigate the legal system without an attorney.
Similarly, AI-powered automated document generation systems help users draft simple legal documents and fill out legal forms. These tools help bridge the access-to-justice gap by empowering individuals to take control of their legal rights and needs for free and from anywhere.
AI-powered tools can also help legal professionals streamline legal procedures and make the legal system more efficient. By leveraging machine learning and natural language processing, AI can analyze vast amounts of legal data more efficiently than humans, helping lawyers, judges and policymakers make informed decisions in a fraction of the time it would otherwise take them.
For example, some AI applications focus on automating routine legal work, allowing lawyers to allocate more time to complex cases. Legal research automation tools can rapidly sift through legal treatises, case law and statutes, dramatically reducing the time spent on legal research. AI-driven contract analysis software can detect risks, inconsistencies and compliance issues, ensuring greater accuracy in drafting and reviewing agreements.
Predictive analytic tools help lawyers assess the likelihood of success in litigation, allowing them to refine their legal strategies based on data-driven insight. Additionally, AI can assist with e-discovery in litigation, quickly identifying relevant documents and evidence from vast datasets, which could take lawyers weeks or months to review.
AI-driven systems can also improve the efficiency and fairness of court proceedings by assisting judges and litigants. For example, AI can help with case management by categorizing, prioritizing and tracking cases, allowing courts to handle large caseloads more efficiently.
By automating administrative tasks such as scheduling hearings, summarizing case files and organizing evidence, AI can free up judicial time for more complex decision-making. AI can also assist in dispute resolution by predicting case outcomes based on historical rulings, helping both litigants and courts reach faster resolutions.
Several countries have already integrated AI into their judicial systems to aid judges with decision-making. In the U.S., AI systems have been used to help judges determine whether to set bail and in what amount, and to inform sentencing and parole decisions, thus curbing the personal biases and preferences of individual judges.[4] In China, AI-powered smart courts and AI judges have been handling minor legal disputes online for years.[5]
Ethical Risks of AI
However, as we embrace AI's potential, we must also acknowledge its risks — because if not properly managed, AI could hinder global development efforts and widen already existing gaps within society.
One of the most concerning risks related to the use of AI is its potential to reinforce and even amplify biases. This risk stems from three interrelated factors crucial to the development and deployment of AI: data, algorithmic design and human oversight.
AI systems learn from massive datasets. If those datasets are flawed or contain biased data, the systems are likely to replicate and exacerbate those biases in their decision-making.
Even when the underlying data is not flawed per se, the algorithmic design of an AI system can produce biased outcomes. For example, if an algorithm is designed to prioritize certain variables in its decision-making, e.g., name, zip code or education level, it might unintentionally produce outcomes that reinforce systemic biases and stereotypes, as the sketch below illustrates.
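To make this mechanism concrete, the following minimal Python sketch, built entirely on synthetic data with hypothetical variable names rather than any real hiring system, shows how a model that is never shown a protected attribute can still reproduce a historical disparity through a correlated proxy such as zip code:

```python
# Minimal, hypothetical sketch: synthetic data only, not any real system.
# It shows how a "neutral" proxy variable (zip code) lets a model
# reconstruct a protected attribute it was never given.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute, never shown to the model; zip code correlates with it.
group = rng.integers(0, 2, n)
zip_code = np.where(rng.random(n) < 0.8, group, 1 - group)

# Both groups are equally skilled.
skill = rng.normal(0.0, 1.0, n)

# Historical hiring labels encode a bias in favor of group 0.
hired = (skill + 0.8 * (group == 0) + rng.normal(0.0, 1.0, n)) > 0.8

# The model is trained only on seemingly neutral features.
X = np.column_stack([skill, zip_code])
pred = LogisticRegression().fit(X, hired).predict(X)

for g in (0, 1):
    print(f"group {g} selection rate: {pred[group == g].mean():.2f}")
```

Even though the two groups have identical skill distributions, the model's selection rates diverge: zip code acts as a stand-in for group membership and lets the model replicate the biased historical labels.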
The third factor that can lead to AI bias concerns human oversight at a critical point in the AI decision-making process: the feedback loop. Feedback loops are the processes through which AI systems learn. The system receives an input from its environment and processes it using algorithms to produce an output.
The system then receives feedback on its performance, which it uses to adjust its algorithms and improve its output. This loop continues indefinitely and allows AI systems to learn and adapt over time based on their interactions with their environment and users.
With little or no human oversight of this process, AI systems can become stuck in an endless cycle of learning and internalizing biased information and using it to perpetuate discriminatory outputs, as the simulation below shows.
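A toy simulation can make this dynamic concrete. The sketch below is entirely synthetic, with hypothetical numbers standing in for any real deployment: a system concentrates inspections wherever its own records show the most past findings, but because findings can only be recorded where the system looks, a small initial skew in the records compounds over time.

```python
# Minimal, hypothetical feedback-loop simulation with synthetic numbers.
# Two areas have identical true incident rates, but the historical
# records start slightly skewed toward area 0.
import numpy as np

rng = np.random.default_rng(1)
true_rate = np.array([0.10, 0.10])   # identical true rates in both areas
recorded = np.array([12.0, 8.0])     # biased starting records

for step in range(10):
    # Concentrate scrutiny on the area the records say is riskier.
    riskier = int(np.argmax(recorded))
    inspections = np.where(np.arange(2) == riskier, 80, 20)
    # Findings can occur only where the system actually looks.
    findings = rng.binomial(inspections, true_rate)
    recorded = recorded + findings
    share = recorded / recorded.sum()
    print(f"step {step}: recorded share = {share.round(2)}")
```

Because the over-inspected area generates roughly four times as many recorded findings each round, the system's own outputs confirm and deepen its initial bias; only a human reviewer comparing recorded findings against inspection effort would catch the distortion.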
These three potentially flawed factors are particularly concerning when AI is employed in critical areas such as hiring, lending, law enforcement and healthcare.
Examples of AI outcomes displaying bias have been surfacing around the world. In the U.S., a recent study by the University of Washington Information School revealed that popular AI-based resume-screening tools often favor white and male candidates, disproportionately selecting resumes bearing white-associated male names.[6]
In the Netherlands, the Dutch government implemented an AI-driven system, SyRI, to detect and predict welfare fraud. The system was trained on biased data that focused its attention on low-income migrant communities. As a result, individuals, especially young single mothers with limited knowledge of Dutch, were flagged as high-risk and had their benefits wrongfully reduced.
In 2020, in NJCM and FNV v. The State of the Netherlands, the District Court of The Hague found the system unlawful, holding that it was discriminatory and violated privacy rights.[7]
In the U.K., an AI-powered visa streaming tool deployed by the Home Office was found to exhibit racial and national bias by disproportionately flagging applications from certain countries for additional scrutiny or rejection.[8] In China, AI-powered facial recognition technology has been criticized for disproportionately misidentifying ethnic minorities, particularly Uyghurs and Tibetans.[9]
Another concern related to AI is its potential to widen the digital divide between developed and developing nations by concentrating technological and economic advantages in wealthy countries.
In fact, while AI can enhance healthcare, education and justice, these benefits are not evenly distributed. Wealthier nations and corporations dominate AI development, which requires powerful computing systems, high-speed internet and vast cloud services that are often scarce or unavailable in developing countries.
This phenomenon risks accelerating wealth concentration and economic inequality, which could leave already marginalized communities further behind.
The concentration of AI efforts in a small number of countries also results in data inequality and exclusion. Most AI systems are Western-centric, as exemplified by the fact that almost all of them are trained and function in English and a few other European languages.[10]
The Western-centric nature of AI also leads to both algorithmic bias, such as hiring algorithms favoring specific demographics, and data-driven bias, such as facial recognition technology performing better on white individuals because it is trained primarily on images from Western regions.
AI also threatens employment and economic stability, especially in economies that rely heavily on labor-intensive industries. Many studies have found that AI is replacing low-skill jobs while creating new high-skill jobs. People from wealthier countries and communities can afford the training needed to succeed in AI-driven industries.
As a result, skilled workers from developing countries, who often come from wealthier communities, are moving to developed countries to pursue better job opportunities. At the same time, low-skilled workers are being pushed off the career ladder and left with fewer and fewer opportunities.
Conclusion
AI has the potential to drive remarkable progress worldwide, but it also presents significant risks and challenges. To ensure a future where AI benefits everyone, it must be developed and deployed responsibly, with ethical guardrails that uphold human rights, fairness and inclusivity.
The conversation on AI's role in our society must continue, with ongoing dialogue, research and policy action to ensure this technology can serve humanity, rather than harm it.
Winston & Strawn law clerk Sofia Vescovo contributed to this article.
[1] Sandy McDowell, Breast Cancer Death Rates Are Highest for Black Women – Again, American Cancer Society (Oct. 3, 2022), https://www.cancer.org/research/acs-research-news/breast-cancer-death-rates-are-highest-for-black-women-again.html.
[2] Ghana Used Instant Delivery to Build a Better Supply Chain. How Will It Change Public Health?, Zipline, https://www.flyzipline.com/newsroom/stories/articles/ghana-instant-delivery-supply-chain (last visited Feb. 21, 2025).
[3] Gabriela Duque, What is Khan Academy, The App Set to Bring World-Class AI Learning to All?, eLearn Magazine (June 11, 2024), https://www.elearnmagazine.com/marketplace/khan-academy-2/.
[4] Allyson Brunette, Humanizing Justice: The Transformational Impact of AI in Courts, from Filing to Sentencing, Thomson Reuters (Oct. 25, 2024), https://www.thomsonreuters.com/en-us/posts/ai-in-courts/humanizing-justice/.
[5] Tara Vasdani, Robot Justice: China's Use of Internet Courts, LexisNexis, https://www.lexisnexis.ca/en-ca/ihc/2020-02/robot-justice-chinas-use-of-internet-courts.page (last visited Feb. 24, 2025).
[6] Amanda Blair & Karen Odash, New Study Shows AI Resume Screeners Prefer White Male Candidates: Your 5-Step Blueprint to Prevent AI Discrimination in Hiring, Fisher Phillips (Nov. 11, 2024), https://www.fisherphillips.com/en/news-insights/ai-resume-screeners.html.
[7] Adamantia Rachovitsa & Niclas Johann, The Human Rights Implications of the Use of AI in the Digital Welfare State: Lessons Learned from the Dutch SyRI Case, 22 Human Rights L. Rev. 1 (2022), https://doi.org/10.1093/hrlr/ngac010.
[8] Home Office Drops 'Racist' Algorithm from Visa Decisions, BBC (Aug. 4, 2020), https://www.bbc.com/news/technology-53650758.
[9] Paul Mozur, One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority, N.Y. Times (Apr. 14, 2019), https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html.
[10] Assad Abbas, Western Bias in AI: Why Global Perspectives Are Missing, Unite.AI (Jan. 23, 2025), https://www.unite.ai/western-bias-in-ai-why-global-perspectives-are-missing/.