Article published in: Scientific journal «Студенческий» No. 4(342)
Journal section: Sociology
THE AI PARADOX: CAN ARTIFICIAL INTELLIGENCE TRULY ADVANCE SDGS WITHOUT WIDENING INEQUALITY?
ABSTRACT
Artificial intelligence is heralded as a key tool for achieving the UN's global Sustainable Development Goals (SDGs), capable of supplying the momentum the 2030 Agenda currently lacks. Yet behind this universal potential lies a profound paradox: the same technology can act as both a powerful ally and a systemic threat.
Keywords: Artificial Intelligence (AI), Sustainable Development Goals (SDGs), digital inequality, achievements.
The 17 Sustainable Development Goals (SDGs) represent a global compact humanity has made with itself and with the future. Adopted by all United Nations member states, they chart the course until 2030 towards a world free of poverty and inequality, towards a prosperous and healthy planet [1]. These goals are for everyone: for powerful economies and developing communities alike. Achieving them requires the mobilization of all forces—from the UN and governments to businesses, cities, and every one of us. No one has the right to remain on the sidelines. Yet, time is passing, and progress is falling behind schedule. The world urgently needs a breakthrough tool capable of accelerating progress across all 17 fronts simultaneously. Many have proclaimed artificial intelligence to be such a tool. But can it become a true ally in this humanistic mission, or does its implementation harbor the risk of deepening the very problems we seek to solve, creating a new, technological form of inequality?
In the field of healthcare, AI acts as a powerful amplifier of human capabilities, addressing the shortage of specialists and improving diagnostic accuracy. A vivid illustration is the company Zebra Medical Vision. Its deep learning technologies analyze medical images (X-rays, CT scans), detecting pathologies—from lung cancer to osteoporosis—with high speed and precision. Unlike early computer-aided detection systems of the 1990s, which underperformed compared to radiologists in interpreting mammograms, Zebra's modern neural networks can identify clinically significant details in scans that are imperceptible to the human eye [2]. This not only alleviates the burden on doctors amidst growing diagnostic demand but also enhances early disease detection in regions with a shortage of qualified radiologists, directly contributing to SDG 3 (Good Health and Well-being).
Singapore demonstrates how strategic state governance can direct AI toward systematically solving social challenges. Under the "AI Singapore" program and the National AI Strategy, the country focuses on five key sectors directly related to the SDGs: healthcare (predicting chronic diseases), education (personalized learning), transport (smart logistics), municipal services, and border control. Singapore's Model AI Governance Framework, endorsed by the World Economic Forum, is built on principles of transparency, fairness, and human-centricity [3]. This is an example of how creating a trustworthy legal environment and fostering public-private partnerships enables the use of AI as a tool to improve quality of life and achieve specific sustainable development goals.
While AI in medicine, as seen with Zebra Medical Vision, aims to expand access to diagnostics, its application in other spheres can have the opposite effect. Amazon, in attempting to automate recruitment, trained an algorithm on resumes submitted to the company over a 10-year period. Since men historically dominated technical roles at the company, the AI interpreted this not as a statistical fact but as a preferred model. It began penalizing resumes containing the word "women's" and downgrading graduates of women's colleges, systematically favoring male candidates. After three years, the discriminatory algorithm had to be scrapped [4]. This case is clear proof that irresponsible AI implementation does not eliminate human biases but instead turns them into a scalable and opaque mechanism of exclusion, deepening gender inequality in the labor market.
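The mechanism behind this failure can be illustrated with a deliberately simplified sketch. The data, the word scores, and the resumes below are entirely hypothetical and are not Amazon's actual system or data; the point is only to show how a scorer trained on historically skewed hiring outcomes learns to penalize a word that acts as a proxy for gender.

```python
# Toy illustration of bias learned from historical data (hypothetical data,
# NOT Amazon's actual recruiting system).
import math
from collections import Counter

# Hypothetical historical outcomes: (resume tokens, 1 = hired, 0 = rejected).
# Because past hires skew male, the word "women's" co-occurs with rejections.
history = [
    (["chess", "club", "captain"], 1),
    (["software", "engineer"], 1),
    (["women's", "chess", "club", "captain"], 0),
    (["women's", "college", "software"], 0),
    (["software", "lead"], 1),
]

hired, rejected = Counter(), Counter()
for tokens, label in history:
    (hired if label else rejected).update(set(tokens))

def word_score(word):
    # Laplace-smoothed log-odds of being hired given the word appears.
    return math.log((hired[word] + 1) / (rejected[word] + 1))

def score_resume(tokens):
    return sum(word_score(w) for w in set(tokens))

# Two identical resumes, differing only by the word "women's":
plain = ["chess", "club", "captain"]
flagged = ["women's", "chess", "club", "captain"]
print(score_resume(plain), score_resume(flagged))
```

The model never sees a "gender" field, yet the otherwise identical resume scores lower once it contains "women's": the bias in the historical labels is simply re-encoded as a word weight.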
An even more alarming case is the use of the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) system in US courts to assess the likelihood of a defendant's recidivism. Research revealed that the algorithm systematically overestimated risks for black defendants and underestimated them for white defendants. Meanwhile, the developer company refused to disclose the source code, citing commercial secrecy, turning the "judicial prediction" into a black box [5]. This case demonstrates how AI, integrated into a punitive system, can legitimize and render invisible systemic racism, directly contradicting SDG 16 (Peace, Justice, and Strong Institutions). The danger lies not in AI replacing judges, but in the fact that bias, once placed on an algorithmic conveyor belt, acquires an aura of objectivity.
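The disparity at issue is a measurable one: among defendants who did not go on to reoffend, what share was nonetheless flagged high-risk? The sketch below uses illustrative counts chosen to approximate the roughly 45% versus 23% false-positive rates reported in ProPublica's analysis; the numbers are assumptions for demonstration, not the study's raw data.

```python
# Illustrative false-positive-rate comparison (hypothetical counts chosen
# to approximate the disparity reported for COMPAS, not the actual dataset).

# Among defendants who did NOT reoffend: how many were flagged high-risk?
no_reoffense = {
    "black": {"flagged_high_risk": 45, "total": 100},
    "white": {"flagged_high_risk": 23, "total": 100},
}

def false_positive_rate(group):
    g = no_reoffense[group]
    return g["flagged_high_risk"] / g["total"]

for group in no_reoffense:
    print(f"{group}: FPR = {false_positive_rate(group):.0%}")
```

A black box can still be audited from the outside in exactly this way: even without the source code, comparing error rates across groups exposes the unequal burden the system places on them.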
Artificial intelligence is heralded as a key tool for achieving the UN's global Sustainable Development Goals (SDGs). It already assists in diagnosing diseases, personalizing education, and combating climate change. But behind this potential lies a profound paradox: the same technology can both save lives and perpetuate injustice. While algorithms like those from Zebra Medical Vision improve access to medicine, in other domains AI exhibits alarming bias. Amazon's recruiting algorithm discriminated against women, and the US judicial COMPAS system systematically overestimated risks for black defendants. The problem is that AI does not create new inequality—it reveals, encodes, and scales the prejudices already embedded in historical data. This creates a new form of the digital divide—inequality before the algorithm, where discrimination becomes mass-produced, invisible, and pseudo-objective.
Overcoming this paradox requires not technological breakthroughs, but wise governance. The experience of Singapore, whose Model AI Governance Framework is founded on transparency and a human-centric approach, shows the way: the state must set ethical rules, engage the private sector and science in solving social challenges, and create a trustworthy environment for innovation. AI is a mirror reflecting society's values. Without an ethical framework, it will amplify all existing inequalities. But if principles of fairness and inclusivity are consciously embedded into its development, it could become humanity's most powerful ally in building a better future for all. The choice is ours.
References:
- Sustainable Development Goals [Electronic resource] // United Nations Office at Geneva. – URL: https://www.ungeneva.org/en/about/topics/sustainable-development-goals (date of access: 31.01.2026).
- Zebra Medical Vision: transforming patient care through AI [Electronic resource] // Harvard University. Digital, Data, and Design (D^3) Institute. – URL: https://d3.harvard.edu/platform-digit/submission/zebra-medical-vision-transforming-patient-care-through-ai/ (date of access: 31.01.2026).
- Goryan E.V. National approaches to the application of artificial intelligence: the experience of Singapore // Legal Studies. – 2020. – No. 8. – P. 62–70.
- AI discriminated against women [Electronic resource] // Inc. Russia. – 2018. – URL: https://incrussia.ru/news/ii-diskriminiroval-zhenshhin/ (date of access: 31.01.2026).
- The US judicial system used a racist program for years [Electronic resource] // Mir 24. – 2020. – URL: https://mir24.tv/news/16261623/sudebnaya-sistema-ssha-godami-ispolzovala-programmu-rasista (date of access: 31.01.2026).