
Artificial intelligence is among the defining technologies of the twenty-first century. Its ability to learn from vast amounts of data and offer predictive, solution-oriented insights is reshaping modern society and enhancing functionality across nearly every sector (Russell & Norvig, 2021; Marr, 2020). But as AI becomes more embedded in daily life, a critical question arises: can we trust its suggestions to be unbiased and fair when the data it learns from is riddled with human prejudice? Bias and prejudice not only lead to discrimination but also deepen social inequality, a menace to the development of any nation. AI's promises of efficiency and objectivity are increasingly challenged by concerns over algorithmic discrimination, lack of transparency, and violations of fundamental human rights (Eubanks, 2018). In Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, Virginia Eubanks shows how AI and data-driven systems disproportionately harm the poor in welfare policy, police identification of citizens, and housing policy. As Kate Crawford (2021) argues in Atlas of AI, artificial intelligence is not an abstract or neutral technology; it is deeply entangled with systems of labor, ecology, and power. Far from correcting human errors, AI often entrenches and automates them, disproportionately impacting the rights of those already marginalized. When AI learns from biased historical data, it can reproduce and even magnify those same injustices.
AI systems, though often perceived as neutral, can inherit and amplify human biases in three major ways. Algorithmic bias, which stems from design choices, occurs when developers unconsciously embed their own assumptions into how the system makes decisions. For example, resume screening algorithms have been found to favor male candidates over equally qualified women, reflecting historical hiring data skewed by gender bias. Data bias emerges when the datasets used to train AI do not adequately represent marginalized communities. A notable case is facial recognition technology, which has shown significantly higher error rates for women and people of color because these groups were underrepresented in the training data (Buolamwini & Gebru, 2018). Lastly, deployment bias refers to how and where AI tools are implemented. Predictive policing algorithms, when deployed in low-income or minority neighborhoods already under heavy surveillance, can unfairly target these communities and reinforce cycles of criminalization (O’Neil, 2016). These biases are not just technical flaws—they pose real threats to social justice and human rights.
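The error-rate disparities described above can be made concrete with a simple audit. The sketch below, using invented group labels and synthetic predictions (no real system or dataset is implied), shows how comparing misclassification rates across demographic groups can reveal the kind of gap the Gender Shades study documented in commercial facial-analysis systems:

```python
# Illustrative audit of per-group error rates on synthetic data.
# Group names, labels, and predictions are hypothetical, invented
# purely to demonstrate the comparison.

def error_rate_by_group(records):
    """Return the misclassification rate for each demographic group.

    records: list of (group, true_label, predicted_label) tuples.
    """
    totals, errors = {}, {}
    for group, truth, pred in records:
        totals[group] = totals.get(group, 0) + 1
        if truth != pred:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Synthetic sample in which group "B" is misclassified far more often
# than group "A" -- a disparity invisible to an overall accuracy score.
sample = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 1, 1),
]
rates = error_rate_by_group(sample)
print(rates)  # group B's error rate is several times group A's
```

An overall accuracy figure for this sample would look respectable; only the per-group breakdown exposes the unequal burden, which is why fairness audits disaggregate results rather than report a single aggregate metric.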
Human Rights at Risk
Artificial intelligence systems now extensively shape decision making in the contemporary milieu, and many fundamental human rights are being compromised, often unnoticed. The right to equality and non-discrimination (Article 7, UDHR) is violated when algorithms produce outcomes that disproportionately disadvantage certain groups. In 2018, for instance, Amazon scrapped an AI-powered hiring tool after finding that it downgraded technical-role applicants whose resumes included the word "women's" or referenced women's colleges, a pattern learned from historical hiring data (Dastin, 2018); such systems entrench exclusion and reinforce inequality. The right to privacy (Article 12, UDHR) is likewise undermined when AI surveillance technologies monitor personal data without individual consent. The right to a fair trial is also at risk as advanced AI enters judicial systems: risk assessment algorithms that recommend bail or sentencing often rest on flawed and opaque statistical patterns whose logic defendants cannot challenge, which can lead to unfair outcomes, especially for marginalized defendants.
India and Contemporary World
Most Western jurisdictions are moving to restrain AI in ways that respect human rights. The European Commission tabled the EU Artificial Intelligence Act in April 2021 to regulate AI systems according to the risk they pose to fundamental rights, safety, and democratic values; it is the first comprehensive legal framework on AI from a major global bloc (European Commission, 2021). UNESCO's Recommendation on the Ethics of Artificial Intelligence (2021) calls for fairness, accountability, and inclusivity in AI development. The United States, for its part, released its Blueprint for an AI Bill of Rights in 2022, focused on privacy, algorithmic fairness, and human control over automated systems. India, however, lags behind in designing a comprehensive legal framework for governing artificial intelligence, even as the country rapidly adopts AI in welfare delivery, policing, and education, raising concerns about digital exclusion, opaque decision making, and surveillance creep (NITI Aayog, 2018). Facial recognition systems, for instance, are already in use by Indian law enforcement agencies, often without clear legal authorization or oversight mechanisms. The absence of a robust data protection regime further exacerbates risks to individual privacy and consent; even with the Digital Personal Data Protection Act, 2023 in force, significant loopholes remain in protecting users' rights and restricting state surveillance. As India positions itself as a global leader in AI innovation, it must also lead in enacting safeguards that uphold democratic values and human rights.
Way Forward
Artificial intelligence must be developed and deployed within a human rights-based framework that prioritises equality, dignity, and accountability. To begin with, mandatory AI impact assessments should be institutionalised, evaluating potential harms before deployment, especially in sensitive sectors such as policing, healthcare, and welfare; these assessments must be transparent and reviewed by independent oversight bodies to ensure public trust. Equally vital is addressing bias at the source: AI systems must be trained on diverse and representative datasets that reflect the lived realities of women, minorities, and historically marginalised communities. Legal frameworks must also evolve. India should move towards a robust regulatory regime that ensures data protection, algorithmic transparency, and recourse for affected individuals, including the establishment of an independent AI ethics commission with enforcement powers. Finally, there must be more democratic involvement in AI governance: people ought to have a say in the creation and application of technologies that affect their daily lives. The media, academia, and civil society all play a vital role in promoting public dialogue and holding those in positions of authority accountable.
Mohd Kamil is a researcher in human rights and public policy, affiliated with the Global Counter Terrorism Council as a research coordinator. He writes on technology, ethics, and social justice.
References
- Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 1–15. https://proceedings.mlr.press/v81/buolamwini18a.html
- Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
- Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
- European Commission. (2021). Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206
- Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.
- Marr, B. (2020). Artificial Intelligence in Practice: How 50 Companies are Already Using AI to Revolutionize Their Business. Wiley.
- NITI Aayog. (2018). National Strategy for Artificial Intelligence. https://niti.gov.in/national-strategy-artificial-intelligence
- O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing Group.
- Russell, S., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.
- UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000380455
- White House Office of Science and Technology Policy. (2022). Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. https://www.whitehouse.gov/ostp/ai-bill-of-rights/