How intelligent is artificial intelligence: Exploring the exclusion of minorities in artificial intelligence

Artificial Intelligence (AI) is increasingly being touted as our future. AI technology has not only invaded our private lives in the form of smart technologies, virtual assistants, and robotic/humanoid companions, but is also finding ever-increasing representation in the public sphere and in government policies. From health technology to agriculture to the norms being developed on women's safety (such as AI-enabled cameras to track expressions in Uttar Pradesh), the reliance on AI is pervasive.

While the technology certainly has its advantages, caution needs to be exercised, as over-reliance on AI often comes at the price of excluding certain groups from the mainstream. Centuries-old biases against groups such as women and people of colour find their way into AI technology and are amplified manifold. The bias is visible primarily in two forms. First, when AI is the object, i.e., when AI technology is used to perform certain functions, the resulting outcome displays an inherent predisposition against a select group. Second, when AI acts as a subject, i.e., in how AI or AGI is represented and imagined, particularly in its physical manifestations through robotics or its representation in popular culture, the depiction is often highly gendered, racialized, or casteist.

Artificial intelligence as an object: Incidents of bias in AI output

Probable job losses due to AI have become a global concern, and most of the jobs at risk are those held by minority groups and women. A study by the World Economic Forum states that women account for approximately 57% of the jobs that artificial intelligence is most likely to replace, and that these women would be left with far fewer transition and reskilling opportunities than their male counterparts.

With the omnipresence of AI, various biases also find their way into AI systems and create a lopsided situation for certain sections of the population. These biases are, on most occasions, not actively fed into the AI tool; rather, they are so deep-rooted in history that the AI, through the data it collates, treats them as the normative framework within which it has to operate.

An oft-quoted example of AI bias is that of Amazon, which developed an AI tool (now discontinued) to help speed up the process of hiring employees. The tool gathered the employee data available at Amazon to recognize the kind of people the company usually hired and found that most people working at Amazon were men. This resulted in the software automatically rejecting the resumes of female applicants, particularly those who had studied at all-women's colleges. While the software's creators had no intent to reject female candidates, the existing data set and the age-old gender divide already present in the workforce manifested themselves in an unfortunate outcome.
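The mechanism is easier to see in miniature. The sketch below is purely hypothetical, not Amazon's actual system: a model is trained on synthetic "historical hiring" data in which past decisions downgraded candidates from women's colleges, and the model learns to penalize that proxy feature even though gender is never an input. The feature names and numbers are invented for illustration.

```python
# Hypothetical illustration of how a model trained on historically biased
# hiring decisions reproduces that bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features: years of experience, a skills score, and a proxy flag such as
# "attended an all-women's college" (1 = yes). Gender itself is never recorded.
experience = rng.normal(5, 2, n)
skills = rng.normal(0, 1, n)
womens_college = rng.binomial(1, 0.3, n)

# Historical labels: past recruiters hired largely on merit but, reflecting an
# existing gender divide, systematically downgraded women's-college candidates.
merit = 0.8 * skills + 0.3 * experience
historical_hire = (merit - 1.5 * womens_college + rng.normal(0, 0.5, n) > 1.5).astype(int)

X = np.column_stack([experience, skills, womens_college])
model = LogisticRegression(max_iter=1000).fit(X, historical_hire)

# The model learns a strongly negative weight on the proxy feature, i.e. it
# penalizes "women's college" exactly as the biased historical data did.
print(dict(zip(["experience", "skills", "womens_college"], model.coef_[0].round(2))))
```

Running the sketch prints a clearly negative weight on the proxy feature, which is the same pattern the Amazon tool is reported to have exhibited.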

Apple’s Siri, on being called a sl*t or a bi*ch, would respond with ‘I’d blush if I could’. While Siri has since been programmed to refuse any response to such comments, the reply went on to become the title of a UNESCO report in which the UN body pointed out that such subservient and deflective responses from AI assistants, designed to please the user, were creating an atmosphere in which the idea of women as ‘servile, obedient and unfailingly polite’ was reinforced.

Microsoft’s female chatbot Tay was tricked by users into posting racist hate speech on Twitter within twenty-four hours of its launch and had to be recalled. Research shows that AI software used for facial recognition is error-prone when recognizing people of colour, particularly women of colour, at times not detecting their faces at all; such systems are best suited to recognizing the faces of white men.

Gender biases are also witnessed when AI is posed with word associations. Word embeddings are learned from the already available text data that the AI gathers. Our verbal and written expressions about certain jobs seep into the embedding models utilized by AI, in turn making them consider some jobs more masculine and others more feminine. While words such as ‘profession’, ‘mathematics’ and ‘science’ are correlated with men, women get linked to ‘family’ and ‘humanities’.
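Such associations can be probed directly. As a rough sketch rather than a definitive audit, the snippet below loads a small pretrained GloVe embedding through gensim's downloader (an assumed setup that requires an internet connection and the gensim package) and compares each occupation word's cosine similarity to "he" versus "she". The exact numbers depend on the embedding chosen, but occupation words typically skew in the gendered directions described above.

```python
# Rough probe of gendered word associations in a pretrained embedding.
# Assumes gensim is installed; the model (~66 MB) is downloaded on first run.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # pretrained GloVe word vectors

occupations = ["scientist", "mathematician", "engineer", "nurse", "homemaker", "teacher"]
for word in occupations:
    # Positive gap => the word sits closer to "he" than to "she" in vector space.
    gap = vectors.similarity(word, "he") - vectors.similarity(word, "she")
    print(f"{word:15s} he-vs-she similarity gap: {gap:+.3f}")
```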

Research conducted on the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, AI-driven software utilized in some parts of the USA to predict whether a person convicted of a crime would commit a crime again, concluded that even though ethnicity or race is not used as an input by COMPAS, its results are often biased against the African American population.

AI learning is not merely a passing issue, and as more information is collated by AI tools, the biases are only projected to increase. For example, research conducted by King’s College London found that women in leadership positions received more negative comments on Twitter than their male counterparts. The research also found that a Google image search for the terms ‘President’ or ‘Prime Minister’ showed approximately 95% male faces. The researchers assert that, based on such data, AI tools will learn that women are less liked in leadership positions and that important leadership positions are held by men. They further add that this learning will remain within the AI tool’s understanding and, with our increasing dependence on AI tools, will be projected back onto us, creating an unbalanced situation for women in public life. This signifies that unless steps are taken to prevent AI tools from learning discriminatory behaviour, the consequences will be immensely problematic.

Since AI functions on acquired learning or on input from human resources, decisions reached through AI have a high probability of producing a discriminatory outcome. Intent is not a requirement for indirect discrimination through AI. There are multiple causes of indirect discrimination in and through artificial intelligence. First, people in the IT/STEM fields are predominantly white men; the World Economic Forum’s (WEF) global gender gap report states that 78% of AI professionals are male. This magnifies the chances of indirect bias being fed into the final product. Second, discrimination in AI occurs mostly through stereotyping, in particular gendered stereotyping. Gender stereotyping is the presentation of a generalized preconception about the features possessed by, or the roles designated to, men and women. It begins with generalized views or preconceptions, which take the form of assumptions about characteristics, attributes and roles, leading to inferences about men and women. These stereotypes are based on physical, biological, emotional and cognitive attributes; sexual characteristics and behaviour; and societal roles understood in common parlance in terms of gendered traits. Stereotypes predominant in societal understanding, such as those based on gender or race, inadvertently find their way into AI tools.

The problem of stereotyping can be extended to caste-based discrimination in India. Considering the Amazon example, where the AI tool overlooked women applicants for open positions, it is probable that a similar tool used in India could ignore people from certain castes. The State of Working India study conducted by Azim Premji University in 2018 pointed out that while persons from Scheduled Tribes and Scheduled Castes were predominantly employed in low-paying jobs, their numbers in high-paying occupations (except public administration) have remained fairly low. The study further states that, on average, a person from the Scheduled Castes earns only 56% of what a person from the upper-caste population earns, while the figure is 55% for Scheduled Tribes and 72% for Other Backward Classes. Taking these figures into account, an AI tool designed to shortlist candidates for jobs could ignore these groups since, according to the data it would have collated, their previous representation would have remained low. Further, an AI tool designed to assess pay scales could place this section in a lower pay slab vis-à-vis their upper-caste counterparts. Even where attention is paid to not mentioning a person’s caste anywhere in the application, the AI could make assumptions based on surnames. Another example that can be cited here is that of Shaadi.com, a matrimonial website, which, according to a report by the Sunday Times, did not offer people belonging to the Scheduled Castes as potential life-partners for profiles set up by members of the Brahmin community unless those profiles expanded their preference to incorporate all castes.

Artificial intelligence as a subject: Analyzing representational bias

Since AI technology is still in its infancy, not much real-life data is available where AI functions as a subject. However, AI-based humanoids and AI representation in popular culture can provide certain insights into how our understanding of gender and colour percolates into AI representation. Female humanoid robots like Sophia by Hanson Robotics and Erica by Hiroshi Ishiguro are made aesthetically appealing and are often objectified; they reinforce the conventional female archetype through their design, structure, and mannerisms.

Female stereotypes manifest themselves in AI representation in various forms. First, the role of the female AI is centred on performing household chores and home-making activities. Vicki from Small Wonder, Karishma from Karishma ka Karishma (Karishma’s Magic), and Irona from Richie Rich are female humanoids who, despite possessing advanced technologies, are primarily used to help the home-maker with household chores.

Second, the female AI is depicted as performing the societal archetype of feminine jobs: jobs that require nurturing capacity, companionship, or meticulousness without much application-based understanding. Samantha from Her is a female virtual assistant who helps the protagonist navigate his work, cope with his divorce, and find companionship; Nila from 2.0 is a female humanoid created to be a helper, caretaker and assistant (Nila is an acronym for ‘nice, intelligent, lovely assistant’). J.A.R.V.I.S., from the Marvel Cinematic Universe, is one of the rare representations of a male AI assistant.

Third, the depictions of female cyborgs have more often than not been those of seductive or destructive entities, or both. Works like Ex Machina, Westworld and I Am Mother try to break this mould by giving us female cyborgs who are intelligent, marking a shift from abused objects or villainous bodies that die in the end to more likeable beings who survive. However, none are free from the criticism that most other depictions of female humanoids garner, namely that the robots are objects of physical attraction or adopt a more nurturing, feminine role in the end.

Fourth, even where creators attempt to break the stereotypical moulds, gender stereotypes manifest themselves through body language and reactions. An example is Doraemon, a male robot cat, and his sister Dorami, from the Japanese manga and anime series Doraemon. While the series tries to demonstrate gender versatility, Doraemon is messy and has blue skin, whereas Dorami has long eyelashes, wears a bow on her head, and is organized and meticulous.

Fifth, gender identity is not necessarily portrayed in the popular imagination through the apparent physical construct. Consider the film WALL-E, which traces the journey of two robots, WALL-E and EVE: neither robot’s physical construct has a gender assigned to it. However, the film anthropomorphizes them through gender stereotypes, assigning WALL-E a male identity and EVE a female one. This is done through various devices: physical appearance, with WALL-E looking more rugged compared to EVE’s petite aesthetics; mannerisms, with WALL-E having a more childlike quality while EVE is more mature and motherly; voice portrayal; and job assignment, with EVE, although holding the superior job and being theoretically better qualified, requiring WALL-E to be her saviour throughout the film.

Conclusion

It can be ascertained that artificial intelligence, knowingly or unknowingly, becomes susceptible to bias, most prominently in the context of marginalized groups. While individual examples of inherent bias in AI have been discussed, the presence or absence of these parameters in combination could have different impacts on the outcome achieved by AI: for example, a woman of colour from a marginalized section could face more issues than a white woman. Thus, when designing AI, it is important to consider the wide-ranging impact it would have on societal perceptions.

First, to minimize the concerns that AI poses, mechanisms such as increasing diversity in the STEM fields can be employed; lived experiences are required to reduce bias in AI. Second, AI algorithms could be directed to recognize and learn feminist language and critical race language. Third, intensified efforts are recommended to develop legal directives that may help bridge the legal lacunae in AI governance.

Tanaya Thakur is a Doctoral Student at the Faculty of Legal Studies, South Asian University, New Delhi.

