Topic: Ethics in AI
Topic in Syllabus: General Studies Paper 4: Ethics
A technology should be evaluated on the basis of both its utility and the intention of its creator.
We can often intuitively recognise whether an action is ethical or not.
Consider a case study: a cigarette company wants to decide whether to launch a new product, whose primary feature is reduced tar. It plans to tell customers that the lower tar content is a ‘healthier’ option. This is only half true. In reality, a smoker may have to inhale more frequently from a low-tar cigarette to get the flavour of a regular one.
- the egoistic perspective states that one should take actions that result in the greatest good for oneself. The cigarette company is likely to sell more cigarettes, assuming that the new product wins over more customers. Hence, from an egoistic perspective, the company should launch the new cigarette.
- the utilitarian perspective states that one should take actions that result in the greatest good for all. Launching the new cigarette is good for the company. The new brand of cigarette also provides a ‘healthier’ choice for smokers, and more choice is good for customers. Hence, the company should launch the product. The egoistic and utilitarian perspectives together form the ‘teleological perspective’, where the focus is on results that achieve the greatest good.
- the ‘deontological perspective’, on the other hand, focuses on the intention of the actor rather than the results. The company deceives customers when it claims that the new cigarette is ‘healthier’. Knowingly endangering human health is not an ethical intention. So, the company should not launch this cigarette.
The flawed facial recognition system:
In the context of Artificial Intelligence (AI), most commercially available AI systems are optimised using the teleological perspective, not the deontological one.
Let us analyse facial recognition, often showcased as an AI success story.
- An AI system introduced in 2015 with much fanfare in the U.S. failed to recognise faces of African Americans with the same accuracy as those of Caucasian Americans.
- Google, the creator of this AI system, quickly took remedial action. However, from a teleological perspective, this flawed AI system still gets a go-ahead.
- According to the 2010 census, Caucasian Americans constituted 72.4% of the country’s population. So an AI system that identifies Caucasian American faces better is useful to a majority of Internet users in the U.S., and hence to Google.
- From a deontological perspective, the system should have been rejected: its design evidently did not aim to identify people of all races equally well, which would have been the ethical aim to have.
- Social media is not the only context where AI facial recognition systems are used today. These systems are increasingly being used for law enforcement. Imagine the implications of being labelled a threat to public safety just because limited data based on one’s skin colour was used to train the AI system.
Ethical basis of AI:
The ethical basis of AI, for the most part, rests outside the algorithm.
The bias is in the data used to train the algorithm. It stems from our own flawed historical and cultural perspectives — sometimes unconscious — that contaminate the data.
It is also in the way we frame the social and economic problems that the AI algorithm tries to solve.
An ethical basis resting on both teleological and deontological perspectives gives us more faith in a system. Sometimes, even an inclusive intention may need careful scrutiny.
Polaroid’s ID-2 camera, introduced in the 1960s, produced quality photographs of people with darker skin. However, reports later emerged that the company had developed it for use in the dompas, an identification document black South Africans were forced to carry during apartheid.
Understanding and discussing the ethical basis of AI is important for India. Reports suggest that the NITI Aayog is ready with a ₹7,500 crore plan to invest in building national AI capability and infrastructure.
The transformative potential of AI in India is huge, but it must be rooted in an egalitarian ethical basis.
Any institutional framework for AI should have a multidisciplinary and multi-stakeholder approach, and have an explicit focus on the ethical basis.
Practice Question: Is Artificial Intelligence dangerous to humanity? How can we ensure that machines behave ethically and that they are used ethically? (250 words)