Regulating Artificial Intelligence – The Big Picture (RSTV)

IASbaba | February 3, 2020

TOPIC: General Studies 2

  • Government policies and interventions for development in various sectors and issues arising out of their design and implementation.

In News: One of the most powerful men in IT, Sundar Pichai, has backed regulation of artificial intelligence. While Pichai isn’t the first big tech executive to say so publicly, his voice matters, given that Google is arguably the world’s largest AI company. Tesla and SpaceX chief Elon Musk has repeatedly been vocal about the need to regulate AI, even warning that “by the time we are reactive in AI regulation, it’s too late”. Microsoft president Brad Smith is another prominent figure in tech who has called for regulation of AI.

In an editorial, Pichai advocated regulating AI in a way that weighs both the potential harms and the societal benefits of the technology. He also said that governments must be aligned on regulations around AI for “making global standards work”.

While India has been vocal about the use of AI in various sectors, it is far from regulating the technology. A 2018 NITI Aayog paper proposed five areas where AI can be useful; in the same paper, the think tank also flagged the lack of regulation around AI as a major weakness for India.

Despite having made its way into markets across the world, AI still lacks guideposts: it is neither regulated nor even clearly understood in legal terms.

Take the example of Sophia, the humanoid robot granted citizenship by Saudi Arabia. Will she be permitted to drive, as Saudi women were from June 2018? Will she be allowed to purchase property? And if she were to commit a crime on the scale of her apparently erroneous statement that she wanted to destroy humankind, what punishment would be awarded, and to whom?

AI is wholly based on data generated and gathered from various sources. A biased data set can therefore lead to a biased decision by the system, or to an incorrect response from a chatbot.

The point is that AI is growing multi-fold, and we still do not know all the advantages or pitfalls associated with it. That is why a two-layered protection model is of utmost importance: one, technological regulators; and two, laws to control AI actions and to fix accountability for errors.

Accountability for Errors

Let’s take the example of AI in the form of personalised chatbots. Chatbots are chat-based interfaces that pop up on websites and with which customers can interact. A chatbot can either follow a scripted text or, through machine learning (ML) and increased interaction, deviate from the standard questions to provide a more human-like exchange. In the course of communicating with the chatbot, if a person divulges sensitive personal information for any reason whatsoever, what happens to that data?
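To make the distinction concrete, here is a minimal, purely illustrative Python sketch; the prompts, replies and function names are assumptions for this example and are not drawn from any real product. It contrasts a scripted chatbot with a free-text, ML-style one, and shows how sensitive personal information typed by a user can end up stored in the conversation log:

```python
# Hypothetical sketch: scripted chatbot vs. a free-text ("ML-style") chatbot.
# All prompts, replies and names here are illustrative assumptions.

SCRIPT = {
    "track order": "Please enter your order ID.",
    "refund": "Refunds are processed within 7 working days.",
}

def scripted_reply(user_text: str) -> str:
    """Scripted chatbot: responds only to a fixed set of known prompts."""
    return SCRIPT.get(
        user_text.strip().lower(),
        "Sorry, I can only help with 'track order' or 'refund'.",
    )

def freeform_reply(user_text: str, conversation_log: list) -> str:
    """Stand-in for an ML-driven chatbot: accepts any free text and logs it.
    Whatever the user types, including sensitive personal information such as
    card or bank details, ends up in the log, which is where the
    data-protection and accountability questions discussed below arise."""
    conversation_log.append(user_text)   # the information is collected as-is
    return "Thanks, let me look into that for you."

log: list = []
print(scripted_reply("refund"))
print(freeform_reply("My card number is <redacted>", log))  # sensitive data now sits in `log`
```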

So, in the case of an ML chatbot that does not work as per a scripted text and has collected sensitive personal information, who is responsible if Rule 5(3) of the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 is breached? The most obvious answer would be the business unit or company, because the 2011 Rules state that “The body corporate or any person who on behalf of the body corporate…” collects information. However, could the business avoid liability by claiming that it was not aware that the chatbot, owing to its machine-learning ability, had collected sensitive personal information?

We do not have any clear provisions for advanced chatbots that do not work on a scripted text, and with this gap in the law, accountability may take a hit. Additionally, what happens if an AI robot is given citizenship in India? Who is responsible for its actions? Or, in the case of an autonomous car accident, who is responsible for damage to property, injury, or the death of a person?

Reflects existing social biases and prejudice

Much recent research shows that applications based on machine learning reflect existing social biases and prejudice. Such bias can occur if the data set the algorithm is trained on is unrepresentative of the reality it seeks to capture. Bias can also occur if the data set itself reflects existing discriminatory or exclusionary practices. The impact of such data bias can be seriously damaging in India, particularly at a time of growing social fragmentation: it can entrench social bias and discriminatory practices, while making the processes through which discrimination occurs both invisible and pervasive.
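As a purely illustrative sketch of how such bias arises from the data rather than from the algorithm, consider the following toy Python example; the groups, numbers and “approval” rule are synthetic assumptions, not real data. A model that simply learns historical approval rates will faithfully reproduce whatever discrimination is baked into those records:

```python
# Toy, hypothetical illustration of data bias: a "model" that learns the
# historical approval rate per group reproduces past discrimination.
from collections import defaultdict

# Synthetic "historical" decisions as (group label, approved?) pairs (assumed data).
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 20 + [("B", False)] * 80

# "Training": count past approvals and totals for each group.
counts = defaultdict(lambda: [0, 0])          # group -> [approved, total]
for group, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def predict(group: str) -> bool:
    """Approve an applicant if their group's historical approval rate exceeds 50%."""
    approved, total = counts[group]
    return approved / total > 0.5

# Two otherwise identical applicants get different outcomes purely because of
# the skew in the training data.
print(predict("A"))   # True  (80% historical approval)
print(predict("B"))   # False (20% historical approval)
```

The bias here is invisible at prediction time: the code contains no discriminatory rule, yet the learned behaviour is discriminatory because the data is.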

Even if estimates of AI’s contribution to GDP are correct, the adoption of these technologies is likely to be confined to niches within the organised sector. These industries are likely to be capital rather than labour intensive, and thus unlikely to contribute to large-scale job creation.

Will replace low to medium skilled jobs

At the same time, AI applications can most readily replace low- to medium-skilled jobs within the organised sector. This is already being witnessed in the BPO sector, where basic call and chat tasks are now automated. Re-skilling will be important, but those who lose their jobs are unlikely to be the same people who get re-skilled: the arc of technological change and societal adaptation is longer than people’s working lives.

The Way Forward

With all the positive impact AI has to offer, it is of utmost importance for the Government of India to establish sound data policies so that society can realise these benefits. Achieving meaningful results will depend on India’s ability to create an environment that fosters the development of AI and builds trust and confidence in the technology. AI systems are only as strong as the quantity and quality of the data available to train them; if data cannot be accessed and shared, AI will suffer. This means the government has a critical role to play in the future of India’s AI landscape.

Our laws will need to be amended, or new laws for AI technologies and processes adopted, to fill the existing lacunae in the growing AI space. There is a need to frame basic national-level guidelines that any AI activity, whether indigenous, foreign, or a modification of an open-source AI, should meet. These guidelines would serve as the foundation for any amendments to existing laws or for brand-new AI laws.

In addition to developing AI applications and creating a skilled workforce, the government needs to prioritise research that examines the complex social, ethical and governance challenges associated with the spread of AI-driven technologies. Blind technological optimism might entrench, rather than alleviate, the grand Indian challenge of achieving growth with equity.

In fact, most AI-advanced countries, such as Canada, have insisted on end-to-end ‘human involvement’ in order to ensure the accountability and security of AI systems.

Connecting the Dots:

  1. Analyse the need for, and the challenges involved in, regulating Artificial Intelligence in India.
