Ethics
Context: UNESCO’s global agreement on the ethics of AI can guide governments and companies alike
- Artificial intelligence (AI) is more present in our lives than ever.
Issues in AI
- The data fed into AI systems often aren’t representative of the diversity of our societies, producing outcomes that are biased or discriminatory.
- For instance, while India and China together constitute approximately a third of the world’s population, Google Brain estimated that they account for just 3% of the images in ImageNet, a widely used dataset.
- Facial recognition technologies, which are used to access our phones, bank accounts and apartments, and are increasingly employed by law-enforcement authorities, have been found to perform poorly at identifying women and darker-skinned people.
- In three such programs released by major technology companies, the error rate was 1% for light-skinned men, but rose to 19% for dark-skinned men and up to 35% for dark-skinned women.
- These issues are of particular importance to India, which is one of the world’s largest markets for AI-related technologies, valued at over $7.8 billion in 2021.
- To ensure that the full potential of these technologies is reached, the right incentives for ethical AI governance need to be established in national and sub-national policy.
A common rulebook
- Until recently, there was no common global strategy to take forward this important agenda.
- This changed when 193 countries reached a groundbreaking agreement at UNESCO on how AI should be designed and used by governments and tech companies.
- It aims to fundamentally shift the balance of power between people and the businesses and governments developing AI.
- Countries which are members of UNESCO have agreed to implement this recommendation by taking action to regulate the entire AI system life cycle, ranging from research, design and development to deployment and use.
Recommendations
- The Recommendation underscores the importance of the proper management of data, privacy and access to information.
- It also calls on member states to ensure that appropriate safeguards are devised for the processing of sensitive data, and that effective accountability and redress mechanisms are provided in the event of harm.
- The Recommendation takes a strong stance that:
- AI systems should not be used for social scoring or mass surveillance purposes;
- particular attention must be paid to the psychological and cognitive impact that these systems can have on children and young people;
- member states should invest in and promote not only digital, media and information literacy skills, but also socio-emotional and AI ethics skills to strengthen critical thinking and competencies in the digital era.
Significance
- The new agreement is broad and ambitious.
- It is a recognition that AI-related technologies cannot continue to operate without a common rulebook.
- Governments will themselves use the recommendation as a framework to establish and update legislation, regulatory frameworks, and policy to embed humanistic principles in enforceable accountability mechanisms.
Source: The Hindu