Science and Technology
Syllabus
- GS-3: Science and Technology - developments and their applications and effects in everyday life.
- GS-3: Awareness in the fields of IT, Space, Computers, robotics
- GS-2: Bilateral, regional and global groupings and agreements involving India and/or affecting India’s interests.
Context: The Cyberspace Administration of China is rolling out new regulations, effective from January 10, to restrict the use of deep synthesis technology and curb disinformation.
- Deep synthesis is defined as the use of technologies, including deep learning and augmented reality, to generate text, images, audio and video to create virtual scenes.
- One of the most notorious applications of the technology is deepfakes, where synthetic media is used to swap the face or voice of one person for another.
- Deepfakes are getting harder to detect as the technology advances. They are used to generate celebrity porn videos, produce fake news, and commit financial fraud, among other wrongdoings.
- Deepfakes are a compilation of artificial images and audio put together with machine-learning algorithms to spread misinformation and replace a real person’s appearance, voice, or both with similar artificial likenesses or voices.
- It can create people who do not exist and it can fake real people saying and doing things they did not say or do.
- The term deepfake originated in 2017, when an anonymous Reddit user who called himself “Deepfakes” used Google’s open-source deep-learning technology to create and post pornographic videos.
- The videos were doctored with a technique known as face-swapping (a minimal sketch of how such a swap works follows this list).
- The user “Deepfakes” replaced real faces with celebrity faces.
- Deepfake technology is now being used for nefarious purposes like
- Scams and hoaxes
- Celebrity pornography
- Election manipulation
- Social engineering
- Automated disinformation attacks
- Identity theft and financial fraud
- It has become one of the modern frauds of cyberspace, along with fake news, spam/phishing attacks, social engineering fraud, catfishing and academic fraud.
- Deepfake technology has been used to impersonate notable personalities like former U.S. Presidents Barack Obama and Donald Trump, India’s Prime Minister Narendra Modi, Facebook chief Mark Zuckerberg and Hollywood celebrity Tom Cruise, among others.
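To make the face-swapping idea above concrete, the sketch below shows the shared-encoder, two-decoder autoencoder design commonly associated with early deepfake tools. It is only a minimal illustration in PyTorch: the layer sizes, the 64x64 input resolution, and the class names are assumptions made for clarity, not the pipeline of any particular tool.

```python
# Minimal sketch of a face-swap autoencoder (illustrative assumptions only).
# One shared encoder learns a common latent face code; one decoder per identity
# learns to reconstruct that identity. A "swap" decodes person A's latent code
# with person B's decoder, rendering B's likeness with A's pose and expression.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1),    # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 256),               # shared latent face code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),    # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 128, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a = Decoder()  # would be trained to reconstruct person A's face crops
decoder_b = Decoder()  # would be trained to reconstruct person B's face crops

# After training, feeding A's latent code into B's decoder performs the swap.
face_a = torch.rand(1, 3, 64, 64)       # stand-in for an aligned face crop of A
swapped = decoder_b(encoder(face_a))    # B's likeness driven by A's expression
```

In practice both decoders are trained against the same encoder on aligned face crops of each person, which is why abundant public photos and videos of the target make the swap easier.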
Advances in AI-generated synthetic media, aka deepfakes, have clear benefits in certain areas such as accessibility, education, film production, criminal forensics, and artistic expression.
Accessibility:
- Deepfakes can accelerate accessibility efforts and improve equity. Microsoft’s Seeing.ai and Google’s Lookout leverage AI for recognition and synthetic voice to narrate objects, people, and the world.
- AI-Generated synthetic media can power personalized assistive navigation apps for pedestrian travel.
- Technology companies are working to enable and develop AI-Generated synthetic media scenarios for people living with ALS (Lou Gehrig’s Disease).
- Synthetic voice is also essential to enable such patients to remain independent. Deepfake voice can also help people who have had speech impediments since birth.
Education
- AI-Generated synthetic media can bring historical figures back to life for a more engaging and interactive classroom, making lessons more impactful and serving as a better learning tool.
- For example, JFK’s speech on resolving the Cold War, which was never delivered, was recreated with a synthetic voice in his voice and speaking style, letting students learn about the issue in a creative way.
- Synthetic human anatomy, sophisticated industrial machinery, and complex industrial projects can be modeled and simulated in a mixed reality world to teach students.
Arts
- AI-Generated synthetic media can bring unprecedented opportunities in the entertainment business that currently use high-end CGI, VFX, and SFX technologies to create artificial but believable worlds for compelling storytelling.
- Samsung’s AI lab in Moscow brought Mona Lisa to life by using Deepfake technology.
- In the video gaming industry, AI-generated graphics and imagery can accelerate the speed of game creation. Nvidia demoed a hybrid gaming environment created by deepfakes and is working on bringing it to market soon.
Autonomy & Expression
- Synthetic media can help human rights activists and journalists to remain anonymous in dictatorial and oppressive regimes. Deepfake can be used to anonymize voice and faces to protect their privacy.
- Deep Empathy, a UNICEF and MIT project, utilizes deep learning to learn the characteristics of Syrian neighborhoods affected by conflict. It then simulates how cities around the world would look amid a similar conflict.
- The Deep Empathy project created synthetic war-torn images of Boston, London and other key cities around the world to help increase empathy for victims of a disaster region.
- With the improvement in technology, deepfakes are also getting better. Initially, only an individual with advanced knowledge of machine learning and access to the victim’s publicly available social media profile could make deepfakes.
- However, the easy availability of apps and software has made it possible even for a person with basic computer knowledge to create such fakes, as apps and websites capable of such editing have become more common and easily accessible to the average user.
- In other words, access to commodity cloud computing, algorithms, and abundant data has created a perfect storm to democratise media creation and manipulation.
- Deepfakes are now being used as a tool to spread computational propaganda and disinformation at scale and with speed.
- Disinformation and hoaxes have evolved from mere annoyance to high-stakes warfare, creating social discord, increasing polarisation, and in some cases influencing election outcomes.
Such technologies can give people a voice, purpose, and ability to make an impact at scale and with speed. But as with any new innovative technology, it can be weaponised to inflict harm.
- Damage to Personal Reputation: Deepfake can depict a person indulging in antisocial behaviours and saying vile things. These can have severe implications on their reputation, sabotaging their professional and personal life.
- It can be used to create fake pornographic videos and to make politicians appear to say things they did not, so the potential for damage to individuals, organisations and societies is vast.
- Targeting Women: The malicious use of deepfakes is most visible in pornography, inflicting emotional and reputational harm and, in some cases, violence on the individual.
- Issue of Fait Accompli: Even if the victim could debunk the deep fake, it may come too late to remedy the initial harm.
- Blackmailing Tool: Further, Deepfakes can be deployed to extract money, confidential information, or exact favours from individuals.
- Destabilise Society: Deepfakes can become a very effective tool to sow the seeds of polarisation, amplifying division in society, and suppressing dissent.
- Public Warfare: A deepfake could serve as a powerful tool for a nation-state to undermine public safety and create uncertainty and chaos in a target country. Nation-state actors with geopolitical aspirations, ideological believers, violent extremists, and economically motivated enterprises can manipulate media narratives using deepfakes.
- Anti-state sentiment: Deepfakes can be used by insurgent groups and terrorist organisations to depict their adversaries as making inflammatory speeches or engaging in provocative actions, to stir up anti-state sentiments among people.
- Undermining democracy: A deepfake can also alter the democratic discourse, undermine trust in institutions and impair diplomacy. False information about institutions, public policy, and politicians powered by a deepfake can be exploited to spin the story and manipulate belief.
- A deep fake of a political candidate can sabotage their image and reputation.
- Leaders can also use them to increase populism and consolidate power.
- Liar’s Dividend: It is a situation where an undesirable truth is dismissed as a deepfake or fake news. It can also help public figures hide their immoral acts behind the veil of deepfakes and fake news, dismissing their actual harmful actions as false.
- Creation of Echo Chambers in Social Media: Falsity is profitable and goes viral more readily than the truth on social platforms. Combined with distrust, existing biases and political disagreement can help create echo chambers and filter bubbles, creating discord in society.
- Disrupting the Right to Privacy: Given the increasing affordability of smartphones and the internet, a substantial share of the population of any given country has a digital presence on one or more social media platforms. The content shared over these platforms can be misused to create deepfakes, which is clearly a violation of the right to privacy.
- Regulation & Collaboration with Civil Society: Meaningful regulation, developed through collaborative discussion with the technology industry, civil society, and policymakers, can disincentivise the creation and distribution of malicious deepfakes.
- Detect and Amplify: We also need easy-to-use and accessible technology solutions to detect deepfakes, authenticate media, and amplify authoritative sources (a minimal sketch of media authentication follows this list).
- Enhancing Media Literacy: Media literacy for consumers and journalists is the most effective tool to combat disinformation and deep fakes. Improving media literacy is a precursor to addressing the challenges presented by deepfakes.
- To counter the menace of deepfakes, we must all take responsibility to be critical consumers of media on the Internet, think and pause before we share on social media, and be part of the solution to this infodemic.
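As a concrete illustration of the “Detect and Amplify” point above, the sketch below shows one simple way a publisher could tag original footage so that any later edit, deepfake or otherwise, fails verification. It uses only Python’s standard library (hashlib/hmac) and an invented shared key; real provenance schemes (for example, digital signatures or C2PA-style metadata) are more elaborate, so treat this as an assumption-laden sketch rather than a recommended design.

```python
# Sketch: authenticate media by tagging it at the source (illustrative only).
import hashlib
import hmac

PUBLISHER_KEY = b"demo-secret-key"  # invented shared secret for this example

def tag_media(media_bytes: bytes) -> str:
    """Publisher side: compute an authentication tag for the original file."""
    return hmac.new(PUBLISHER_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Consumer side: check that the file still matches the publisher's tag."""
    expected = hmac.new(PUBLISHER_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"...raw video bytes..."       # stand-in for real footage
tag = tag_media(original)

doctored = original + b" tampered"        # e.g. a deepfake edit of the footage
print(verify_media(original, tag))        # True  - authentic copy
print(verify_media(doctored, tag))        # False - fails verification
```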
- The policy requires deep synthesis service providers and users to ensure that any doctored content using the technology is explicitly labelled and can be traced back to its source (a minimal sketch of such labelling follows this list).
- The regulation also mandates that people using the technology to edit someone’s image or voice notify and obtain the consent of the person in question.
- When reposting news produced with the technology, the source can only be from the government-approved list of news outlets.
- Deep synthesis service providers must also abide by local laws, respect ethics, and maintain the “correct political direction and correct public opinion orientation”.
- China’s cyberspace watchdog said it was concerned that unchecked development and use of deep synthesis could lead to its use in criminal activities like online scams or defamation.
- The country’s recent move aims to curb risks that might arise from services offered by platforms that use deep learning or virtual reality to alter online content.
- If successful, China’s new policies could set an example and lay down a policy framework that other nations can follow.
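The labelling and traceability requirement above is a policy mandate rather than a prescribed technical mechanism. As one hypothetical illustration, a provider could attach a machine-readable record to each generated file, as sketched below; the field names and identifiers are invented for the example and do not reflect the actual Chinese rules.

```python
# Sketch: label deep-synthesis output and bind the label to the file (illustrative only).
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_label(media_bytes: bytes, provider_id: str, user_id: str) -> dict:
    """Build a machine-readable label marking content as synthetic and traceable."""
    return {
        "synthetic": True,                                  # explicit label
        "provider": provider_id,                            # traceable source platform
        "uploader": user_id,                                # traceable user
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # binds the label to the file
        "created": datetime.now(timezone.utc).isoformat(),
    }

label = build_provenance_label(
    b"...generated video bytes...",          # stand-in for synthetic output
    provider_id="deep-synthesis-platform-001",
    user_id="user-42",
)
print(json.dumps(label, indent=2))
```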
- The European Union has an updated Code of Practice to stop the spread of disinformation through deepfakes.
- The revised Code requires tech companies including Google, Meta, and Twitter to take measures in countering deepfakes and fake accounts on their platforms.
- They have six months to implement their measures once they have signed up to the Code. If found non-compliant, these companies can face fines of as much as 6% of their annual global turnover.
- The Code of Practice was signed in October 2018 by online platforms Facebook, Google, Twitter and Mozilla, as well as by advertisers and other players in the advertising industry. Microsoft joined in May 2019, while TikTok signed the Code in June 2020.
- However, the assessment of the Code revealed important gaps and hence the Commission has issued a Guidance on updating and strengthening the Code in order to bridge the gaps.
- Introduced in 2018, the Code of Practice on Disinformation brought together for the first time worldwide industry players to commit to counter disinformation.
- In July 2021, the U.S. introduced the bipartisan Deepfake Task Force Act to assist the Department of Homeland Security (DHS) in countering deepfake technology. The measure directs the DHS to conduct an annual study of deepfakes: assess the technology used, track its use by foreign and domestic entities, and come up with available countermeasures to tackle it.
- Some States in the United States such as California and Texas have passed laws that criminalise the publishing and distributing of deepfake videos that intend to influence the outcome of an election.
- In India, there are no specific legal rules against using deepfake technology. However, existing laws covering copyright violation, defamation and cyber offences can be invoked against its misuse.
- There are many related multilateral initiatives like the Paris Call for Trust and Security in Cyberspace, NATO Cooperative Cyber Defence Centre of Excellence and the Global Partnership on Artificial Intelligence. These forums are used by governments to coordinate with global and domestic actors to create deepfake policy in different areas.
Main Practice Question: Technology can give people a voice, purpose, and an ability to make an impact at scale and with speed. Analyse the statement in the context of deepfake technology.
Note: Write the answer to this question in the comment section.