· It is the world’s first comprehensive Artificial Intelligence law.
· It lays down rules and guidelines for specific risks associated with the use of AI in areas such as biometric authentication, facial recognition, deepfakes, and high-risk domains such as healthcare.
· Taking a horizontal, risk-based approach that will apply across sectors of AI development, the EU AI Act classifies the technology into four categories: prohibited, high-risk, limited-risk, and minimal-risk.
· Systems that violate or threaten human rights, for example through social scoring (creating “risk” profiles of people based on “desirable” or “undesirable” behaviour) or mass surveillance, are banned outright.
· High-risk systems, which have a significant impact on people’s lives and rights, such as those used for biometric identification or in education, health, and law enforcement, will have to meet strict requirements, including human oversight, security safeguards, and conformity assessments, before they can be put on the market.
· Systems involving user interaction, like chatbots and image-generation programmes, are classified as limited-risk and are required to inform users that they are interacting with AI and allow them to opt out.
· The most widely used systems, which pose no or negligible risk, such as spam filters and smart appliances, are categorised as minimal-risk. They will be exempt from regulation, but will need to comply with existing laws.
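The four-tier scheme above can be sketched as a simple lookup. This is purely illustrative: the tier names come from the text, but the mapping of example use cases to tiers and the `classify` helper are assumptions, not part of the Act.

```python
# Illustrative sketch of the Act's four risk tiers. The example use cases
# assigned to each tier are assumptions drawn from the summary above.
RISK_TIERS = {
    "prohibited": ["social scoring", "mass surveillance"],
    "high-risk": ["biometric identification", "education", "health", "law enforcement"],
    "limited-risk": ["chatbots", "image generation"],
    "minimal-risk": ["spam filters", "smart appliances"],
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known example use case, else 'unclassified'."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "unclassified"

print(classify("chatbots"))        # limited-risk
print(classify("social scoring"))  # prohibited
```

In practice, classification under the Act depends on the system's intended purpose and deployment context, not a fixed keyword list; the sketch only conveys the tiered structure.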
· The law will apply to any company doing business in the European Union, and allows for penalties of up to €35 million or 7% of global annual turnover, whichever is higher, for those that fail to comply.
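The "whichever is higher" rule in the penalty cap is simple arithmetic, sketched below. The function name and inputs are hypothetical; the figures (€35 million, 7%) are from the text.

```python
def max_penalty_eur(global_turnover_eur: float) -> float:
    """Upper bound on a fine under the Act: the higher of EUR 35 million
    or 7% of global annual turnover (figures as stated in the summary)."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# A company with EUR 1 billion turnover: 7% (EUR 70M) exceeds the EUR 35M floor.
print(max_penalty_eur(1_000_000_000))  # 70000000.0
# A smaller company: 7% of EUR 100M is EUR 7M, so the EUR 35M figure applies.
print(max_penalty_eur(100_000_000))    # 35000000.0
```

The turnover-based formulation means the cap scales with company size, so large firms cannot treat the fixed euro amount as a cost of doing business.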
· The act also enshrines the right of consumers to make complaints about the inappropriate use of AI by businesses, and to receive meaningful explanations for decisions made by AI systems that affect their rights.