
EU'S ARTIFICIAL INTELLIGENCE ACT


1. Context

After intense last-minute negotiations over the past few weeks on how to bring general-purpose artificial intelligence systems (GPAIS) like OpenAI's ChatGPT under the ambit of regulation, members of the European Parliament this week reached a preliminary deal on a new draft of the European Union's ambitious Artificial Intelligence Act, first drafted two years ago.

2. What is the EU AI Act?

  • The AI Act is a proposed European law on artificial intelligence (AI) – the first law on AI by a major regulator anywhere.
  • The law assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements.
  • Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.

3. Why should we regulate Artificial Intelligence?

  • As artificial intelligence technologies become omnipresent and their algorithms more advanced, capable of performing a wide variety of tasks (including voice assistance, recommending music, driving cars, detecting cancer, and even deciding whether you get shortlisted for a job), the risks and uncertainties associated with them have also ballooned.
  • Many AI tools are essentially black boxes, meaning even those who designed them cannot explain what goes on inside them to generate a particular output.
  • The consequences of complex and unexplainable AI tools have already manifested in wrongful arrests due to AI-enabled facial recognition; in discrimination and societal biases seeping into AI outputs; and, most recently, in how chatbots based on large language models (LLMs) such as Generative Pre-trained Transformer 3 (GPT-3) and GPT-4 can generate versatile, human-competitive, and genuine-looking content that may be inaccurate or copyrighted.

4. How was the AI Act formed?

  • The legislation was drafted in 2021 to bring transparency, trust, and accountability to AI and create a framework to mitigate risks to the safety, health, fundamental rights, and democratic values of the EU.
  • It also aims to address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy.
  • The legislation seeks to strike a balance between promoting “the uptake of AI while mitigating or preventing harms associated with certain uses of the technology”. 
  • Similar to how the EU’s 2018 General Data Protection Regulation (GDPR) made it an industry leader in the global data protection regime, the AI law aims to “strengthen Europe’s position as a global hub of excellence in AI from the lab to the market” and ensure that AI in Europe respects the 27-country bloc’s values and rules.

5. What does the draft document entail?

  • The draft of the AI Act broadly defines AI as “software that is developed with one or more of the techniques that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.
  • It identifies AI tools based on machine learning and deep learning, knowledge- and logic-based approaches, and statistical approaches.
  • The Act’s central approach is the classification of AI technologies based on the level of risk they pose to the “health and safety or fundamental rights” of a person.
  • There are four risk categories in the Act: unacceptable, high, limited, and minimal (an illustrative sketch follows this list).
  • The Act prohibits the use of technologies in the unacceptable-risk category, with few exceptions.
  • These include the use of real-time facial and biometric identification systems in public spaces; systems of social scoring of citizens by governments leading to “unjustified and disproportionate detrimental treatment”; subliminal techniques to distort a person’s behavior; and technologies that can exploit vulnerabilities of the young or elderly, or persons with disabilities.
  • The Act lays substantial focus on AI in the high-risk category, prescribing several pre- and post-market requirements for developers and users of such systems.
  • Some systems that fall under this category include biometric identification and categorization of natural persons.
  • Also in the high-risk category are AI systems used in healthcare, education, employment (recruitment), law enforcement, and justice delivery, as well as tools that provide access to essential private and public services (including financial services such as loan-approval systems).
  • The Act envisages establishing an EU-wide database of high-risk AI systems and setting parameters so that future technologies, or those under development, can be included if they meet the high-risk criteria.
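
The four-tier structure lends itself to a simple decision rule. The snippet below is a hypothetical sketch for illustration only: the tier names follow the draft Act, but the keyword lists and the classify() helper are assumptions made for clarity, not anything the legislation itself prescribes.

```python
# Hypothetical sketch (names and keyword lists are illustrative, not from the Act):
# one way a compliance team might encode the draft's four risk tiers and route a
# described AI use case to the obligations summarised above.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited uses, e.g. government social scoring
    HIGH = "high"                   # pre- and post-market requirements apply
    LIMITED = "limited"             # mainly transparency/disclosure duties
    MINIMAL = "minimal"             # largely left unregulated


# Illustrative keyword lists distilled from the bullet points above (assumptions).
PROHIBITED_USES = {"social scoring", "real-time biometric identification",
                   "subliminal manipulation"}
HIGH_RISK_DOMAINS = {"recruitment", "education", "healthcare",
                     "law enforcement", "credit scoring"}


def classify(use_case: str) -> RiskTier:
    """Assign a draft-Act-style risk tier to a plain-text use-case description."""
    text = use_case.lower()
    if any(term in text for term in PROHIBITED_USES):
        return RiskTier.UNACCEPTABLE
    if any(term in text for term in HIGH_RISK_DOMAINS):
        return RiskTier.HIGH
    if "chatbot" in text:  # limited-risk systems mainly carry disclosure duties
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


print(classify("CV-scanning tool that ranks job applicants for recruitment"))
# -> RiskTier.HIGH
```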

6. What is the recent proposal on general-purpose AI like ChatGPT? 

  • As recently as February this year, general-purpose AI such as the language-model-based ChatGPT, used for a plethora of tasks from summarising concepts on the internet to serving up poems, news reports, and even a Colombian court judgment, did not feature in the EU lawmakers’ plans for regulating AI technologies.
  • The bloc’s 108-page proposal for the AI Act, published two years earlier, included only one mention of the word “chatbot.”
  • By mid-April, however, members of the European Parliament were racing to update those rules to catch up with an explosion of interest in generative AI, which has provoked awe and anxiety since OpenAI unveiled ChatGPT six months ago.
  • Lawmakers now target the use of copyrighted material by companies deploying generative AI tools such as OpenAI’s ChatGPT or image generator Midjourney, as these tools train themselves from large sets of text and visual data on the internet.
  • Such companies will have to disclose any copyrighted material used to develop their systems.

7. AI Governance in the USA and China

  • The rapidly evolving pace of AI development has led to diverging global views on how to regulate these technologies.
  • The U.S. currently does not have comprehensive AI regulation and has taken a fairly hands-off approach.
  • The Biden administration released a blueprint for an AI Bill of Rights (AIBoR).
  • Developed by the White House Office of Science and Technology Policy (OSTP), the AIBoR outlines the harms of AI to economic and civil rights and lays down five principles for mitigating these harms.
  • The blueprint, instead of a horizontal approach like the EU’s, endorses a sector-specific approach to AI governance, with policy interventions for individual sectors such as health, labor, and education, leaving it to sectoral federal agencies to come up with their own plans.
  • On the other end of the spectrum, China over the last year came out with some of the world’s first nationally binding regulations targeting specific types of algorithms and AI.
  • It enacted a law to regulate recommendation algorithms with a focus on how they disseminate information.
For Prelims: Artificial Intelligence, ChatGPT, the European law on artificial intelligence (AI Act), large language models (LLMs) such as Generative Pre-trained Transformer 3 (GPT-3) and GPT-4, the EU’s 2018 General Data Protection Regulation (GDPR), the AI Bill of Rights (AIBoR), and the Office of Science and Technology Policy (OSTP).
For Mains: 1. What is the EU AI Act? Why should Artificial Intelligence be regulated? Discuss.
 

Previous Year Question

1. With the present state of development, Artificial Intelligence can effectively do which of the following? (UPSC 2020)

1. Bring down electricity consumption in industrial units
2. Create meaningful short stories and songs
3. Disease diagnosis
4. Text-to-Speech Conversion
5. Wireless transmission of electrical energy
Select the correct answer using the code given below:
A. 1, 2, 3, and 5 only
B. 1, 3, and 4 only
C. 2, 4, and 5 only
D. 1, 2, 3, 4 and 5
Answer: B
Source: The Hindu
