
A Global Legal Framework for Regulating Artificial Intelligence: Importance and Implications

The emerging capabilities of artificial intelligence (‘AI’) are transforming the world at an unprecedented pace. AI is revolutionizing sectors as varied as automotive, telecom, retail, healthcare, education, financial services, and agriculture. The significant growth of AI in recent times can be attributed to advancements in internet technologies, exponential data growth, increasing computing power, and technologies such as cloud computing and edge computing. The market for AI is large and growing rapidly, with predictions that it will reach $500 billion by 2024, at a compound annual growth rate (CAGR) of 17.5%.[i]

Alongside new advancements and opportunities, AI also brings certain risks. These include poor data quality, data bias, threats to data privacy and security, and inaccurate algorithms, as well as the unethical use of AI.

AI and deep learning models may be difficult to understand, even for those who work directly with the technology. This leads to a lack of transparency about how and why AI reaches its conclusions, and a corresponding lack of explanation of what data AI algorithms use and why they may make biased or unsafe decisions. Although the effects of AI are not yet completely known, some of them can be classified. There are risks that may arise as direct effects, such as lack of transparency, job loss due to AI automation, social manipulation through algorithms, loss of privacy and social surveillance through AI technology, lack of data privacy, socioeconomic inequality and discrimination, autonomous weapons powered by AI, loss of human influence, and false negatives and false positives, among other things. These risks can, however, be mitigated by developing specific legal regulations to govern AI and by creating organizational AI standards.

GOVERNANCE OF AI AND DEVELOPMENTS ACROSS THE GLOBE

Over time, several jurisdictions across the globe have come forward to address the issues that accompany the use of AI. In this article, we cover how AI is being governed in several countries, including India, and what steps different jurisdictions have taken to counter the risks associated with the use of AI.

European Union

The European Commission proposed an EU regulatory framework on AI in April 2021. The draft EU Artificial Intelligence Act (‘AI Act’) was the first ever attempt to enact a regulation to govern AI. The proposed AI Act would apply to any AI system used within the EU. Although it is not yet in force, it provides a clear insight into the future of AI regulation. The proposed act divides AI systems into three categories: (a) unacceptable-risk AI systems; (b) high-risk AI systems; and (c) limited- and minimal-risk AI systems.

Unacceptable-risk AI systems include:

  1. subliminal, manipulative, or exploitative systems that cause harm;
  2. real-time, remote biometric identification systems used in public spaces for law enforcement; and
  3. all forms of social scoring, such as AI or technology that evaluates an individual’s trustworthiness based on social behavior or predicted personality traits.[ii]

High-risk AI systems include those that evaluate consumer creditworthiness, assist in recruiting or managing employees, or use biometric identification. Under the proposed AI Act, the EU would review and potentially update the list of systems in this category on an annual basis.

Limited- and minimal-risk AI systems include many AI applications currently used throughout the business world such as AI chatbots and AI-powered inventory management. If an AI system uses EU data but does not fall within one of these categories, it would not be subject to the draft AI Act. The system would, however, be subject to the General Data Protection Regulation (GDPR).

By means of a press release dated December 09, 2023, the European Parliament and the Council announced that they had reached a provisional agreement on the AI Act. The draft act aims to ensure that the use of AI systems in the EU is safe and aligned with fundamental rights and values, and to provide an opportunity for further innovation and investment in the field of AI in the EU.[iii]

United Kingdom

In March 2023, the United Kingdom (UK) government published a policy paper titled ‘A pro-innovation approach to AI regulation’.[iv]

This policy paper is supported by the following set of principles:

(a) Context-specific: Rules or risk levels should not be assigned to entire sectors or technologies. Instead, these will be regulated based on the outcomes AI is likely to generate in particular applications.

(b) Pro-innovation and risk-based: Regulators should focus on genuine and identified risks rather than hypothetical or minor ones associated with AI.

(c) Cross-sectoral principles: To guide the responsible development and use of AI in all sectors of the economy – (i) Safety, security and robustness; (ii) Appropriate transparency and explainability; (iii) Fairness; (iv) Accountability and governance; (v) Contestability and redress.

(d) Proportionate and adaptable: These principles will be issued on a non-statutory basis and will be interpreted and implemented in practice by existing regulators such as the Office of Communications (Ofcom) or the Competition and Markets Authority.

The UK House of Commons Science, Innovation and Technology Committee published an interim report on August 31, 2023, examining different approaches to regulating AI in the UK. The report recommended that the government introduce a ‘tightly focused AI Bill’ in the next parliamentary session to position the UK as a global leader in AI governance.

Australia

Australia’s AI Ethics Framework (‘Ethics Principles’) was published in November 2019.[v] It sets out the following eight principles:

(a) Human, societal and environmental well-being

(b) Human-centered values

(c) Fairness

(d) Privacy protection and security

(e) Reliability and safety

(f) Transparency and explainability

(g) Contestability

(h) Accountability[vi]

The principles are voluntary. They were designed to prompt organizations to consider the possible and actual impacts of using AI-enabled systems. The principles should be applied wherever an AI system is used to make decisions or otherwise has a significant impact, whether positive or negative, on people, the environment, or society. Where an AI system does not involve or affect human beings, the principles may not need to be considered.

United States of America

In April 2020, the Federal Trade Commission (FTC) published five principles that companies should follow when using AI and algorithms[vii]:

(a) be transparent with consumers about their interaction with AI tools

(b) clearly explain decisions that result from the AI

(c) ensure that decisions are fair

(d) ensure that the data and models being used are robust and empirically sound

(e) hold themselves accountable for compliance, ethics, fairness and non-discrimination[viii]

The government has also been working on AI governance: in October 2022, the White House published the Blueprint for an AI Bill of Rights, providing a roadmap for the effective and responsible use of AI (‘AI Bill of Rights’). The AI Bill of Rights highlighted five basic principles to guide the development of AI. These principles were:

(i) protecting users of AI from unsafe and ineffective systems

(ii) protecting against algorithmic discrimination

(iii) protecting against abuse of personal data

(iv) providing notice and explanation of automated outcomes

(v) providing human alternatives and fallback mechanisms

In April 2023, the Department of Commerce invited public comments on regulating AI systems, and in October 2023, the President issued an executive order on AI to make it safer, more secure and trustworthy. The order put forth the following guiding principles for regulating AI:

(i) ensuring safe and secure AI

(ii) promoting innovation and competition in relation to AI

(iii) supporting workers

(iv) advancing equity and civil rights

(v) protecting the users of AI

(vi) protecting privacy

(vii) strengthening American leadership

(viii) advancing the government’s use of AI systems[ix]

OECD

The Organisation for Economic Co-operation and Development (OECD) adopted the OECD AI Principles in May 2019. These principles, endorsed by 42 countries, are aimed at promoting the use of AI that is innovative, trustworthy and respectful of human rights and democratic values.[x] They are:

(a) Inclusive growth, sustainable development and well-being

(b) Human-centered values and fairness

(c) Transparency and explainability

(d) Robustness, security and safety

(e) Accountability

The OECD has further recommended national policies and international co-operation for trustworthy AI, with special attention to small and medium-sized enterprises. These recommendations include:

(i) investing in AI research and development

(ii) fostering a digital ecosystem for AI

(iii) shaping an enabling policy environment for AI

(iv) building human capacity and preparing for labor market transformation

(v) pursuing international co-operation for trustworthy AI

India

NITI Aayog (National Institution for Transforming India)

In June 2018, NITI Aayog released a discussion paper on the National Strategy for Artificial Intelligence (NSAI).[xi] The strategy document coined the term ‘AI for All’ and was intended to be the governing benchmark for the development and deployment of AI in India. To promote both the development and the adoption of AI, the discussion paper made broad recommendations on supporting and nurturing the AI ecosystem in India under four heads:

(a) promotion of research

(b) skilling and reskilling of the workforce

(c) facilitating the adoption of AI solutions

(d) the development of guidelines for ‘Responsible AI’.

While underlining the role of the private sector, NITI Aayog identified a few priority sectors, such as healthcare, agriculture, education, smart cities and smart mobility, in which to encourage AI deployment.

In February 2021[xii] and August 2021[xiii], NITI Aayog released a two-part approach paper on ‘Principles of Responsible AI (RAI)’, identifying the principles for the responsible development and deployment of AI in India and setting out enforcement mechanisms for operationalizing these principles. These principles are:

  • safety and reliability;
  • inclusivity and non-discrimination;
  • equality;
  • privacy and security;
  • transparency;
  • accountability; and
  • protection and reinforcement of positive human values.

In the context of regulation, the papers recommend a risk-based mechanism for regulating AI.

Under this approach, regulation would be proportionate to the risk of harm that could probably result from an AI system. The papers also advocated, among other measures, for sandboxing and controlled deployments, as well as the adoption of specific policy interventions to counter the risks associated with AI. While the respective sectoral regulators are expected to oversee AI developments in their particular fields, in cases where the risk is comparatively low, the government may prefer that market players self-regulate the associated risks.

Digital Personal Data Protection Act 2023

Currently, personal data is governed under the Information Technology (IT) Act, 2000 and the rules framed thereunder. However, the Indian government has passed a standalone data protection law, the Digital Personal Data Protection Act, 2023 (‘Act’), which will soon come into force in India. The Act will largely apply to all AI developers involved in the development and facilitation of AI technology in India.

Under the Act, the personal data of an individual may only be processed for a lawful purpose for which the individual has given consent through an affirmative action. Any person who, alone or in conjunction with other persons, determines the purpose and means of processing personal data (a ‘data fiduciary’) will be obligated to maintain the accuracy and security of that data. The Act also grants individuals certain rights, including the right to obtain information, to seek correction or erasure of their data, and to seek grievance redressal.

Other Steps Taken in India

Currently, India does not have an overarching regulatory framework for the use of AI systems. However, certain sector-specific frameworks have been identified for the development and use of AI. In finance, the Securities and Exchange Board of India (‘SEBI’) issued circulars on reporting requirements for AI applications and systems offered and used: in January 2019 to stockbrokers, depository participants, recognized stock exchanges and depositories, and in May 2019 to all mutual funds (MFs), asset management companies (AMCs), trustee companies, boards of trustees of mutual funds and the Association of Mutual Funds in India (AMFI). The reporting is intended to create an inventory of AI systems in the market and to guide future policies.[xiv]

The Indian Council of Medical Research (ICMR) released “Ethical Guidelines for AI in Biomedical Research and Healthcare” in June 2023.[xv] The purpose of the guidelines is to provide an ethics framework that can assist in developing, deploying and adopting AI-based solutions for biomedical research and healthcare delivery. The guidelines are intended to foster trust and collaboration among the stakeholders involved in the development and deployment of AI in biomedical research and healthcare.

In June 2023, NASSCOM published guidelines for the responsible implementation of generative AI (“Generative AI”), to ensure its responsible adoption.[xvi] Generative AI is a type of AI technology that can produce varieties of content, including text, audio, imagery and synthetic data. The guidelines primarily center on the research, development, and utilization of Generative AI, with the main objective of encouraging and supporting the responsible advancement and application of Generative AI solutions by various stakeholders.

The Indian government has recently proposed enacting the Digital India Act (“DIA”) to provide a comprehensive legal framework for India’s evolving digital ecosystem. The Ministry of Electronics and Information Technology (“MEITY”) held consultations with different stakeholders to discuss the essential features and legal framework of the DIA. According to these consultations, the core constituents of the DIA will be online safety, trust and accountability, an open internet, and the regulation of new-age technologies such as artificial intelligence and blockchain.

It is speculated that governance of AI under the DIA may include: (a) safeguarding innovation to enable emerging technologies such as AI/ML, Web 3.0, autonomous systems/robotics, IoT/distributed ledger/blockchain, quantum computing, virtual reality/augmented reality, real-time language translators and natural-language processing, among other things; (b) defining and regulating high-risk AI systems through a legal and institutional quality-testing framework, covering regulatory models, algorithmic accountability, zero-day threat and vulnerability assessment, and the examination of AI-based ad-targeting and content moderation; and (c) accountability for upholding the constitutional rights of citizens, the ethical use of AI-based tools to protect the rights and choices of users, and the provision of deterrent, effective, proportionate and dissuasive penalties.

Bletchley Declaration (AI Safety Summit)

On November 01, 2023, at the United Kingdom’s AI Safety Summit, India, China, the United Kingdom, the United States of America and 24 other nations, along with the European Union, signed a declaration (‘Bletchley Declaration’) recognizing risks associated with AI that could be termed ‘catastrophic’.[xvii] The signatories of the Bletchley Declaration resolved to ‘work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe’. The declaration focuses on tackling the risks of frontier AI by working to “identify AI safety risks of shared concern, building a shared scientific and evidence-based understanding,” and “building respective risk-based policies across countries to ensure safety.”

CONCLUSION

The global framework of AI governance is complex and constantly evolving. The risks involved in AI usage are multifaceted, and recognizing these challenges and risks is crucial to formulating initiatives that harness the potential benefits of AI. India has rapidly emerged as a global player in regulating AI and machine learning, and the upcoming DIA is expected to put forth more stringent recommendations and guidelines for its governance.

In totality, AI governance is a pressing concern for every jurisdiction and will therefore require cooperation and collaboration among global leaders. The journey towards a more specific framework is far from over; however, through research, collaborative efforts and mutual dialogue, we can steer AI towards the potential benefits it entails.

Author: Hemant Srivastava

Publication Date: January 2, 2024

Endnotes

[i] Telecom Regulatory Authority of India – Consultation Paper, 2022 – https://www.trai.gov.in/sites/default/files/CP_05082022.pdf

[ii] Regulatory Framework Proposed on Artificial Intelligence – https://digital-strategy.ec.europa.eu/en/node/9745/printable/pdf

[iii] Artificial Intelligence Act: Council and Parliament strike a deal on the first rules of AI in the world – https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/

[iv] Department for Science, Innovation and Technology, United Kingdom – https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1176103/a-pro-innovation-approach-to-ai-regulation-amended-web-ready.pdf

[v] Australia’s AI Ethics Principles – https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles

[vi] Ibid

[vii] Using Artificial Intelligence and Algorithms – Federal Trade Commission – https://www.ftc.gov/business-guidance/blog/2020/04/using-artificial-intelligence-and-algorithms

[viii] Ibid

[ix] Executive Order on Safe, Secure and Trustworthy Development and Use of Artificial Intelligence – https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

[x] OECD AI Principles Overview – https://oecd.ai/en/ai-principles

[xi] NITI Aayog’s National Strategy for Artificial Intelligence – Discussion Paper – https://indiaai.gov.in/documents/pdf/NationalStrategy-for-AI-Discussion-Paper.pdf

[xii] Responsible AI – Part 1 – https://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf

[xiii] Responsible AI – Part 2 – https://www.niti.gov.in/sites/default/files/2021-08/Part2-Responsible-AI-12082021.pdf

[xiv] Securities and Exchange Board of India – Reporting for AI and ML applications and systems offered and used by market intermediaries – https://www.sebi.gov.in/legal/circulars/jan-2019/reporting-for-artificial-intelligence-ai-and-machine-learning-ml-applications-and-systems-offered-and-used-by-market-intermediaries_41546.html

[xv] ICMR – Ethical guidelines for applications of artificial intelligence in biomedical research and healthcare – https://main.icmr.nic.in/sites/default/files/upload_documents/Ethical_Guidelines_AI_Healthcare_2023.pdf

[xvi] NASSCOM – Responsible AI- Guidelines for generative AI – https://www.nasscom.in/ai/img/GenAI-Guidelines-June2023.pdf

[xvii] Policy Paper – Bletchley Declaration by countries – https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
