AI: A New Frontier with Amplified Risks

Introduction

Artificial Intelligence (AI) is having its moment. In mid-2023, Gartner placed generative AI at the “peak of inflated expectations” on its signature “hype cycle” and predicted that the transformational benefit phase would begin in the next two to five years.[1] Indeed, expectations for the benefits of AI are very high, but so are concerns regarding AI-related risks.

Ever-increasing access to large data sets, faster computing power, and the availability of deep learning frameworks are accelerating the complexity and maturity of AI. Consequently, it is only a matter of time before we transition from weak or narrow AI tools to general or strong AI systems that apply human-like reasoning, logic, and problem-solving. Some fear the machines will ultimately surpass humans. Current and future AI maturity amplifies many ethical and legal risks, leading technologists to focus less on the question of "can we" implement AI and more on the existential question of "should we".

This article provides an overview of several benefits and risks of AI, discusses how nations are beginning to regulate it, and recommends practical steps businesses can take now to successfully ride this next wave of technological progress, which will likely dominate the next decade and beyond.

The Strategic Impact of AI

Global innovation and application of AI systems are poised to have a critical impact on governments, global industries, and every human being for years to come. These impacts will reshape global and domestic commerce, politics and international relations, and the environment.

Human resources may be the most important asset to a nation’s economy, but data and the ability to use and reuse data are now a close second. For the next decade or two, innovation in AI will likely be a key characteristic that distinguishes prosperous, safe, and sustainable nations and economies from those that are poorly resourced, dangerous, and unsustainable. AI is still in its infancy, but it is growing up quickly. From a strategic perspective, the time to embrace AI is now; not next year, next month, or even next week.

[Image: Woman briefing a medical team with a laptop]

AI is already enhancing and speeding up medical discovery.

The Benefits of AI

The benefits of AI systems are seemingly endless. Creative entrepreneurs seem to offer new AI-driven products and solutions almost daily. For brevity, here are five examples of how AI systems can improve our lives:

1.    Enhanced Efficiency and Automation: AI excels at automating repetitive tasks, enabling organizations to streamline operations and reduce manual labor. This boosts efficiency and allows humans to focus on more complex, creative tasks requiring emotional intelligence, reasoning, and judgment. In the automotive industry, AI-powered robots can perform tasks such as welding, painting, and mounting parts onto vehicles. These AI-informed robots not only work 24/7 without breaks, but they can also detect and adjust to variances in component shape or placement with a precision beyond human capability, ensuring the highest quality standards are met.

2.    Improved Decision Making: AI systems can analyze large datasets far more quickly and accurately than humans. By leveraging data analytics and machine learning (ML), AI can detect insights and patterns that might not be apparent to human analysts, leading to more informed decision-making processes. This capability is particularly beneficial in fields like finance, healthcare, and marketing, where precise, data-driven decisions are critical.

3.    Healthcare Innovation: In the field of vaccine research, AI systems can optimize and rapidly accelerate vaccine formulation. AI can automate the process of screening thousands of potential vaccine candidates. By predicting how different viral proteins will interact with human cells and the immune system, AI models can identify which components of the virus are most likely to produce a strong immune response. This process, which traditionally could take months or years, can be reduced to a matter of days or weeks with AI, allowing for quicker initiation of clinical trials. Moreover, AI can also predict how various adjuvants (substances that enhance the body’s immune response to a vaccine) will interact with the target antigen and the human immune system. This predictive capability seeks to identify the most effective and stable vaccine formulation faster than traditional manual, trial-and-error methods.

4.    Advancements in Education: AI can offer personalized learning experiences, adapting to the individual needs of each student. By analyzing how students interact with materials, AI systems can identify strengths and weaknesses, allowing for the customization of educational content to fit individual learning paces and styles. This can improve student outcomes and engagement, making education more accessible and effective.

5.    Environmental Conservation and Sustainability: AI can play a crucial role in addressing climate change and promoting sustainability. By analyzing environmental data, AI can help humans predict and mitigate natural disasters, optimize energy consumption, and manage waste. To support smart grids, AI can predict energy demand fluctuations and adjust the distribution of electricity accordingly. This not only ensures a stable energy supply but also maximizes the use of renewable energy sources, reducing our reliance on fossil fuels.

[Image: Woman holding blocks that spell "fake" or "fact"]

Deepfakes are already regulated under the EU AI Act, the world's first comprehensive AI legislation.

The Potential Risks of AI

Despite all the promising uses of AI, traditional risks are compounded by the complexity of this new technology. Many experts equate the introduction of generative AI to the dawning of the nuclear age. If we have learned anything from the frantic development of ever more powerful nuclear weapons from the mid-1940s through the 1960s, it is that nations and industry must proceed with sober prudence when advancing a novel technology that has the potential to affect so many lives.

In an open letter published on 22 March 2023, leading technologists called for a six-month pause in developing advanced AI tools more powerful than OpenAI’s GPT-4. In introducing the open letter, they posed the following practical, moral, ethical, and even existential questions:

  • Should we let machines flood our information channels with propaganda and untruth?

  • Should we automate away all the jobs, including the fulfilling ones?

  • Should we develop nonhuman minds that might eventually outnumber, outsmart, make obsolete, or replace us?

  • Should we risk the loss of control of our civilization?[2]

Collectively, the technology experts who signed the open letter urged that “such decisions must not be delegated to unelected tech leaders,”[3] and they provided several policy recommendations (see The Future of Regulation below).

On their face, some of these warnings may seem unfounded, overstated, or at least melodramatic. But many concerns about AI are both present and legitimate. Take the AI “black box” problem: essentially, a lack of transparency. Often, even the creators of advanced AI systems do not fully understand how a model arrives at its outputs.[4] This fact is disconcerting.

Turning to more concrete, definable threats, Gartner lists some of the top AI-related issues:

1.    Hallucinations, fabrications, and factual errors. These represent one of the most widespread issues with current generative AI chatbot technologies. Flawed training data can yield biased, inaccurate, or even malicious responses, particularly if the integrity of the training data is not properly maintained. Moreover, identifying these errors can be difficult, especially as chatbots become more convincing and our reliance on them grows.

2.    Deepfakes. Some may be harmless or amusing. But bad actors can use fake images, videos, and voices to spread misleading information, defame public figures, and fabricate statements that appear to be spoken by trusted officials and leaders.

3.    Data Privacy. Privacy issues arise primarily with the inputs and outputs of AI systems and large language models (LLMs). It is very easy to share personal data and other sensitive information by entering it into a generative AI chatbot or other LLM. Once entered, unfortunately, it is inordinately difficult to retrieve or delete. Additionally, there are significant ethical concerns regarding the lack of informed user consent when data are collected or scraped, the transparency of operations when data are processed in an AI model, and accountability once an output is produced, especially if the output affects personal freedom, opportunities, or reputation. Privacy law is not new to the use of AI, and organizations are already struggling to comply with existing privacy regulations. But as AI becomes more advanced and widespread, particularly in the absence of regulation, the privacy risks grow in step.

4.    Intellectual Property (IP) Infringements. Generative AI models are trained on enormous datasets that may include protected copyrights and other IP. Without appropriate transparency into how outputs are generated, the onus is on users to determine whether IP rights have been violated.

5.    Cybersecurity Risk. Social engineering and phishing attacks are far more realistic and sophisticated with the aid of AI. Hackers are also using AI to develop malicious code faster and more effectively than ever before. Businesses that use generative AI models are largely at the mercy of vendors to provide adequate security, which currently relies on vendor red-teaming in the absence of adequate audit tools.[5]

With these benefits and risks in mind, what is being done to regulate AI?

AI Rules: Churn and the Challenge of “Future Proofing”

The primary challenge for national legislatures and industry is to balance the need for innovation with the need for sound regulatory guardrails that mitigate AI-related risks. Around the world, nations are pondering various forms of AI legislation, guidance, and policy to keep up with the velocity and variety of proliferating AI-powered technologies. Industry leaders are signing on to common frameworks or statements of principles.

Stated simply, the current state is one of churn.

The most significant challenge facing regulators is drafting rules that are “future proof”, meaning the rules are specific enough to provide good guidance to developers and to mitigate risks to the public, but general enough to withstand the inevitable maturation, innovation, and growth of AI systems.

No standard approach for regulating AI has been embraced, but some common patterns can be observed.

National Approaches to AI

The current development of enforceable rules for AI can be analogized to the formation of international law; specifically, the development of “soft law” into “hard law.” Soft law is promulgated in the form of principles, guidelines, codes of conduct, and common frameworks that lack legally binding force. Over time, these concepts are codified as legally enforceable hard law in the form of international treaties, national statutes, and government regulations.

AI Strategies

To date, several countries have adopted national AI strategies:

  • Seeking to become a global leader in AI by 2030, the People’s Republic of China authored a New Generation Artificial Intelligence Development Plan[6] in 2017.

  • Canada was one of the first countries to launch a national AI strategy with the Pan-Canadian Artificial Intelligence Strategy.[7]

  • The UK’s approach to AI regulation is guided by two policy papers, the AI Sector Deal[8] and the National AI Strategy.[9]

AI Principles

Non-governmental bodies, such as the Organization for Economic Co-operation and Development (OECD), have articulated AI Principles.[10] The OECD AI Principles were reaffirmed by the G7 member economies at the 2023 Hiroshima Summit.

The EU has become a market leader in technology regulation.

The Brussels Effect 2.0 | The European Union (EU) AI Act

The EU established itself as the world leader in data protection regulation (data privacy in US parlance) when the General Data Protection Regulation (GDPR) became binding in May 2018. Some parts of the GDPR apply to AI systems. For example, Article 22 concerns automated individual decision-making, including profiling, which is a core feature of AI’s predictive analytical capability.

The GDPR harmonizes privacy law across all EU member states, applies extraterritorially to businesses that collect data on EU data subjects, and sets a gold standard that many nations have copied (sometimes almost verbatim). This influential, first-mover approach has been dubbed “the Brussels Effect”.

In 2024, the EU again moved first with the EU AI Act, the world’s first comprehensive AI legislation. The AI Act adopts a risk-based approach, bans certain uses of AI, provides rules for general purpose AI systems, creates a new regulatory scheme for oversight, and imposes costly penalties for violations. The AI Act can be summarized as follows:

  • Risk-based approach to AI:

    • Unacceptable risk: a ban on certain types of AI systems (e.g., social scoring and certain AI-fueled real-time biometric identification systems).

    • High risk: the EU will maintain a list of high-risk AI systems.

    • Limited risk and minimal risk: certain transparency obligations apply.

  • Specific obligations for providers and users of high-risk AI applications.

  • Conformity assessments are required before AI systems are put into service or placed on the market.

  • Specific requirements for general purpose AI (GPAI) systems that pose systemic risk.

  • Penalties and enforcement against companies that harm consumers (maximum fines of up to 7% of global annual turnover).

  • Governance structures at EU and Member State levels.

United States | The NIST AI Risk Management Framework

As with data protection and privacy law, the US has adopted a more decentralized approach to AI regulation, emphasizing innovation and the competitive advantages of AI. The US National AI Initiative Act[11] aims to support AI research and policy development across various sectors.

While there is no comprehensive federal AI regulation, several guidelines and principles have been issued by different agencies, focusing on issues like privacy, bias, and national security. The US government also encourages self-regulation and the development of industry standards.

In January 2023, the National Institute of Standards and Technology (NIST) issued the AI Risk Management Framework (RMF), along with a companion AI RMF Playbook, explainer video, Roadmap, and Crosswalk. In the absence of comprehensive legislation, the AI RMF provides an approach developed by the private and public sectors in partnership to incorporate trustworthiness into the design, development, use, and evaluation of AI products, services, and systems.[12] As with the NIST Cybersecurity Framework and companion Privacy Framework, the AI RMF provides an approach involving a “core” and use case “profiles” to facilitate consistent compliance.

The Future of Regulation

Legislators and policymakers now have a growing list of national strategies, guidance frameworks, and even an example of comprehensive legislation to consider when forming national laws for AI. As they debate these rules, they may refer to the policy recommendations articulated by the authors of the open letter in 2023. Most of these recommendations appear in some form in the EU AI Act.

1.    Mandate robust third-party auditing and certification for specific AI systems.

2.    Regulate organizations’ access to computational power.

3.    Establish capable AI agencies at the national level.

4.    Establish liability for AI-caused harms.

5.    Introduce measures to prevent and track AI model leaks.

6.    Expand technical AI safety research funding.

7.    Develop standards for identifying and managing AI-generated content and recommendations.[13]

[Image: People around a conference table celebrating a successful transaction]

Businesses that figure out how to leverage AI now will be poised to exploit a significant business advantage for the next two decades and beyond.

What steps can businesses take?

It is easy to be overwhelmed by the pressure to start leveraging AI now, especially given the current grey area in AI-related law and regulation. This time can be viewed as either an insurmountable challenge or a golden opportunity.

Gartner proposes the following strategic planning assumptions for AI:

  • By 2026, organizations that operationalize AI transparency, trust and security will see their AI models achieve a 50% improvement in terms of adoption, business goals and user acceptance.

  • By 2026, enterprises that have adopted AI engineering practices to build and manage adaptive AI systems will outperform their peers in the number and time it takes to operationalize AI models by at least 25%.

  • By 2027, at least two vendors that provide AI risk management functionality will be acquired by enterprise risk management (ERM) vendors providing broader functionality.

  • By 2027, at least one global company will see its AI deployment banned by a regulator for noncompliance with data protection or AI governance legislation.[14]

So, what should you do? The following recommendations are a great place to start:

1.    Begin with the basics: data governance.

2.    Select an AI risk management framework and incorporate AI into existing GRC programs.

3.    Stay current: dedicate resources to get smart on best practices, tools, and vendors that help organizations leverage AI and mitigate risk.

4.    Apply near-term best practices.

5.    Keep tabs on the AI solutions market.

1.   Begin with the Basics: Data Governance

In the digital economy, data is the new oil.[15] The value and importance of data have never been greater.

So why have so many businesses not organized their data? There may be several reasons, but one answer is that data governance is hard, it costs money, it takes time, and it is not particularly fun to do. But organizations that invest in good data governance will generate more revenue, deliver higher-impact outcomes, and significantly improve ERM.

There is no commonly agreed upon definition of data governance, but fundamentally it consists of developing a centralized structure to identify data, measure risk, and devise strategies to protect the data from misuse. Organizations should consider the following four key components:

1.    Data Classification. This step begins with forming multidisciplinary teams (e.g., engineering, IT, legal, product management, security, HR, and others) to identify, locate, and rank all organizational data based on risk and usage so that it can be processed and protected appropriately. Organizations create a business data glossary that is shared, commonly referenced, and continually updated.

2.    Data Inventory. Data inventory leverages data classification to map all the data stored and processed in the organization’s systems. Machine-readable tags are assigned to all data based on classification. The outcome is that organizational data becomes searchable and, therefore, usable. Data can be mapped to associated risks (e.g., cybersecurity, privacy) and other compliance priorities (e.g., financial audits).

3.    Data Retention Policies. Data retention policies are crucial for managing an organization's data lifecycle, ensuring that data is retained only as long as needed for legal, regulatory, and business purposes. Organizations define clear retention periods based on legal requirements and business needs and establish regular audits to ensure compliance with the policy.

4.    Data Deletion. Data should be used only for its intended purpose. When data is no longer needed or the retention period expires, organizations should apply tools to securely delete or de-identify data. It is important to train employees on their roles in adhering to these policies and to update the policies as regulations and business needs evolve.

Data governance is a crucial fundamental step that public and private organizations should take not only to leverage their data to achieve a public interest or commercial strategy, but also to enable compliance with data protection, privacy, and unique industry regulations. Moreover, good data governance advances data quality, which in turn improves the performance of AI models. Data quality is truly critical. AI models deliver optimal value when the supporting data are skillfully transformed and labeled.  
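To make the four components above more concrete, here is a minimal Python sketch of how classification tags and retention rules might be represented in a data inventory. The classification labels, retention periods, and record fields are illustrative assumptions rather than a standard; a real program would derive them from the organization's business data glossary and applicable regulations.

    from dataclasses import dataclass
    from datetime import date, timedelta

    # Hypothetical classification levels and retention periods drawn from a
    # shared business data glossary (assumptions for illustration only).
    CLASSIFICATION_RETENTION = {
        "public": None,                      # no mandated deletion
        "internal": timedelta(days=365 * 3),
        "confidential": timedelta(days=365 * 7),
        "personal_data": timedelta(days=365 * 2),
    }

    @dataclass
    class DataAsset:
        name: str
        system: str            # where the data lives (supports the data inventory)
        classification: str    # machine-readable tag assigned during classification
        created: date

        def retention_expired(self, today: date) -> bool:
            """Return True when the asset has outlived its retention period."""
            period = CLASSIFICATION_RETENTION[self.classification]
            return period is not None and today > self.created + period

    # Example: flag assets that should be securely deleted or de-identified.
    inventory = [
        DataAsset("customer_emails", "crm", "personal_data", date(2021, 1, 15)),
        DataAsset("press_releases", "cms", "public", date(2019, 6, 1)),
    ]
    to_delete = [a.name for a in inventory if a.retention_expired(date.today())]
    print(to_delete)

Even a simple representation like this makes retention audits and deletion workflows scriptable, which is the practical payoff of machine-readable classification tags.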

2.   Select an AI risk management framework and incorporate AI into existing GRC programs.

Organizations should identify, shape, and use an AI risk management framework that not only supports the organization’s approach to AI regulatory compliance in its jurisdiction, but also complements its current approach to risk management.

For companies doing business in the US, the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) is an excellent starting point. The AI RMF 1.0, released in January 2023, is the product of years of cooperation among government, industry, non-profits, and academia to produce a voluntary resource “for organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and to promote the trustworthy and responsible development and use of AI systems.”[16]

The AI RMF was designed to be general enough to apply across multiple domains and use cases, and for organizations of all sizes and sectors, whether private or public. The AI RMF discusses various risk management challenges, including risk measurement, risk tolerance, risk prioritization, and risk integration and management.  Regarding future proofing, the AI RMF is intended to be practical and adaptive to the AI landscape as technologies mature.

Part I of the AI RMF focuses on how organizations can frame AI risks and outlines the characteristics of a trustworthy AI system:

  • Valid and reliable

  • Safe

  • Secure and resilient

  • Accountable and transparent

  • Explainable and interpretable

  • Privacy enhanced

  • Fair, with harmful bias managed

Part 2 of the AI RMF comprises the “Core”, a construct that will be familiar to anyone who has used the NIST Cybersecurity Framework or Privacy Framework. The Core consists of four functions that guide organizations to address AI system risks in practice:

1.    Govern: The Govern function ensures a culture of risk management is cultivated and present in the organization. The Govern function is continuous and applies to all stages of an organization’s AI risk management processes and procedures.

2.    Map: The Map function establishes the context to frame risks related to an AI system. Information gathered during the map function informs an organization’s decisions about whether to employ an AI system and how to identify potential risks. Mapping is foundational to supporting the measure and manage functions.

3.    Measure: The Measure function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and associated impacts.

4.    Manage: The Manage function allocates risk resources to mapped and measured risks regularly and as defined by the Govern function. The manage function includes plans to respond to, recover from, and communicate about negative incidents or events. Upon completing the Manage function, organizational plans for prioritizing risk and processes for monitoring improvement will be defined and implemented.
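To illustrate how the four Core functions might translate into day-to-day practice, the sketch below models a simple AI risk register: Map entries describe the system and risk in context, Measure entries score them, Manage entries record the chosen response and owner, and a Govern-level policy sets an escalation threshold. This is an illustration only; the fields, scoring scale, and threshold are assumptions and are not prescribed by NIST, whose framework is a process guide rather than a data format.

    from dataclasses import dataclass

    @dataclass
    class AIRisk:
        # MAP: frame the risk in its context
        system: str
        description: str
        # MEASURE: a simple qualitative assessment (1-5 scales assumed here)
        likelihood: int = 0
        impact: int = 0
        # MANAGE: the response chosen and its owner
        response: str = "undetermined"   # e.g., mitigate, transfer, accept, avoid
        owner: str = "unassigned"

        def score(self) -> int:
            return self.likelihood * self.impact

    # GOVERN: an organizational policy, e.g., any risk scoring 15+ needs executive review.
    REVIEW_THRESHOLD = 15

    register = [
        AIRisk("support-chatbot", "hallucinated policy answers",
               likelihood=4, impact=4, response="mitigate", owner="customer-ops"),
        AIRisk("resume-screener", "disparate impact on protected groups",
               likelihood=3, impact=5, response="mitigate", owner="hr-analytics"),
    ]
    escalate = [r.system for r in register if r.score() >= REVIEW_THRESHOLD]
    print(escalate)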

Adopting the approach set forth in the AI RMF will help an organization comply with future legal and regulatory obligations, including, for companies using AI in Europe or targeting EU consumers, the EU AI Act.

Additionally, frameworks like the NIST AI RMF should be tailored to complement existing GRC programs. For mature organizations (i.e., those with experience in data governance and statutory compliance with cybersecurity and privacy regulations) many of the processes and expertise may already reside within the organization, or they may complement existing services provided by vendors. For less mature organizations, following a widely used AI framework serves to guide and enhance the creation and maturity of the organization’s GRC program.

3.   Stay Current: dedicate resources to get smart on best practices, tools, and vendors that help organizations leverage AI and mitigate risk.

It is essential that organizations dedicate sufficient resources to build internal competencies in AI-related activities. This includes knowledge about how the organization leverages AI, including the development, training, testing, marketing, and deployment of AI-enhanced systems. Because AI technology is advancing so rapidly, organizations must also develop the ability to identify new trends, tools, and services with the goal of ensuring that the state-of-the-art and latest best practices are applied to their AI-related products.  

Organizations subject to the EU AI Act will be required to take measures to ensure a sufficient level of “AI literacy” exists among their own staff, as well as other people operating and using AI systems on their behalf.

Certification organizations, such as the International Association of Privacy Professionals, have developed new credentialing programs for AI governance professionals.[17] Such programs serve to establish and mature a community of professionals that collectively advances knowledge, learning, experience, and shared best practices. Hiring credentialed employees or consultants also represents an honest and reasonable effort by organizations to comply with their regulatory obligations.

In the meantime, organizations should consult reputable experts on how to immediately begin applying sound practices for leveraging AI systems.

4.   Near Term Best Practices.

Organizations that plan to integrate AI systems should consider the following near-term best practices.

  • Seek Professional Advice. The first near-term best practice is to seek advice from an AI governance professional who can help your organization document its AI use cases, identify applicable regulatory obligations, and design a strategy and policies to ensure the organization meets its GRC objectives when embracing AI systems and tools.

  • Update GRC Policies. As part of any GRC program, develop clear policies regarding the appropriate use of AI tools. Currently, anyone can access large language models (LLMs) like OpenAI’s ChatGPT and other generative AI tools, and there are no built-in restrictions on what information can be entered into them. Employees should be prohibited from submitting prompts that expose proprietary information, protected IP, classified or confidential information, or personally identifiable information (personal data).

  • Prompt Engineering. Define acceptable prompt engineering practices. Create a library of approved prompts and templates to ensure sensitive internal or personal data are never sent to third-party infrastructure (a minimal screening sketch follows this list). Appropriately engineered prompts may be stored as immutable assets, which can be reused, improved, shared, and even sold.

  • Cybersecurity controls. Leverage existing cybersecurity controls and dashboards to identify unsanctioned use of AI chatbots and similar tools.

  • Firewalls. Update firewalls to block enterprise user access to unauthorized tools, and leverage event management systems to monitor logs for violations. Use web gateways to monitor unauthorized API calls.
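As one illustration of the policies above, the sketch below screens a prompt for obviously sensitive content before it is sent to a third-party LLM. The patterns and the screen_prompt helper are hypothetical and deliberately simplistic; a production control would rely on the organization's data loss prevention tooling and its own classification tags rather than a handful of regular expressions.

    import re

    # Illustrative patterns only (assumptions for this sketch).
    SENSITIVE_PATTERNS = {
        "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "internal_marker": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
    }

    def screen_prompt(prompt: str) -> list[str]:
        """Return the names of any sensitive patterns found in a prompt."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]

    prompt = "Summarize this CONFIDENTIAL memo for jane.doe@example.com"
    violations = screen_prompt(prompt)
    if violations:
        print("Blocked: prompt contains " + ", ".join(violations))
    else:
        print("Prompt passed screening; forward to the approved LLM endpoint")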

5.   Keep Tabs on the AI Solutions Market.

Organizations that deploy AI systems must vigilantly monitor AI models and applications to ensure compliance, fairness, and trust, regardless of whether they are developed organically or are licensed from third parties. This is not an easy task.

For example, one acute challenge involves the handoff of AI models between data science, development and operations (DevOps), and other engineering teams. If sensitive data is inadvertently used, it can cause severe delays and trigger significant reengineering.

KitOps[18] is an open-source machine learning operations (MLOps) project that seeks to standardize the packaging, reproduction, deployment, and tracking of AI/ML models so they can be run anywhere, just like application code.

KitOps leverages industry standards (e.g., Open Container Initiative (OCI)-compliant images) to modularize the assets within an AI model development pipeline and create a trusted chain of custody. If one element of the chain is proven to be faulty (e.g., sensitive data is found in a dataset), that specific, immutable ModelKit can be removed and replaced with less disruption.

This immutable modularization improves workflow, collaboration, model traceability, and reproducibility. Furthermore, it supports a well-designed risk management program.
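The general idea behind this kind of modularization can be shown with a short sketch that content-addresses each pipeline asset and records the digests in a manifest. This is not KitOps' actual format or API; the asset names and contents are assumptions, and the point is simply how immutable digests let one faulty element be identified and swapped without disturbing the rest of the chain.

    import hashlib
    import json

    def digest(content: bytes) -> str:
        """Content-address a pipeline asset, much as OCI-style packaging does with layers."""
        return "sha256:" + hashlib.sha256(content).hexdigest()

    # Hypothetical assets; in practice these bytes would be read from the real
    # dataset, training code, and model weights.
    assets = {
        "training_data": b"...serialized training dataset...",
        "training_code": b"...contents of train.py...",
        "model_weights": b"...serialized model weights...",
    }

    # An immutable manifest: if any asset changes (e.g., a dataset is found to
    # contain sensitive data and must be replaced), its digest changes, and only
    # that element of the chain of custody needs to be swapped out.
    manifest = {name: digest(content) for name, content in assets.items()}
    print(json.dumps(manifest, indent=2, sort_keys=True))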

Projects like KitOps should be on the radar of any organization that leverages AI systems. Organizations with AI-literate staff should have the wherewithal to engage internal teams and external vendors about whether they use a similar trustworthy approach to AI model development.

[Image: Woman using an AI chatbot]

Today it's chatbots. Tomorrow, AI will be used to automate many more applications.

Conclusion: boldly and systematically explore an AI-enhanced future

AI systems are rapidly maturing and creating value. This is a unique moment in history when organizations can embrace a new technology that is anticipated to revolutionize the way businesses and governments deliver goods and services.

But the risks associated with AI are real. Significant effort has already been made to ensure the trustworthiness of AI systems. It is imperative that businesses incorporate sound data and AI governance into existing GRC programs, policies, and processes.

The time to act is now, but it is also prudent to consume this AI elephant one achievable bite at a time. Organizations must not forget the fundamentals. Data governance is foundational not only to leveraging AI, but also to complying with existing privacy and cybersecurity regulations.

With this in mind, it is helpful to consider the Japanese principle of Kaizen. Kaizen is an incremental, continuous improvement philosophy that involves buy-in and participation from the entire organization, from management to line workers. Organizations will not become fully literate experts in AI overnight. This will take time, but the time to act is now.

Some mistakes and failures are inevitable, but these are significantly mitigated by creating a solid foundation of good governance and establishing an organizational culture that values human impact, attention to detail, and a willingness to improve.

If you want more information about how to mature your data protection, privacy, and AI governance, risk, and compliance programs, please reach out for a free consultation. 1GDPA assists organizations that need professional advice on securing and leveraging their data in a responsible and legally compliant manner. 


###


Citations:

[1] Gartner Places Generative AI on the Peak of Inflated Expectations on the 2023 Hype Cycle for Emerging Technologies, 16 August 2023, available at: https://www.gartner.com/en/newsroom/press-releases/2023-08-16-gartner-places-generative-ai-on-the-peak-of-inflated-expectations-on-the-2023-hype-cycle-for-emerging-technologies#:~:text=Generative%20artificial%20intelligence%20(AI)%20is,within%20two%20to%20five%20years.

[2] Pause Giant AI Experiments: An Open Letter, Future of Life Institute, 22 March 2023, available at: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ and https://futureoflife.org/wp-content/uploads/2023/04/FLI_Policymaking_In_The_Pause.pdf.

[3] Ibid.

[4] Walmsley, J. Artificial intelligence and the value of transparency. AI & Soc 36, 585–595 (2021). https://doi.org/10.1007/s00146-020-01066-z; Levy, Steven, AI Is a Black Box. Anthropic Figured Out a Way to Look Inside, 21 May 2024, available at: https://www.wired.com/story/anthropic-black-box-ai-research-neurons-features/.

[5] Why Trust and Security are Essential for the Future of Generative AI, Gartner Q&A with Avivah Litan, 20 April 2023, available at: https://www.gartner.com/en/newsroom/press-releases/2023-04-20-why-trust-and-security-are-essential-for-the-future-of-generative-ai.

[6] A New Generation Artificial Intelligence Plan, available at: https://digichina.stanford.edu/work/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/.

[7] The Pan-Canadian AI Strategy, available at: https://cifar.ca/ai/.

[8] The UK AI Sector Deal, available at: https://www.gov.uk/government/publications/artificial-intelligence-sector-deal/ai-sector-deal.

[9] The UK National AI Strategy, available at: https://www.gov.uk/government/publications/national-ai-strategy.

[10] OECD AI Principles, available at: https://oecd.ai/en/ai-principles.

[11] National Artificial Intelligence Initiative Act of 2020, available at: https://www.congress.gov/bill/116th-congress/house-bill/6216.

[12] NIST AI Risk Management Framework, available at: https://www.nist.gov/itl/ai-risk-management-framework.

[13] Policymaking in the Pause, available at: https://futureoflife.org/wp-content/uploads/2023/04/FLI_Policymaking_In_The_Pause.pdf.

[14] What is the future of artificial intelligence and AI technologies?, available at: https://www.gartner.com/en/topics/artificial-intelligence.

[15] Data is the New Oil of the Digital Economy, Wired, available at: https://www.wired.com/insights/2014/07/data-new-oil-digital-economy/.

[16] NIST AI 100-1, AI Risk Management Framework, National Institute of Standards and Technology, p. 2, available at: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.

[17] Artificial Intelligence Governance Professional, IAPP Certification, available at: https://iapp.org/certify/aigp/.

[18] KitOps, available at: https://kitops.ml/?_from=button.
