
Google Policy Agenda Reveals AI Regulation Wishlist

Google's AI policy agenda largely ignores open source AI and focuses on limiting restraints on innovation


Google published an AI Policy Agenda paper that outlines a vision for responsible deployment of AI and suggestions for how governments should regulate and encourage the industry.

Google AI Policy Agenda

Google announced the publication of an AI policy agenda with suggestions for responsible AI development and regulations.

The paper notes that government AI policies are independently forming around the world and calls for a cohesive AI agenda that strikes a balance between protecting against harmful outcomes while getting out of the way of innovation.

Google writes:

“Getting AI innovation right requires a policy framework that ensures accountability and enables trust.

We need a holistic AI strategy focused on:

(1) unlocking opportunity through innovation and inclusive economic growth;

(2) ensuring responsibility and enabling trust; and

(3) protecting global security.

A cohesive AI agenda needs to advance all three goals — not any one at the expense of the others.”

Google’s AI policy agenda has three core objectives:

  1. Opportunity
  2. Responsibility
  3. Security

Opportunity

This part of the agenda asks governments to encourage AI development by:

  • Investing in research and development
  • Creating a frictionless legal environment that unfetters AI development
  • Planning educational support to train an AI-ready workforce

In short, the agenda is asking governments to get out of the way and get behind AI to help advance technology.

The policy agenda observes:

“Countries have historically excelled when they maximize access to technology and leverage it to accomplish major public objectives, rather than trying to limit technological advancement.”

Responsibility

Google’s policy agenda argues that responsible deployment of AI will depend on a mixture of government laws, corporate self-regulation and input from non-governmental organizations.

The policy agenda recommends:

“Some challenges can be addressed through regulation, ensuring that AI technologies are developed and deployed in line with responsible industry practices and international standards.

Others will require fundamental research to better understand AI’s benefits and risks, and how to manage them, and developing and deploying new technical innovations in areas like interpretability and watermarking.

And others may require new organizations and institutions.”

The agenda also recommends:

“Encourage adoption of common approaches to AI regulation and governance, as well as a common lexicon, based on the work of the OECD.”

What Is The OECD?

The OECD referenced here is the OECD.AI Policy Observatory, which is supported by corporate and government partners.

The OECD government stakeholders include the US State Department and the US Commerce Department.

The corporate stakeholders include organizations like the Patrick J. McGovern Foundation, whose leadership team is stacked with Silicon Valley investors and technology executives who have a self-interest in how technology is regulated.

Google Advocates Less Corporate Regulation

Google’s policy recommendation on regulation is that less regulation is better and that corporate transparency could hinder innovation.

It recommends:

“Focusing regulations on the highest-risk applications can also deter innovation in the highest-value applications where AI can offer the most significant benefits.

Transparency, which can support accountability and equity, can come at a cost in accuracy, security, and privacy.

Democracies need to carefully assess how to strike the right balances.”

Then later it recommends taking efficiency and productivity into consideration:

“Require regulatory agencies to consider trade-offs between different policy objectives, including efficiency and productivity enhancement, transparency, fairness, privacy, security, and resilience.”

There has always been, and will always be, a tug of war between corporate entities struggling against oversight and government regulators seeking to protect the public.

AI can solve humanity’s toughest problems and provide unprecedented benefits. Google is right that a balance should be found between the interests of the public and corporations.

Sensible Recommendations

The document contains sensible recommendations, such as suggesting that existing regulatory agencies develop guidelines specific to AI and to consider adopting the new ISO standards currently under development (such as ISO 42001).

The policy agenda recommends:

“a) Direct sectoral regulators to update existing oversight and enforcement regimes to apply to AI systems, including on how existing authorities apply to the use of AI, and how to demonstrate compliance of an AI system with existing regulations using international consensus multistakeholder standards like the ISO 42001 series.

b) Instruct regulatory agencies to issue regular reports identifying capacity gaps that make it difficult both for covered entities to comply with regulations and for regulators to conduct effective oversight.”

In a way, those recommendations state the obvious; it’s a given that agencies will develop guidelines so that regulators know how to regulate.

Tucked away in that statement is the recommendation of ISO 42001 as a model of what AI standards should look like.

It should be noted that the ISO 42001 standard is developed by the ISO/IEC committee for Artificial Intelligence, which is chaired by a twenty-year Silicon Valley technology executive, alongside others from the technology industry.

AI and Security

This is the part that presents a real danger: the malicious use of AI to create disinformation and misinformation, as well as cyber-based harms.

Google outlines challenges:

“Our challenge is to maximize the potential benefits of AI for global security and stability while preventing threat actors from exploiting this technology for malicious purposes.”

And then offers a solution:

“Governments must simultaneously invest in R&D and accelerate public and private AI adoption while controlling the proliferation of tools that could be abused by malicious actors.”

Among the recommendations for governments to combat AI-based threats:

  • Develop ways to identify and prevent election interference
  • Share information about security vulnerabilities
  • Develop an international trade control framework for dealing with entities engaging in research and development of AI that threatens global security.

Reduce Bureaucracy and Increase Government Adoption of AI

The paper next advocates streamlining government adoption of AI, including more investment in it.

“Reform government acquisition policies to take advantage of and foster world-leading AI…

Examine institutional and bureaucratic barriers that prevent governments from breaking down data silos and adopt best-in-class data governance to harness the full power of AI.

Capitalize on data insights through human-machine teaming, building nimble teams with the skills to quickly build/adapt/leverage AI systems which no longer require computer science degrees…”

Google’s AI Policy Agenda

The policy agenda provides thoughtful suggestions for governments around the world to consider when formulating regulations surrounding the use of AI.

AI is capable of many positive breakthroughs in science and medicine, breakthroughs that can provide solutions to climate change, cure diseases and extend human life.

In a way it’s a shame that the first AI products released to the world are the comparatively trivial ChatGPT and DALL-E applications that do very little to benefit humanity.

Governments are trying to understand AI and how to regulate it as these technologies are adopted around the world.

Curiously, open source AI, arguably the most consequential form of the technology, is mentioned only once.

The only context in which open source is addressed is in recommendations for dealing with misuse of AI:

“Clarify potential liability for misuse/abuse of both general-purpose and specialized AI systems (including open-source systems, as appropriate) by various participants — researchers and authors, creators, implementers, and end users.”

Given reports that Google fears it is already being outcompeted by open source AI, it is curious that open source AI is mentioned only in the context of misuse of the technology.

Google’s AI Policy Agenda reflects legitimate concerns about over-regulation and inconsistent rules being imposed around the world.

But the organizations the policy agenda cites as helping develop industry standards and regulations are stacked with Silicon Valley insiders. This raises questions about whose interests the standards and regulations reflect.

The policy agenda successfully communicates the need and urgency for developing meaningful and fair regulations that prevent harmful outcomes while allowing beneficial innovation to move forward.

Read Google’s article about the policy agenda:

A policy agenda for responsible AI progress: Opportunity, Responsibility, Security

Read the AI policy agenda itself (PDF)

A Policy Agenda for Responsible Progress in Artificial Intelligence

Featured image by Shutterstock/Shaheerrr

SEJ Staff: Roger Montti, Owner at Martinibuster.com