G7 to agree on an AI code of conduct for businesses

The G7 is set to agree on a code of conduct for companies building advanced artificial intelligence (AI) systems, according to G7 documents.

The news comes ahead of the UK’s AI Safety Summit on 1 November, where global leaders and businesses will seek to get ahead of the fast-growing technology and reduce its risks and misuse.

In a paper reviewed by Reuters, the G7 said its 11-point code of conduct “aims to promote safe, secure and reliable AI globally and will provide voluntary guidance for the actions of organizations developing the most advanced AI systems.”

This includes state-of-the-art foundation models and generative AI systems, the document added.

The G7 urges companies to exercise caution in AI development and to implement measures to identify, assess and mitigate risks throughout the technology’s lifecycle.

Earlier this month, European Commission Vice President for Values and Transparency Vera Jourova said the new code of conduct would act as a bridge until regulation is properly implemented.


The EU is leading the way with its AI Act, which the European Parliament voted to advance in June 2023.

Michael Queenan, CEO and co-founder of UK-based data services integrator Nephos Technologies, believes the code of conduct will be difficult for the G7 to agree on and should focus on the areas where AI can add the greatest value.

“It will be difficult to get world leaders in the G7 to agree on a code of conduct when there is so much at stake here,” Queenan told Verdict.

He added: “Whoever gets there first has enormous power and economic opportunity. The more advanced AI capabilities a country has, the more it can advance its society, but at the same time, with power comes danger.”

Queenan said recent discussions around AI safety policies have focused on which models are safe to use. However, he believes they should instead address the areas where AI can add the most value.

“A code of conduct should be about how to protect people while doing this,” said Queenan. “Guardrails are not the same as safety, but they will ensure that those who use and design AI models do so responsibly.”

Laura Petrone, an analyst at research firm GlobalData, said countries such as Japan and the US have taken a hands-off approach to AI regulation.

Petrone believes it is “critical” for the G7 to include all these different views and approaches in a code of conduct.

“However, the goal of such an initiative should also be to frame the discussion around how best to regulate AI without stifling innovation,” Petrone added.

“Identifying the risks of AI alone is not enough, and the European Commission’s Vera Jourova is right to point out that this code should act as a bridge until regulation is in place,” she said.

White House to take action on AI

On Monday, US President Joe Biden announced that his administration would take action on AI. This follows the UK announcing the launch of the ‘world’s first’ AI Safety Institute ahead of hosting the AI Safety Summit on 1 November.

Biden’s executive order will seek to set parameters around emerging technologies, as the US has so far taken a relatively hands-off approach.

The order would require developers of AI systems that could pose a risk to US national security or public health to share the results of their safety tests before those systems can be released to the public.

The test results must be shared with the US government, in line with the Defense Production Act.

The new rules also call for the establishment of “best practices” to address the job and worker displacement that artificial intelligence could cause.

Biden’s new order goes further than the voluntary commitments made by OpenAI, Alphabet and Meta earlier this year, under which the leading AI companies agreed to watermark AI-generated content to quell fears of misinformation and abuse.

Bruce Reed, the White House deputy chief of staff, described the new order as “the strongest set of actions” any government has ever taken on AI regulation.

“This is the next step in an aggressive strategy to do everything possible on all fronts to harness the benefits of AI and reduce the risks,” he said.
