The Group of Seven (G7) industrial nations are set to agree a voluntary code of conduct on artificial intelligence (AI).
The G7 members have been working on a set of guidelines since May this year, aiming to address the risks posed by AI and mitigate its potential misuse.
The US, Canada, Britain, Italy, France, Germany, and Japan, which together form the G7, are working alongside the European Union in its efforts to regulate AI.
The road to regulation
According to a document released by the G7, the code “aims to promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems.”
As more powerful AI tools are released to businesses and the public alike, the risks of misuse and failure grow, and companies will have to mitigate them with stronger security procedures.
The European Union, meanwhile, is within sight of signing into law the world’s first comprehensive AI legislation, known as the AI Act. It takes a risk-based approach, safeguarding EU citizens through a set of regulations and obligations that businesses and organizations will have to follow depending on the level of risk their AI systems pose.
The G7 code of conduct will be agreed on Monday, just days ahead of the UK AI Safety Summit which aims to discuss the risks posed by frontier AI and begin a dialogue on establishing and supporting national and international frameworks of AI regulation.
There have been doubts about the feasibility of international AI regulation, and it remains uncertain whether China will attend the conference at Bletchley Park, but experts believe the summit is one small step on the road to greater regulation and cooperation.
Benedict Collins