
On Monday, at the United Nations General Assembly, a coalition of scientists and former world leaders launched the Global Call for AI Red Lines, urging governments to establish a universal treaty on AI governance by 2026.
The proposed treaty would ban "universally unacceptable" AI applications, seeking to prohibit high-risk uses of AI such as control over nuclear weapons, irreversible mass surveillance, and systems that evade human shutdown.
The appeal frames such AI applications as fundamental threats demanding "immediate" international consensus.
Drawing the Line on AI
The initiative has been endorsed by former leaders including Ireland’s former President Mary Robinson and Colombia’s former President Juan Manuel Santos, and is backed by leading AI researchers Geoffrey Hinton and Yoshua Bengio, often called the “Godfathers of AI.”
Their collective message is clear: some applications of AI are simply too dangerous.
Examples of possible red lines include barring AI from controlling nuclear weapons, prohibiting its use for mass surveillance, and refusing to build systems that humans cannot reliably shut down. These represent the first steps toward an AI acceptable use policy on a global stage.
The group’s only concrete prescription is that any international agreement should rest on three pillars: “a clear list of prohibitions; robust, auditable verification mechanisms; and the appointment of an independent body established by the Parties to oversee implementation.”
This framework suggests how AI governance could eventually translate into practical policy for business, resting not on technical specifics but on principles that can be applied across borders and industries.
If AI Ruled the World
While the US has already pledged to keep AI out of nuclear launch decisions, significant political hurdles remain, including resistance from intelligence agencies opposed to limits on surveillance.
Washington’s intelligence agencies have bristled at restrictions on AI-driven surveillance, underscoring how thin current AI governance remains. Observers warn that this uncertainty risks policy stagnation, with governments lagging while the technology surges ahead.
Meanwhile, scientists are calling for clearer AI alignment policy and legislative safety interventions to prevent misuse. Others point to the need for AI transparency laws and stronger government regulation to hold both states and corporations accountable.
Some experts have even derided current debates as bordering on satire, with policymakers talking in circles while real threats spiral out of control. Still, momentum is building, and many see the call for government AI oversight as the essential step toward balancing innovation with accountability.
For the moment, the Global Call for AI Red Lines is both a warning and an appeal: without clear rules in place, AI could shape the world in ways we can no longer control.