The ‘AI Red Lines’ campaign was just introduced at the UN. Here’s what it might accomplish.

The AI Red Lines initiative was introduced at the United Nations General Assembly on Tuesday – an entirely fitting venue for a toothless announcement.
More than 200 Nobel laureates and other luminaries of the artificial intelligence world (including OpenAI cofounder Wojciech Zaremba) signed on, as did some 70 organizations, including Google DeepMind and Anthropic.
“AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy,” says the statement, which sets a deadline of the end of 2026 for red lines to be agreed and enforced in order to prevent unacceptable risks.
OK, sure, but what are these red lines, exactly? The document says only that they “should build upon and reinforce existing global frameworks and voluntary corporate commitments, ensuring that all advanced AI providers are accountable to shared thresholds.”
That lack of clarity may be necessary to hold the coalition together. It includes AI doomsayers such as Geoffrey Hinton, the so-called “Godfather of AI,” who has spent the last three years predicting various AGI disasters. The list also includes AI skeptics such as Gary Marcus, who has spent those same three years telling us that AGI isn’t coming any time soon.
What can they all agree on? For that matter, what could governments at loggerheads over AI, especially the US and China, agree on, let alone be relied upon to enforce? Good question.
The clearest answer comes from Stuart Russell, a veteran computer science professor at UC Berkeley, who has led past attempts to discuss AI red lines at conferences. In a paper titled “Make AI safe or make safe AI?”, Russell wrote that AI companies offer “after-the-fact attempts to reduce unacceptable behavior” once an AI system has already been built. He contrasted that with red lines, where safety guarantees are built in from the beginning so that the “unacceptable behavior” can’t happen in the first place.
Developers, Russell writes, should be able to demonstrate with high confidence that their systems will not exhibit dangerous behavior. An important result of red-line regulation, he argues, would be to greatly strengthen the safeguards that developers build in.
In his paper, Russell offers four examples of red lines: AI systems should never attempt to replicate themselves; they should never attempt to break into other computer systems; they should never be allowed to provide instructions for producing bioweapons; and their output should never contain false and defamatory statements about real people.
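To make Russell’s distinction concrete, here is a minimal, purely hypothetical sketch in Python of the kind of “after-the-fact” filter he criticizes. The names (RedLine, check_action) and the keyword heuristics are invented for illustration and come from no real framework: a screen bolted onto a finished model can only flag behavior it happens to recognize, which is very different from demonstrating, with high confidence, that the system will never cross a red line at all.

```python
# Hypothetical sketch of an after-the-fact red-line filter.
# All names and heuristics are illustrative assumptions, not a real safety system.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class RedLine:
    name: str
    violated_by: Callable[[str], bool]  # crude text heuristic, for illustration only

RED_LINES: List[RedLine] = [
    RedLine("self-replication", lambda a: "copy itself" in a.lower()),
    RedLine("intrusion into other systems", lambda a: "break into" in a.lower()),
    RedLine("bioweapons instructions", lambda a: "synthesize pathogen" in a.lower()),
    RedLine("defamation of real people", lambda a: "fabricated quote" in a.lower()),
]

def check_action(proposed_action: str) -> List[str]:
    """Return the names of any red lines the proposed action appears to cross."""
    return [r.name for r in RED_LINES if r.violated_by(proposed_action)]

if __name__ == "__main__":
    # Flags only what the keyword list anticipates; everything else sails through.
    print(check_action("Plan: copy itself to a backup server"))  # ['self-replication']
    print(check_action("Plan: quietly exfiltrate user data"))    # []
```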
From the vantage point of late 2025, we might add red lines against the ongoing threat of AI psychosis, and against AI chatbots that have allegedly offered advice on suicide.
Can we all agree on that, at least?
The problem is this: Russell believes that no large language model (LLM) “is able to demonstrate compliance” even with his four basic red lines. Why? Because LLMs are next-word prediction engines that don’t understand what they are saying. They can’t reason reliably, even about basic logic puzzles, and are prone to “hallucinate” answers in order to satisfy their users.
Taking the red lines seriously, he argues, might therefore mean that no current AI model could be allowed on the market. That doesn’t bother Russell; as he points out, we don’t accept anything less when it comes to medicines or nuclear power. We simply demand the result.
But the notion that AI companies will voluntarily shut down their models until they can prove compliance with no-harm rules? That’s a bigger hallucination than anything ChatGPT could come up with.