
Character.ai: No more youth chats

Character.ai, the popular chatbot platform where users chat with a variety of AI characters, will no longer allow account holders under the age of 18 to have open-ended chats, the company announced on Wednesday. It will begin relying on age verification techniques to ensure that minors cannot open adult accounts.

The dramatic reversal comes just six weeks after Character.ai was sued in federal court by several parents of teenagers who died by suicide or allegedly experienced serious harm, including sexual abuse; the parents say their children’s use of the platform led to those harms. In October 2024, Megan Garcia filed a wrongful-death suit seeking to hold the company responsible for her son’s suicide, arguing that its product was defective.

Online safety advocates declared Character.ai unsafe for young people after testing the platform this spring and logging hundreds of harmful interactions, including violence and sexual exploitation.

After facing legal pressure last year, Character.ai introduced parental controls and content filtering in an effort to improve youth safety.

SEE ALSO:

Character.ai is not safe for teenagers, experts say

In an interview with Mashable, Character.ai’s CEO Karandeep Anand described the new policy as “bold” and denied that the lawsuits over open-ended chatbot conversations drove the decision.

Instead, Anand framed the decision as “the right thing to do” in light of broader unanswered questions about the long-term effects of chatbot engagement on young people. Anand pointed to OpenAI’s recent admission, after the suicide of a young user, that its safeguards can degrade over long conversations.

Anand cast Character.ai’s new policy as setting a standard: “I hope it puts everyone on a path where AI can continue to be safe for everyone.”

He added that the company’s decision will not change, regardless of pushback from users.

What does Character.ai look like for teens now?

In a blog post announcing the new policy, Character.ai apologized to its teen users.


“We do not take this step of removing open-ended Character chats lightly, but we do think it’s the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology,” the blog post said.

Currently, users aged 13 to 17 can exchange messages with chatbots on the platform. That feature will disappear by November 25. Until then, accounts registered to minors will face time limits starting at two hours per day. That limit will decrease as the transition away from open-ended chats approaches.

Character.ai users will see these notifications about the changes coming to the platform.
Credit: Courtesy of Character.ai

Although open-ended chats will disappear, teens’ chat histories with certain chatbots will remain intact. Anand said users can draw on those histories to generate short audio and video stories with their favorite chatbots. Over the next few months, Character.ai will also explore new features such as gaming. Anand believes the emphasis on “AI entertainment” without open-ended chat will satisfy creative teens’ interest in the platform.

“They will play, and they will be entertained,” Anand said.

He insisted that existing chat histories containing sensitive or prohibited content that may have previously slipped past filtering, such as violence or sex, would not find their way into new audio or video content.

A Character.ai spokesperson told Mashable that the company’s Trust and Safety team reviewed the report published in September by the online safety groups. The team concluded that some of the flagged conversations violated the platform’s content guidelines while others did not. It also attempted to replicate the report’s findings.

“Based on these results, we adjusted some of our classifiers, in line with our goal for users to have a safe and engaging experience on our platform,” the spokesperson said.

Character.ai will begin rolling out age verification soon. The system will take a month to go into effect and will have multiple layers. Anand said the company is building its own age-verification models in-house but will also partner with a third-party technology company.

It will also use relevant data and signals, such as whether a user has a verified 18-plus account on another platform, to determine the age range of new and existing users. If a user wants to challenge Character.ai’s age determination, they will have the opportunity to verify their age through a third party, which will handle sensitive documents and data, including state-issued identification.

Finally, as part of the new policies, Character.ai is establishing and funding an independent venture called the AI Safety Lab. The lab will focus on “novel safety techniques.”

“[W]e want to bring in industry experts and other partners to continue to make sure that AI stays safe, especially in the field of entertainment AI,” Anand said.
