
OpenAI gives us a glimpse of how it monitors ChatGPT for misuse

OpenAI’s recent threat report highlights the tightrope AI companies walk between preventing misuse of their chatbots and reassuring users that their privacy is respected.

The report, which details several cases in which OpenAI detected and disrupted harmful uses of its models, focuses on scams, cyberattacks, and government-linked influence campaigns. But it arrives amid growing concern over another kind of AI risk: the harm chatbots can do to their own users. This year alone has seen several reports of users committing self-harm, suicide, and murder after interacting with AI models. The new report, along with the company’s previous disclosures, offers some additional insight into how OpenAI handles these different types of misuse.

OpenAI said that since it began publicly reporting threats in February 2024, it has disrupted and reported more than 40 networks that violated its usage policies. In the new report, the company shares case studies from the past quarter and details how it detects misuse of its models.

For example, the company banned an organized crime network, reportedly based in Cambodia, that tried to use AI to streamline its operations. A Russian political influence operation reportedly used ChatGPT to generate video prompts for other AI models. OpenAI also flagged accounts linked to the Chinese government that violated its national security policies, including by requesting proposals for large-scale systems designed to monitor social media conversations.

The company has previously stated, including in its privacy policy, that it uses personal data such as users’ prompts “to prevent fraud, illegal activity, or misuse of” its services. OpenAI has also said it relies on both automated systems and human reviewers to monitor activity. But in the new report, the company offered a little more insight into its thinking on how to prevent misuse while also protecting users.

“To detect and disrupt threats effectively without interrupting the work of everyday users, we use a nuanced and informed approach that focuses on patterns of threat actor behavior rather than isolated model interactions,” the company wrote in the report.

While national security threats are one concern, the company has also recently explained how it handles harmful use of its models by people experiencing emotional or mental distress. A little over a month ago, the company published a blog post detailing how it handles these types of situations. The post came amid widespread coverage of violent incidents reportedly linked to ChatGPT interactions, including a murder-suicide in Connecticut.

The company said that when users write that they want to hurt themselves, ChatGPT is trained not to comply, but instead to acknowledge the user’s feelings and steer them toward help and real-world resources.

When the AI detects that someone is planning to harm others, the conversation is flagged for human review. If a human reviewer determines that the person represents an imminent threat to others, they can report them to law enforcement.

OpenAI also acknowledged that its models’ safety performance can degrade during longer user interactions, and said it is already working to improve its safeguards.
