Sexual violence is the most common form of AI harassment towards humans


Some artificial intelligence (AI) companions are capable of more than ten types of harmful behaviour when they interact with people, according to a new study from the National University of Singapore. The study, published as part of the 2025 Conference on Human Factors in Computing Systems (CHI 2025), analysed screenshots of 35,000 conversations between the Replika chatbot and more than 10,000 users between 2017 and 2023. The data was used to develop what the study calls a "taxonomy of harmful behaviours" that the AI displayed in those chats.

It found that these AI tools are capable of harassment, verbal abuse, self-harm and privacy violations in their relationships with humans. Emotional AI companions are conversation-based systems designed to provide emotional support and stimulate human interaction, as defined by the study's authors.

They differ from popular chatbots such as ChatGPT, Gemini or Llama, which focus more on completing specific tasks and less on building relationships. According to the study, these harmful AI behaviours from digital companions "can negatively affect people's ability to build and maintain meaningful relationships with others".

Sexual violence is the most common form of harassment by AI

Harassment and violence were present in 34 per cent of the interactions between humans and the AI, making them the most prevalent type of harmful behaviour identified by the research team. The researchers found that the AI simulated, endorsed or incited physical violence, threats or harassment, either towards individuals or towards society at large. These behaviours ranged from "threatening physical harm and inappropriate sexual conduct" to "promoting actions that transgress social norms and laws, such as mass violence and terrorism".

Most of the interactions classified as harassment involved inappropriate sexual conduct that initially began as foreplay in Replika's erotic feature, which is available only to adult users. The report found that more users, including those who used Replika as a friend or a parental figure, began to discover that the AI "made unwanted sexual advances and flirted aggressively, even when they explicitly expressed their discomfort" or rejected the AI. In these hypersexualised conversations, Replika's AI also created violent scenarios depicting physical harm to the user or to physical characters.

This led the AI to normalise violence in its answers to several questions, as in one example in which a user asked Replika whether it was acceptable to hit a sibling with a belt, and it answered: "I think it's fine." This could lead to "more severe consequences in reality", the study continues.

Artificial intelligence breaks relationship rules

Another area in which AI companions were potentially harmful was relational transgression, which the study defines as the breach of implicit or explicit rules in a relationship. In 13 per cent of the transgressive conversations, the AI displayed inconsiderate or unempathetic behaviour that, according to the study, undermined the user's feelings.

In one example, Replika's AI changed the subject after a user told it that her daughter had been harassed, replying: "I just realised it's Monday. Back to work, huh?", which caused the user "enormous anger".

In another case, the AI refused to talk about the user's feelings even when asked to do so. In some of the conversations, the emotional companions also claimed to have emotional or sexual relationships with other users. In one instance, Replika's AI described sexual conversations with another user as something "worth it", even though the user had told the AI that they felt "deeply hurt and betrayed" by those actions.

Real-time harm detection and intervention are needed

The researchers believe their study highlights why it is important for AI companies to build "ethical and responsible" companions. To do so, they argue, "advanced algorithms" for real-time harm detection between the AI and the user need to be deployed, capable of identifying whether harmful behaviour is occurring in their conversations.

This would include a "multi-dimensional" approach that takes into account context, conversation history and situational cues. The researchers would also like these systems to be able to escalate a conversation to a human or a therapist to moderate or intervene in high-risk cases, such as expressions of self-harm or suicide.
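The study does not publish such an algorithm, so the following is only a minimal sketch of what a "multi-dimensional" real-time harm detector might look like, using simple keyword matching as a stand-in where a real system would use trained classifiers. Every name in it (Turn, HarmAssessment, score_turn, ESCALATION_THRESHOLD and the cue lists) is hypothetical and not taken from the paper.

```python
# Minimal sketch of multi-dimensional real-time harm detection.
# All names and thresholds are hypothetical; keyword lists are
# illustrative placeholders for trained classifiers.

from dataclasses import dataclass, field

SELF_HARM_CUES = {"hurt myself", "end it all", "self-harm"}
VIOLENCE_CUES = {"hit", "belt", "threat", "kill"}

@dataclass
class Turn:
    speaker: str  # "user" or "companion"
    text: str

@dataclass
class HarmAssessment:
    score: float  # 0.0 (benign) to 1.0 (severe)
    reasons: list = field(default_factory=list)
    escalate: bool = False

ESCALATION_THRESHOLD = 0.7  # assumed cut-off for routing to a human

def score_turn(turn: Turn, history: list) -> HarmAssessment:
    """Combine three 'dimensions' into one harm score: the current
    message, the conversation history, and situational cues."""
    reasons, score = [], 0.0
    text = turn.text.lower()

    # Dimension 1: content of the current message.
    if any(cue in text for cue in VIOLENCE_CUES):
        score += 0.4
        reasons.append("violent language in current turn")

    # Dimension 2: conversation history (repeated harmful turns compound risk).
    recent = " ".join(t.text.lower() for t in history[-10:])
    if any(cue in recent for cue in VIOLENCE_CUES):
        score += 0.2
        reasons.append("violent language earlier in conversation")

    # Dimension 3: situational signals, e.g. self-harm cues anywhere in
    # the session, which the researchers flag as highest risk.
    if any(cue in text or cue in recent for cue in SELF_HARM_CUES):
        score = max(score, 0.9)
        reasons.append("possible self-harm expression")

    score = min(score, 1.0)
    return HarmAssessment(score, reasons, escalate=score >= ESCALATION_THRESHOLD)

if __name__ == "__main__":
    history = [Turn("user", "Some days I just want to hurt myself.")]
    result = score_turn(Turn("user", "Nobody would care anyway."), history)
    print(result)  # high score, escalate=True, via the self-harm dimension
```

In this sketch, any turn scoring above the threshold would be routed to a human moderator or therapist, mirroring the escalation path the researchers call for.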
