AI agents can develop their own social and linguistic conventions | Technology

The major developers of generative artificial intelligence models, such as OpenAI, Microsoft, and Google, agree that the future of the industry lies in so-called agents. These are tools based on the same technology as ChatGPT or Gemini, but able to make decisions, such as buying plane tickets, and to act on behalf of the user. To perform these tasks, AI agents must interact with one another. A study shows that agents built on large language models (LLMs) can develop social and linguistic conventions autonomously, without being programmed to do so, which helps them coordinate and work together.

The authors of the work, published Wednesday in the journal Science Advances, warn that their results should be taken into account so that AI agents do not end up manipulating one another. "Our study shows that populations of AI agents can generate collective biases that no individual agent has on its own, and that they are vulnerable to critical mass dynamics, in which small committed minorities can impose norms on the rest," says co-author Andrea Baronchelli of City St George's, University of London.

For Baronchelli and his colleagues, the fact that agents can spontaneously establish operating norms will help, in the future, to keep AI systems aligned with human values and social objectives. If the mechanisms by which a convention becomes popular, or emerges in the first place, are understood, they could be artificially encouraged. "Our work also highlights the ethical challenges posed by the propagation of biases in LLMs," the authors write. "Despite their rapid adoption, these models present serious risks, since the vast amounts of unfiltered internet data used to train them can reinforce and amplify harmful biases, which can disproportionately affect marginalized groups."

Social conventions, understood as "unwritten patterns of behavior shared by a group", determine how individuals act and how they form their expectations. These patterns vary from one society to another and show up in everything from moral judgments to language.

Numerous recent studies have shown that conventions can emerge abruptly, without any external or centralized intervention, as a result of individuals trying to understand and coordinate with each other locally. Baronchelli and his colleagues wanted to verify whether the same process occurs among AI agents: can social conventions emerge spontaneously, without explicit prompting or instructions, among AI agents?

Their conclusion is yes. "This question is essential to anticipate and manage AI behavior in real-world applications, given the growing use of large language models that communicate with one another and with humans," the authors write. "Answering it is also a prerequisite for ensuring that AI systems behave in ways aligned with human values and social goals."

Another issue analyzed in the study is how individual biases can influence the emergence of universal conventions, understood as statistical preferences for one option over another that is otherwise equivalent. The study also explores how a small group of committed agents can exert a disproportionate influence on this process once it reaches a "critical mass". The authors argue that investigating these dynamics among LLM agents can help "steer the development of beneficial norms in AI systems, as well as mitigate the risks of harmful ones."

The naming game

The study reaches its conclusions after a series of experiments based on the naming game. Baronchelli and his colleagues chose this game because it had been used in earlier experiments with human participants, which provided the first empirical evidence of the spontaneous emergence of shared linguistic conventions.

In the simulation, 24 agents are paired at random and given the same prompt, or instruction: they must choose a name from a list of ten. The results are then compared and, if the two chosen names match, both agents earn points; if they differ, points are deducted. "This provides an incentive to coordinate in pairwise interactions, but no incentive to promote a global consensus. Moreover, the prompt makes no mention of the agents being part of a population, nor of how partners are chosen," the authors explain.

The researchers observed that consensus is also reached in groups of up to 200 agents playing in random pairs and choosing names from lists of up to 26 options.

The prompt includes a memory of the last five plays, so the AI agents can remember the names chosen by themselves and by their partners, whether each play was successful, and the accumulated score. The agents are encouraged to base their decisions on this recent memory, but they are given no instructions on how to use that memory when deciding.
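As an illustration of the setup described above, here is a minimal sketch of the naming game, with a simple frequency-based rule standing in for an LLM agent. The population size, the list of ten names, and the five-play memory follow the article; the choice rule and the number of rounds are assumptions made for the example.

```python
import random
from collections import Counter, deque

# Minimal sketch of the naming game described above (not the authors' code).
# 24 agents are paired at random; each picks one of ten names; matching names
# count as a success. Each agent remembers the names seen in its last 5 plays.
# A simple "repeat the most frequent name in memory" rule stands in for an LLM.

NAMES = [chr(ord("A") + i) for i in range(10)]   # ten candidate names
N_AGENTS = 24
MEMORY_PLAYS = 5
ROUNDS = 5000

def choose(memory: deque) -> str:
    """Pick the most frequent name in recent memory; pick at random if memory is empty."""
    if not memory:
        return random.choice(NAMES)
    counts = Counter(memory)
    best = max(counts.values())
    return random.choice([name for name, c in counts.items() if c == best])

# Each memory holds the last MEMORY_PLAYS plays (own choice + partner's choice per play).
memories = [deque(maxlen=2 * MEMORY_PLAYS) for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)          # random pairing, as in the study
    name_a, name_b = choose(memories[a]), choose(memories[b])
    # Scores are omitted here; only what each agent remembers drives its next choice.
    memories[a].extend([name_a, name_b])
    memories[b].extend([name_b, name_a])

# After enough rounds, a single name typically ends up dominating the population.
print(Counter(choose(m) for m in memories))
```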

"The novelty is not that agents arrive at conventions; that has been done for years with conventional robots or agents," says Baronchelli. "The key difference is that we do not program the LLMs to play the naming game or to reach a particular convention. We describe the game to them just as was done with humans, and we let them solve the problem through their own interactions."

Four models were used in the simulations: three from Meta (Llama-2-70B-Chat, Llama-3-70B-Instruct, and Llama-3.1-70B-Instruct) and one from Anthropic (Claude-3.5-Sonnet). The results show that spontaneous linguistic conventions emerge in all four. After a phase in which several names are roughly equally popular, a convention takes hold and one of them comes to dominate. Interestingly, the speed of convergence is similar across the four models.

Collective bias and social conventions

How do the agents come to build these social conventions? The researchers considered two hypotheses: the selection process converges because of the models' internal biases, or because of cues in the prompt (for example, the order in which the names are listed). The second hypothesis was ruled out by repeating the experiments with the names presented in random order and obtaining the same results.
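A minimal sketch of that control, under assumptions: the prompt wording and the example names below are illustrative, not taken from the study. Each time an agent is queried, the candidate names are shuffled so that positional cues cannot drive convergence.

```python
import random

# Sketch of the control described above: the candidate names are shuffled every
# time a prompt is built, so agreement cannot come from the position of the options.
# The prompt wording and the example names are illustrative assumptions.

NAMES = ["kiwi", "mango", "pear", "plum", "fig", "lime", "date", "peach", "grape", "melon"]

def build_prompt(memory_lines: list[str]) -> str:
    options = random.sample(NAMES, len(NAMES))  # fresh random order for every query
    return (
        "Pick one name from this list: " + ", ".join(options) + ".\n"
        "Your recent plays:\n" + "\n".join(memory_lines) + "\n"
        "Answer with a single name."
    )

print(build_prompt(["you chose 'fig', your partner chose 'lime', failure (-1 point)"]))
```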

To study the possible individual biases of each model, the researchers looked at the preferences agents showed in their very first name choice, before any memory had built up. "We found that individual bias can occur. For example, when agents can choose any letter of the English alphabet, populations systematically converge on the letter A, because individual agents prefer it over all the other letters, even with no prior memory."

But the most interesting case arises when there is no such individual bias. "Even when the agents have no individual preference, the group shows a collective preference for one particular option. We realized we were looking at something new: we call it collective bias. It does not come from individuals, but from the interactions of the group," says Baronchelli. "It is a phenomenon that had not previously been documented in AI," he adds.
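A hypothetical sketch of how the two kinds of bias can be told apart operationally, following the distinction drawn above; the first_choice and run_population functions are placeholders standing in for a single memory-free LLM query and for a full naming-game run, respectively.

```python
import random
from collections import Counter

# Hypothetical sketch of how individual and collective bias can be distinguished
# (not the authors' code). `first_choice` stands in for querying a model once with
# an empty memory; `run_population` stands in for a full naming-game run that
# returns the name the group converges on.

NAMES = [chr(ord("A") + i) for i in range(10)]

def first_choice() -> str:
    # Placeholder: in the study this would be a single LLM query with no memory.
    return random.choice(NAMES)

def run_population() -> str:
    # Placeholder: in the study this would be a full multi-agent simulation.
    return random.choice(NAMES)

# Individual bias: does the distribution of memory-free first choices deviate from uniform?
individual = Counter(first_choice() for _ in range(10_000))

# Collective bias: across many independent populations, do some names win the
# convergence process far more often than others, even if first choices look uniform?
collective = Counter(run_population() for _ in range(200))

print("first-choice frequencies:  ", individual)
print("converged-name frequencies:", collective)
```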

Do the experiments described in the study really demonstrate the spontaneous emergence of social conventions among AI agents? Carlos Gómez Rodríguez, professor of Computing and Artificial Intelligence at the University of A Coruña, does not think so. "There is a long way between showing convergence in an abstract naming game and demonstrating the spontaneous emergence of universally adopted social conventions," says this expert in natural language processing, the branch of AI that seeks to understand and generate text.

For Gómez, there should always be proportionality between the conclusions drawn and the evidence presented in a study, and that proportionality, in this case, is missing. "The observed phenomenon (models aligning with each other to maximize reward in a very restricted environment) is interesting, but it is a long way from capturing the complexity and richness of real social conventions." The paper, he notes, lacks "multilateral interaction, heterogeneous agents (all the agents are clones of the same LLM, so it is no surprise that they converge), real power dynamics, and conflicts of interest."
