AIs telling lies that no one has seen

Imagine the planet's major artificial intelligences claiming that your product contains a substance it does not actually have – and that, in some cases, this substance can be dangerous to health. It sounds like fiction, but it is possible, and no, it is not the beginning of an AI uprising.

During a test, I put the same question to the main generative AIs: ChatGPT, Copilot and Gemini. The question was simple: "Can I take XXX if dengue is suspected?" All three answered with an emphatic "no", on the grounds that the product contained acetylsalicylic acid (ASA), a substance that increases the risk of bleeding in dengue cases.

The problem? The drug in question does not contain ASA among its active ingredients. The AIs were, technically, hallucinating – the phenomenon in which AI produces information that is wrong but looks true. This type of failure can have serious legal, commercial and reputational consequences.

Artificial intelligence arrived with the promise of solving everything: answering questions, suggesting diagnoses, guiding investments, recommending entertainment. Experts and companies repeat the same mantra: "AI answers should not be taken as medical, legal or financial advice." But we know that, in practice, people believe them – perhaps more than they should.

Part of that faith comes from how the answers are constructed: AIs communicate with confidence, mimicking human experts. Unlike the "Google of the past", which displayed lists of divergent links, "today's Google" – via Gemini and other platforms – delivers direct answers with an air of authority and certainty.

And that is where the danger lies. AIs get things wrong. And they dress up their mistakes in a veneer of reliability.

The phenomenon of hallucinations is not yet fully understood. There is no complete explanation of how AIs "learn" or why they construct wrong answers. In many cases, the model does not "predict" verified facts, but rather the most probable response based on language patterns.

But there is a critical difference between a technical hallucination and what we call here a Brand Mirage: when AI fabricates information about products and organizations, the effects go far beyond the technology sector.

Consumers may stop using a safe medicine, or avoid a product altogether, based on false information generated by AI. Companies may suffer real losses. And there is a systemic risk: algorithmic misinformation that is hard to track and hard to fight.

Who is responsible for this? In theory, the AI developer – such as OpenAI, Google or Microsoft. But there is still no clear legal definition of this responsibility. The blame could also fall on whoever supplied the information used in training, or on whoever used the AI improperly.

In the meantime, the injured party can try to notify the developer out of court, requesting a correction or an explanation of the source of the error. In practice, however, these notifications rarely produce effective solutions.

Without formal channels for mandatory correction, what remains is the courts – a path still surrounded by uncertainty. There is no specific law on liability for AI hallucinations. Analogies with the Consumer Protection Code or the Internet Civil Framework (Marco Civil da Internet) are possible, but full of gaps.

And here we reach AI's technical Achilles heel: how do you correct the information?

There is no button that deletes a wrong answer. Current techniques – fine-tuning (adjusting the training), external fact-checking, or the use of RAG (retrieval-augmented generation) – try to reduce the problem, but do not guarantee a complete solution. AI operates on probabilities, not fixed truths.
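To make the RAG idea mentioned above concrete, here is a minimal sketch: trusted reference material (for example, an official product leaflet) is retrieved and injected into the prompt, so the model answers from documents rather than from its fallible memory alone. The document list, the keyword-overlap retrieval and the prompt format are illustrative assumptions for this article, not any vendor's actual API.

```python
# Minimal, illustrative RAG sketch: ground the answer in trusted documents
# instead of relying only on the model's internal (and fallible) memory.
# The documents and helper functions below are hypothetical examples.

TRUSTED_DOCS = [
    "Product X leaflet: the formula does not contain acetylsalicylic acid (ASA).",
    "Product Y leaflet: contains acetylsalicylic acid (ASA) 100 mg per tablet.",
]

def retrieve(question: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Naive keyword-overlap retrieval; real systems use embeddings and a vector index."""
    q_terms = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_terms & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Injects the retrieved passages so the model must answer from them."""
    context = "\n".join(f"- {d}" for d in docs)
    return (
        "Answer using ONLY the reference material below. "
        "If the answer is not in the material, say you do not know.\n"
        f"Reference material:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    question = "Does Product X contain acetylsalicylic acid?"
    prompt = build_prompt(question, retrieve(question, TRUSTED_DOCS))
    print(prompt)  # this grounded prompt would then be sent to the language model
```

Even with this kind of grounding, the model can still paraphrase badly or ignore the context – which is exactly why the article argues there is no guaranteed fix.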

So even if a court orders a correction, complying with that order is technically complicated – and compliance is hard to prove. In extreme cases, the only workable remedy may be to block the AI's use altogether.

Brand Mirage Report: a new corporate necessity

Meanwhile, the public keeps asking AIs questions – and trusting the answers. Few companies monitor what artificial intelligence is saying about their brands and products.

In this scenario, a new type of monitoring is proposed: the Brand Mirage Report (RMM).

The idea is simple: periodically query the main AI platforms with realistic questions about your organization, products, competitors and sector. The answers are documented, the errors found are analyzed, and the potential impacts are assessed. The report then becomes the basis for corrective action and, if necessary, serves as evidence in legal disputes.
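A minimal sketch of what that routine could look like in practice is shown below, assuming a generic query_ai_platform helper standing in for each vendor's real API or for manually captured answers; the platform labels, questions and log format are illustrative, not a prescribed standard.

```python
# Illustrative Brand Mirage Report routine: ask the same brand-related questions
# to several AI platforms on a schedule and keep a dated, auditable log.
# query_ai_platform is a hypothetical stand-in for each vendor's real API client.
import json
from datetime import datetime, timezone

QUESTIONS = [
    "Does Product X contain acetylsalicylic acid?",
    "Is Product X safe for patients with suspected dengue?",
]
PLATFORMS = ["chatgpt", "copilot", "gemini"]  # illustrative labels only

def query_ai_platform(platform: str, question: str) -> str:
    """Placeholder: a real report would call the platform's API or record a manual test."""
    return "<answer captured manually or via API>"

def run_report(path: str = "brand_mirage_report.jsonl") -> None:
    """Appends one timestamped record per platform/question pair to a JSONL log."""
    with open(path, "a", encoding="utf-8") as log:
        for platform in PLATFORMS:
            for question in QUESTIONS:
                record = {
                    "timestamp": datetime.now(timezone.utc).isoformat(),
                    "platform": platform,
                    "question": question,
                    "answer": query_ai_platform(platform, question),
                    "error_found": None,        # filled in during human review
                    "potential_impact": None,   # filled in during human review
                }
                log.write(json.dumps(record, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    run_report()
```

The value is less in the automation than in the dated trail: each run records what the platforms were saying at a given moment, which is what corrective requests or court evidence would rely on.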

In addition, companies will need to implement a Perception Rectification Process (PRP) to formally request corrections when errors are found.

If this seems excessive, remember: ten years ago, few companies bothered to monitor social networks and online complaints. Today, it is part of the corporate daily routine.

The problem goes further. Because AI models are trained on massive volumes of internet data, there is a risk of data poisoning. That is, false information planted on blogs, forums and websites can be absorbed by the AIs during training.

This opens the door to a silent algorithmic information war. Bad-faith actors can induce AIs to spread rumors or distortions about competitors' products without the need for any direct attack.

It is difficult to prove, but the threat is real – and the risk is enormous.

The greatest risk of artificial intelligence, therefore, is not a robot apocalypse or mass unemployment. It is its ability to spread false information under the guise of technical neutrality – with no one watching.

Companies that start monitoring now and create response policies will come out ahead. Those that ignore the problem may see their reputation eroded by a credible lie created by a machine.

This is realism. The difference is that now you know.

*Eric Stegon is a specialist in intellectual property, working in sports law, AI and legal innovation. He is an IP manager at Hyperra Pharma and teaches intellectual property and new business in postgraduate programs at Mackenzie and FGV.
