“Shadow AI” threatens major losses

The rapid spread of generative artificial intelligence has attracted users and organizations alike. The artificial intelligence market is expected to exceed $826 billion by 2030, and as adoption accelerates, Gartner predicts that by next year 30 percent of enterprises will automate at least half of their network activities, as Steu Simon writes.*
Using unauthorized applications
But that speed comes at a cost. Because artificial intelligence tools are so widely available, employees turn to AI-powered chatbots, machine learning models and coding assistants for data analysis without their companies’ authorization.
According to a Microsoft study, 80 percent of employees used unapproved applications in the past year.
“Shadow AI” accelerates insider threats
With 38 percent of users admitting that they have shared sensitive information with artificial intelligence tools without their employer’s permission, this new threat, known as “shadow AI”, raises the prospect of large-scale security hazards, including data exposure, poor business decisions and compliance violations.
Many employees weave artificial intelligence tools into their daily work. Marketing teams, for example, use ChatGPT to automate social media posts and generate images. Finance teams use AI-driven data models to scrutinize company spending in depth. However impressive the results, when the IT department does not know these applications are in use, they carry the risks of “shadow AI”.
A customer service team that turns to an undisclosed AI chatbot to answer a customer inquiry, rather than using company-approved tools, may share confidential or private data, or pass on inaccurate or misleading information.
Security gaps
Unlike shadow IT, shadow AI does more than change which tools are used: it replaces human work and gives artificial intelligence greater autonomy, opening new security gaps. Under shadow IT, an unapproved copy of a program such as Salesforce still performs the same basic work; with shadow AI, an analyst who asks a model to profile customer behavior from a proprietary dataset can accidentally expose sensitive information in the process.
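To make that exposure concrete, here is a minimal Python sketch of one common safeguard: stripping obvious identifiers from a record before it is sent to any external AI service. The patterns and the sample record are illustrative assumptions, not part of any product mentioned above.

    import re

    # Illustrative sketch: redact obvious identifiers from free text
    # before it leaves the company boundary for an external AI tool.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    PHONE = re.compile(r"\+?\d[\d\s()-]{7,}\d")

    def redact(text: str) -> str:
        """Replace email addresses and phone numbers with placeholders."""
        text = EMAIL.sub("[EMAIL]", text)
        text = PHONE.sub("[PHONE]", text)
        return text

    record = "Contact Jane Doe at jane.doe@example.com or +1 555 010 2030."
    print(redact(record))  # -> Contact Jane Doe at [EMAIL] or [PHONE].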
Insider threats and sensitive data
Most information security officers (75 percent) believe that insider threats are more dangerous than external attacks, and the misuse of shadow AI amplifies that risk.
In March 2024, 27 percent of the data employees entered into artificial intelligence tools was sensitive, a sharp rise from the previous year. The largest category was customer support data (16.3 percent), followed by source code (12.7 percent), research and development content (10.8 percent), confidential internal communications (6.6 percent), and human resources and employee records (3.9 percent). Artificial intelligence technologies are also known to be exploited by malicious actors, including malicious insiders.
Securing organizations against shadow AI
Organizations do not need to sacrifice speed for safety. Companies can keep innovating while protecting valuable data and reducing the risks of shadow AI.
Here are four ways to do so:
1- Establish an acceptable-use policy for artificial intelligence: Many employees say their company’s policies on AI use are unclear, and 57 percent admit to using artificial intelligence in ways that conflict with those policies. Users should be shown which types of information may, and may not, be entered into artificial intelligence tools.
AI tools should be sorted into classification categories: approved, restricted and prohibited. Before deployment, every new artificial intelligence project should be reviewed and approved by the IT department. The policy should be treated as a living document that evolves in response to new challenges and opportunities while remaining consistent with the company’s security policies.
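As a rough illustration of how those three categories could be enforced in internal tooling, here is a minimal Python sketch; the category names follow the policy above, while the tool names and the gate function itself are hypothetical.

    # Hypothetical register of AI tools, mirroring the policy categories.
    AI_TOOL_REGISTER = {
        "internal-llm": "approved",
        "public-chatbot": "restricted",   # usable, but not with sensitive data
        "unvetted-plugin": "prohibited",
    }

    def tool_allowed(tool: str, handles_sensitive_data: bool) -> bool:
        """Return True if using the tool complies with the acceptable-use policy."""
        # Unknown tools default to prohibited, reflecting the
        # review-before-deployment rule described above.
        status = AI_TOOL_REGISTER.get(tool, "prohibited")
        if status == "approved":
            return True
        if status == "restricted":
            return not handles_sensitive_data
        return False

    assert tool_allowed("internal-llm", handles_sensitive_data=True)
    assert not tool_allowed("public-chatbot", handles_sensitive_data=True)
    assert not tool_allowed("never-reviewed", handles_sensitive_data=False)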
Data processing
2- Set clear requirements for data handling: Classifying data and identifying which types of information may, or may not, pass through artificial intelligence platforms reduces the risk of data exposure and makes governance enforceable.
For highly sensitive data, locally deployed artificial intelligence models can be used so that the data never leaves the company’s boundaries. Models can be deployed where the data resides, instead of transferring the data to cloud AI services.
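As one concrete pattern, a locally hosted model can be queried over the internal network so prompts and data never reach a third party. The Python sketch below assumes an Ollama-style server running at localhost:11434 with a model named “llama3”; both are assumptions for illustration, not an endorsement of a particular product.

    import json
    import urllib.request

    # Assumption: a local model server (Ollama-style API) runs inside the
    # company boundary, so the prompt and data never leave the network.
    def ask_local_model(prompt: str, model: str = "llama3") -> str:
        payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["response"]

    print(ask_local_model("Summarize this quarter's spending categories."))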
Security training, awareness and culture
3- Provide continuous awareness training: One of the best ways to reduce the risks of shadow AI is to educate employees about artificial intelligence and best practices for using it.
More than half of people (55 percent) have received no training on the risks of artificial intelligence, and 65 percent are worried about AI-enabled cybercrime. Cybersecurity awareness training, reinforced by simulated phishing attacks, should be given priority to help users identify artificial intelligence threats and respond appropriately.
4- Build a safe artificial intelligence culture: Shadow AI is as much a cultural problem as a technical one. A culture of responsible AI use is built by raising awareness of the consequences of using unauthorized artificial intelligence tools. That awareness prompts employees to look for approved alternatives, or to seek advice from the IT department, before adopting new applications.
Companies must keep track of all artificial intelligence systems and tools in use to prevent security breaches, legal problems and compliance failures.
By addressing the root risks of shadow AI, governing how artificial intelligence is used, training and educating users, setting clear expectations for the handling of sensitive data, and establishing a transparent AI culture, organizations can harness artificial intelligence safely and gain a competitive advantage.
* “Inc.”, “Tribune Media” services.