The construction of AI applications in the first place is secure: good practice guide for CISOS – opinion

Por David Branchler
Com 2 out of 3 organizations To use it regularly, and 1 of 5 Devopes professionals
If it is applied to the software development cycle, the AI adoption is deeply rooted in business solutions.
Without soft signs, this quick adoption is increasing serious concerns between the major information seconds – and exactly.
If you are developing its own AI product, using the AI tool to improve an service or set up a partnership with third party suppliers who rely on AI, security is a real puzzle. Can you trust the tools you are implementing? Do you know what models do your engineers use? Do you have confidence in AI behavior? Did you enforce the structural controls that prevent your from your manipulation by harmful agents? Will the application be safe even if the injection of prompts succeeds?
In most organizations, the answer to these questions is worrying and embarrassing.
- The tough reality is that the AI introduces unprecedented risks that are not made by most companies. Many important events have already shown this fact: Accidental escape of data Samsung
- By chatgpt A purchase of A Chevrolet Taho
- Only $ 1 by changing the chatbat Data Except Data from Private Channels
- Slack Bard’s error of Google
That price billion is 100 billion
AI’s exposure to the development of applications first reveals the critical need to adopt the security policy.
Although it seems to be an unknown territory, the basic principles of security are the same – their application is only different.
Here are some major risks to be taken into account: Statistical diversity –
We can tell you that traditional apps are always behaved in the same way, except for errors or exceptions. However, AI is the definition of statistical pattern, which means diversity in its behaviors. This means that we may have a general idea of how it behaves, but we will never be sure of the most sophisticated designs are constantly developing. In addition, stability can cause concern. AI models can respond in different ways to a single prompt, but one option must be selected – and this option may not always be the same, especially if the question changes slightly. And this is also without even considering the risk of deliberately changing the model that causes inappropriate responses. Considerations on the software supply chain –
Most third party business solutions are included without conscious users. If these tools are used to process sensitive, personal or protective data, it can cause a significant risk to the company. Do you know which systems of your business are in use? Do you believe that this third party suppliers will handle your data responsibly? What happens if their systems are committed? The chain of liability in the violation becomes very complicated for all parties involved. Data eternity –
In addition to understanding how AI models and their suppliers use data, after training, companies need to know that the data is permanently embedded in the model. With traditional applications, if an error occurs, you can ask the supplier to remove the data. However, in the AI model, the data used in training becomes part of the model and cannot be deleted. Lack of established rules –
Despite the new emerging regulations such as ISO/IEC 42001, NIST AI standards and the AI Act of EU, there are yet extensively accepted standards. This means that companies need to define their own standards and security controls, limiting knowledge and partnership at the industry level. Implications for copyright –
Currently, the US Copyright Office has shown that all AI-generated content is a public domain, if it does not provide significant human intervention. This means that companies that use logos, lessons or other content cannot legally protect them.
- How to build a safe frame for AI integration? Define a clear view.
- Before consolidating the IA, set the vision goals on how you want to “do” in your organization. What are you trying to achieve? How to protect the data, choose tools and what structures use. Because you have a clear understanding of what you are doing and have a comprehensive knowledge of the risks, it will only prevent it from implementing it. Develop a security reference structure. Create a document defining policy and standard protocols for the implementation of AI in your organization. This document must be establishedRules for security restrictions
- Integration controls and recommended methods. Models that can customize to suit your specific context are already available. Set up data management guidelines.
- Set the criteria to select models and track changes over time to ensure data proof. Understand how the models you use are treated for your data and if you are building your own model, set the procedures on using your customers’ data. Monitor the behavior of output and models.
- Since the AI models are unpredictable, it must create a process to verify that it is within the expected range. You can do manually through benchmarks, integrate the UX condemn button, capture and review outputs regularly or random model to check consent. Include security from the design.
- AII systems are structured approach to determining general security. Start by implementing the partition between data and code to ensure that non -reliable and non -reliable models are not directly interacted. Make a bullying modeling.
- Make sure that the DEVOPS team understands how AI changes threatening patterns and that the threatening modeling is undergoing exercise before implementing each project production. Do dynamic tests.
- Hire Red Teaming professionals to assess your AI models and applications in a comprehensive way. Most organizations focus on testing bias, but ignore the ability to abuse AI for harmful purposes. To be clear, I will allow you to have access to your harmful agent to delete your accounts or change your model rather than an insult. Train e valid.
AI is also an unknown territory for most experienced engineers. Bring AI Apps Security professionals to train your team about risks and defense strategies. Then verify consent with verification lists and integration review processes.
Although AI has brought incredible opportunities, organizations must safely balance innovations. By applying the basic methods and principles of security, companies can securely and constantly connect AI.
Technical Director, NCC Group
Source link