KENNESAW, Ga. | Dec 22, 2025
With the proliferation of AI and the rise of automated AI agents, the safety, security, and ethics of AI applications should be top of mind for everyone. Here, I’ll offer some ideas on how to take as much risk as possible out of new AI processes. TL;DR: Don’t have an AI process enforce too many security protocols. Let the downstream databases, systems, and applications that the process interacts with handle it.
For years, organizations have securely granted employees, customers, and other stakeholders access to their systems and data. Not only do applications require a login, but once you’re inside an application, security settings control which screens and functions you can access. Additionally, if you ask an application for data, it will usually use your personal database credentials under the hood. This means the underlying database will enforce any data access policies set for you even if the application does not.
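To make that concrete, here is a minimal sketch of the pattern in Python, assuming a PostgreSQL backend with row-level security and the psycopg2 driver; the host, database, and table names are hypothetical.

```python
import psycopg2  # assumes a PostgreSQL backend; the driver choice is illustrative

def fetch_orders(db_user: str, db_password: str):
    """Query the database with the *caller's* credentials, not a shared
    service account, so the database's own access policies (grants,
    row-level security) decide what comes back."""
    conn = psycopg2.connect(
        host="db.internal.example.com",  # hypothetical host
        dbname="sales",                  # hypothetical database
        user=db_user,                    # the end user's own credentials
        password=db_password,
    )
    try:
        with conn.cursor() as cur:
            # If row-level security is enabled on this table, the user sees
            # only the rows their policies allow -- the application layer
            # adds no access logic of its own.
            cur.execute("SELECT order_id, amount FROM orders")
            return cur.fetchall()
    finally:
        conn.close()
```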
This sharing of security responsibilities has worked well because it allows user-facing applications to focus on application logic while the underlying databases and systems they access handle much of the security and access control.
At a recent executive event, I was involved in an interesting table discussion that initially focused on how AI processes themselves should handle security. The more we talked, the more we realized how daunting that would be to implement. The discussion then shifted toward how we could pull a good portion of the security out of the AI process and push it elsewhere.
The idea was so obvious once we landed on it that we were all surprised we had considered doing anything else. Because AI teams often build prototypes and beta applications in innovation mode, it is easy to assume that security must be built directly into those prototypes. Not so!
The solution is straightforward. Don’t try to have an AI agent or application apply too much security itself. Have it use the user’s underlying credentials as it gathers information and executes actions on their behalf. The AI will then inherently have access only to the information and actions that the user is allowed. If I ask for data I shouldn’t see, the AI doesn’t need to catch that. The database will. If I ask for an action I’m not allowed to take, the AI doesn’t need to catch that. The underlying system will.
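As a sketch of what this might look like for actions, assume the downstream system exposes an HTTP API and the user holds an OAuth-style bearer token; the endpoint below is hypothetical.

```python
import requests  # the downstream API and token handling here are illustrative

def execute_action(user_token: str, action: dict) -> dict:
    """Submit an action with the end user's own bearer token. The agent
    performs no permission check itself; if the user lacks the right,
    the downstream system refuses and we simply relay that refusal."""
    resp = requests.post(
        "https://api.example.com/v1/actions",  # hypothetical endpoint
        json=action,
        headers={"Authorization": f"Bearer {user_token}"},
        timeout=10,
    )
    if resp.status_code == 403:
        # The system of record said no. The agent reports the denial
        # rather than trying to enforce the policy itself.
        return {"status": "denied", "detail": resp.text}
    resp.raise_for_status()
    return resp.json()
```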
For example, a CRM application will only let users see the customers and data points they are permitted to see. If I ask an AI to run a customer query and provide a summary, it can only see the information I am allowed to see because the CRM application won’t show the AI anything else. Thus, the AI process cannot leak information to me that it shouldn’t.
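The same idea on the read side might look like the sketch below, again with a hypothetical CRM endpoint and the user’s own token; summarize() stands in for whatever model call produces the summary.

```python
import requests

def summarize_my_customers(user_token: str, summarize) -> str:
    """Fetch customers with the user's token and summarize only what the
    CRM returned. Whatever the CRM filtered out never reaches the model,
    so the summary cannot leak it."""
    resp = requests.get(
        "https://crm.example.com/api/customers",  # hypothetical CRM API
        headers={"Authorization": f"Bearer {user_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    visible_customers = resp.json()  # already filtered to this user's view
    return summarize(visible_customers)  # placeholder for the LLM call
```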
There are still some protections that must be handled within the AI process itself. For example, the memory of the questions and answers within my session must be erased when my session ends. This is because my permissions can change at any moment if I switch roles, get promoted, or leave the company. Thus, the AI process needs to start from scratch for anything involving access-controlled information: no training on interactions that touched secure information, and no cross-session memory or context preserved. That way, the current rules are enforced every time and information leakage isn’t possible.
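One way to picture this session-scoped memory is the simple in-memory class below; it is a sketch, not any particular framework’s API.

```python
class EphemeralSession:
    """Conversation memory that lives only for one session. Nothing is
    persisted, logged for training, or carried into the next session,
    so each new session re-resolves permissions from scratch."""

    def __init__(self, user_token: str):
        self.user_token = user_token    # permissions re-checked downstream
        self._history: list[dict] = []  # in-memory only, never written out

    def record(self, question: str, answer: str) -> None:
        self._history.append({"q": question, "a": answer})

    def context(self) -> list[dict]:
        return list(self._history)  # available within this session only

    def close(self) -> None:
        # Erase everything at session end. A fresh session starts empty
        # and inherits nothing, so stale permissions cannot leak forward.
        self._history.clear()
        self.user_token = None
```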
There are also security layers still needed within the AI process to keep users within guardrails. You might not want me to be able to ask an AI how to work around a policy I don’t like, even if I do have access to all the documents needed to find that path. The AI process could block such a request. There also needs to be security around the ability to save or export interactions involving sensitive data.
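These AI-layer checks might look something like the toy sketch below; the regex and the "sensitive" tag are placeholders, and a real deployment would use a policy engine or a moderation model instead.

```python
import re

# Placeholder pattern; a real system would use a policy engine or a
# moderation model rather than a simple regex.
BLOCKED_INTENT = re.compile(r"(work|get) around (the|this|a) policy", re.IGNORECASE)

def check_request(prompt: str) -> bool:
    """Return True if the request may proceed. This check belongs in the
    AI process itself: the user may have access to every underlying
    document, but we still refuse policy-evasion framing."""
    return not BLOCKED_INTENT.search(prompt)

def can_export(interaction_tags: set[str]) -> bool:
    """Block saving or exporting interactions that touched sensitive data."""
    return "sensitive" not in interaction_tags
```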
There is no way around some checks being part of the AI process. However, by delegating many security requirements to systems and processes already built to handle them, time and resources can be focused on the incremental, AI-specific requirements.
Avoiding redundancy and complexity is always a good idea, and this is true with AI processes as well. As your organization begins developing and deploying more and more AI, let as much security as possible be handled by the systems, databases, and applications the AI interfaces with. While this really isn’t a new concept, by making use of preexisting security capabilities to the extent possible, you’ll free your team to focus on AI-specific security needs and AI functionality. You’ll also maintain consistent security protocols across the enterprise. All while achieving the safety and security you need.
This article was originally published by Bill Franks on LinkedIn on December 16, 2025. Reposted with permission.