Increasing regulation is creating a problem for businesses as they seek to ensure compliance while still delivering on their core activities. This is leading many to boost the size of their security teams.
We spoke to Jay Trinckes, CISO of Thoropass, who believes that using AI, with its ability to analyze vast amounts of data quickly and accurately, will be key to bridging this gap without the need for massively expanded staffing.
BN: Why do growing compliance requirements present such an issue for businesses?
JT: With ever-increasing compliance requirements comes ever-increasing complexity for businesses. A business may already have a compliance program that staffers established with a specific framework in mind, but when new compliance frameworks are introduced, they may need to rework the entire program. Businesses may not be ready to take on the changes or have the resources to implement all of the controls that may be required by the new compliance frameworks.
Specifically, many companies don’t have the funds or the staff. I think it’s pretty well known that cybersecurity and compliance teams are chronically understaffed across the United States. We shouldn’t expect that businesses will be able to instantly pivot to meet new requirements when they don’t have the in-house expertise to meet the ones that have been in effect for years.
The funding and staffing gaps have put businesses in a difficult spot because they are also having a hard time hiring new staff or paying for traditional third-party cybersecurity vendors. The result is that 2024 has been one of the worst years for breaches of all time.
BN: How can AI help to simplify the compliance process?
JT: AI is good at identifying patterns and organizing large datasets. This ability could help security teams by analyzing various compliance frameworks and identifying overlapping requirements. For example, if multiple frameworks require specific security or privacy measures — such as encryption protocols, access controls, or data retention policies — AI can highlight these commonalities and suggest a unified approach to meet them.
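The overlap analysis Trinckes describes can be pictured with a small sketch. The framework names and control labels below are illustrative placeholders, not a real control mapping:

```python
# Hypothetical sketch: finding control requirements shared across
# compliance frameworks. Framework names and control labels are
# illustrative only, not taken from any real mapping.
FRAMEWORK_CONTROLS = {
    "SOC 2": {"encryption at rest", "access control", "data retention"},
    "HIPAA": {"encryption at rest", "access control", "audit logging"},
    "ISO 27001": {"access control", "data retention", "audit logging"},
}

def shared_controls(frameworks):
    """Return the controls required by every framework in the list."""
    sets = [FRAMEWORK_CONTROLS[f] for f in frameworks]
    return set.intersection(*sets)

def coverage(control):
    """List which frameworks a single control helps satisfy."""
    return [f for f, ctrls in FRAMEWORK_CONTROLS.items() if control in ctrls]

# One implemented control can satisfy several frameworks at once.
print(shared_controls(["SOC 2", "HIPAA", "ISO 27001"]))  # {'access control'}
print(coverage("encryption at rest"))  # ['SOC 2', 'HIPAA']
```

In practice an AI tool would extract these mappings from the framework texts themselves rather than from a hand-built table, but the unification step is the same: find the intersection, implement it once, and count it toward every framework that requires it.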
AI can also rapidly analyze large volumes of data to detect inconsistencies, risks, or gaps in compliance, providing insights for security teams to prioritize their work effectively. This kind of automated analysis can significantly reduce the time and effort required for compliance tasks, freeing up resources for other critical activities.
BN: What are the risks of not integrating compliance with AI?
JT: One thing companies need to remember is that the bad guys are already using AI to launch around-the-clock automated attacks, phishing emails, and fraud. So if you’re not using AI to at least simplify compliance, you’re going to have an even bigger gap between your cybersecurity systems and the types of threats you will have to face.
Without AI, compliance processes will have to rely heavily on manual methods, which are prone to human error and inefficiencies. Failing to use AI to its full potential means companies may misallocate resources, focusing on low-priority issues while leaving critical compliance gaps unaddressed.
Also, AI can automate threat detection and compliance monitoring to allow for faster responses to emerging risks. So, leaving AI behind means organizations risk slower reaction times. Without this ability, a company's compliance efforts are often reactive: the organization is left to address security issues only after attackers have exploited them, which can be costly and damaging.
Lastly, relying solely on manual or traditional methods for compliance often requires significant human effort and time, which goes back to the original problem of not enough funds or staff. Integrating AI simplifies a lot of processes, saving costs and freeing up resources. So, without AI, businesses face higher operational costs and a greater risk of inefficiency.
BN: Could AI eventually replace the need for compliance teams?
JT: No, I don’t see AI replacing the need for humans in compliance. Although AI may be able to perform routine tasks more efficiently because it can monitor and spot patterns in highly technical environments, its output is still prone to errors and hallucinations, which means compliance teams will still need to validate and verify it.
I also think it’s important to understand that, while AI is fantastic for monitoring and detecting vulnerabilities, it’s not so great at incident response. For the foreseeable future, it will be humans who react to attacks, take the steps to protect the most important systems, and notify affected individuals afterwards.
BN: What are some of the practical steps organizations can take to incorporate AI into their compliance process?
JT: Organizations need to define the objectives of the project to which they have assigned AI. Then they need to train and fine-tune the AI tool to ensure it is up to the assigned tasks. This fine-tuning is really important because a lot of folks are trying to use generic AI tools like ChatGPT or Microsoft’s Copilot to help simplify compliance. And these generic tools may have security and compliance gaps of their own.
Fine-tuning starts with having the right data and configuring AI systems to align closely with the requirements of specific compliance frameworks. This involves setting strict parameters and boundaries within the AI tool to ensure that its outputs are accurate, relevant, and compliant with the defined rules and standards of the targeted framework.
Next, companies need to create feedback loops and monitoring systems to improve the accuracy of AI outputs. Create a reporting feature so that developers and users can always give direct feedback on what is working and what isn’t. Keep track of errors and audits. And finally, your use of AI needs to be transparent and follow responsible, trustworthy AI principles. IT teams should be aware of exactly how the company is using AI in the compliance process and of any risks or liabilities involved.
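A minimal version of the feedback loop and error tracking described above might look like the following sketch. The class, field names, and the 10 percent review threshold are all illustrative assumptions, not part of any specific product:

```python
# Hypothetical feedback loop for AI compliance output: reviewers flag
# each AI suggestion as correct or not, and the running error rate is
# tracked so the team knows when outputs need closer human audit.
# All names and the 10% threshold are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class FeedbackLog:
    records: list = field(default_factory=list)

    def report(self, suggestion_id: str, correct: bool, note: str = "") -> None:
        """Record a reviewer's verdict on one AI suggestion."""
        self.records.append({"id": suggestion_id, "correct": correct, "note": note})

    def error_rate(self) -> float:
        """Fraction of reviewed suggestions flagged as wrong."""
        if not self.records:
            return 0.0
        wrong = sum(1 for r in self.records if not r["correct"])
        return wrong / len(self.records)

    def needs_review(self, threshold: float = 0.10) -> bool:
        # Flag the tool for a human audit once errors exceed the threshold.
        return self.error_rate() > threshold

log = FeedbackLog()
log.report("ctrl-42", correct=True)
log.report("ctrl-43", correct=False, note="cited retired framework version")
print(log.error_rate())    # 0.5
print(log.needs_review())  # True
```

The point of the sketch is the loop itself: every output is attributable, every correction is logged, and a simple aggregate metric tells the team when the tool's accuracy has drifted enough to warrant retraining or tighter constraints.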
Image credit: BiancoBlue/depositphotos.com