Sometimes thoughts collide. So bear with me. What happens when you combine a large language model (LLM) like ChatGPT, a malware proof of concept like BlackMamba, and OSCAL?
Yeah, there is no punch line here, but the idea got me thinking. Historically, we looked for malware as malicious executables, and we had antivirus software. Then the threat evolved: we began to see polymorphic malware, and we adapted with slightly more advanced heuristic detections.
Today we think a bit more about anomalous behavior and 'living off the land' tactics. But I think we are seeing the seeds of something new. The race has been to deter programmatic methods of introducing malware, for two reasons. First, while it's far more difficult to deter humans once they're on a system, that human-driven model doesn't scale well. Second, detecting malware, even polymorphic, memory-only variants, is hard enough to get right. So we see strides in both areas, but the game is long.
The AI Advantage
But what does AI teach us about this dynamic?
First, it is really good at writing code. There are already great examples of it analyzing and writing code, in any language it has been trained on, better than most humans.
Second, it scales really well. That is, you don't have to find more humans to do the work; you just stand up more computers. While not free or effortless, that is far more linear and transactional than cultivating your own hacking minions, unless you're North Korea or the NSA.
As with all innovations, the first wave is the introduction of the new thing. The second, and more interesting, wave is how we use it in combination.
A New AI Threat Model
Here is your hypothetical: if you were a bit of code on a system that could reach out to an AI for help, what would you do?
BlackMamba suggests that you would take a known executable and submit it for polymorphic code insertion. There is a better method.
If I had a bit of code, I think I would scan for all XML and JSON files, then ask the AI which files go with which programs and whether it could insert any misconfigurations that would open the box up to external attack.
Consider an example: in some of the most notable data breaches, we see a database left unprotected, or a web server that allowed attackers to bypass authentication. These are not malicious code running on the server; rather, the server's intended functionality is abused to allow unauthorized use.
While there is no shortage of human errors that produce these conditions, why not create a new vector where the AI introduces such changes? They would go unnoticed by a malware scan, and I would suggest code analysis would miss them too.
So definitely, if I wanted to pop a box in a non-attributable way, I would run some code in memory that looks for configuration files, sends them off to the AI, and gets back an altered, vulnerable version.
Now I promised you some OSCAL here.
OSCAL is a perfect example: I could add an OSCAL control model that surfaces data as evidence. We tend to simply assume that all OSCAL files present are authentic and authorized, in the same way I assume an Apache configuration file has not been compromised, and that assumption makes OSCAL an excellent exfiltration channel.
As we begin to think about the creation of control libraries and the component model, how easy would it be to introduce a control vector that either places a control out of compliance or creates an evidentiary finding containing controlled information? Send me your passwd files, please! Or, if you're of the Windows variety, how about an NTLM downgrade attack?
The point of the AI is that a scanner could dynamically look for files it can exploit and send them off to the AI for malicious modification. The AI lets attackers export the intelligence needed to do this, and to attack a much wider variety of vectors than on-system code traditionally targeted.
OK, enough theater. What’s the point, right? Here is the suggestion.
Within the infrastructure of any PKI system is the inherent functionality to sign a document.
There could be a plain-text wrapper that signs all configuration files, along with a list of approving authorities allowed to sign. Rather than using signatures only for users and executables (where they should be used), we could develop a simple standard for signing text, XML, and JSON files in a wrapper the system can easily verify.
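To make the idea concrete, here is a minimal sketch of such a wrapper in Python. Everything here is hypothetical: the key, the signer name, and the wrapper layout are illustrative, and a symmetric HMAC stands in for the asymmetric PKI signature a real deployment would use, purely to keep the sketch dependency-free.

```python
import hashlib
import hmac
import json

# Hypothetical secret for illustration only. A real implementation would
# use an asymmetric PKI key pair (e.g. an X.509-backed signing key), not
# a shared symmetric secret.
SIGNING_KEY = b"example-only-not-a-real-key"


def wrap(config_text: str, signer: str) -> str:
    """Wrap a config file's text with its signature and signer identity."""
    sig = hmac.new(SIGNING_KEY, config_text.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"signer": signer, "signature": sig, "content": config_text})


def unwrap(wrapped: str) -> str:
    """Return the config text only if the signature verifies; raise otherwise."""
    doc = json.loads(wrapped)
    expected = hmac.new(SIGNING_KEY, doc["content"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, doc["signature"]):
        raise ValueError("configuration signature invalid")
    return doc["content"]
```

A program that consumes configuration files would call something like `unwrap()` before parsing, refusing to start if the content had been altered after signing.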
Think of an Apache default that requires it to validate the signature of its files before it will execute them, or before it will use its configuration files. This type of wrapper would be simple to implement in a change-management process and, depending on how it is configured, hard to defeat.
Imagine a sudoers for signatures, if you will, or a PAM module that allows the system to sign updated files as authentic.
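The "sudoers for signatures" idea could be as simple as a policy table mapping each protected file to the authorities allowed to sign it. This is a sketch under assumptions: the paths, identity names, and table format are all hypothetical.

```python
# Hypothetical policy table: which signer identities may sign which files,
# analogous to sudoers mapping users to permitted commands.
SIGNING_POLICY = {
    "/etc/apache2/apache2.conf": {"web-admins", "change-mgmt"},
    "/etc/myapp/settings.json": {"app-owners"},
}


def signer_allowed(path: str, signer: str) -> bool:
    """Check a signer against the approving authorities for a given file.

    Files with no policy entry have no authorized signers, so any
    signature on them is rejected by default.
    """
    return signer in SIGNING_POLICY.get(path, set())
```

Combined with signature verification, a loader would accept a configuration file only when the signature is valid *and* the signer appears in the policy for that path.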
It may seem a little crazy today if you're not seeing these threats coming toward us, but my friend, they are here, and how we begin to think about managing the security posture of our systems will drive the next generation of vulnerabilities.
This Sounds Familiar
So if you're thinking that this sounds a lot like code signing, you would be right. Having developers sign code has long been a way to ensure that an executable comes from a known source. Unfortunately, it doesn't always mean that the known source knows where all of their code came from, but that is a different problem. Projects like Sigstore and Rekor provide great options here for a public or private transparency log. The idea is that signing your own code is a pragmatic, locally provisionable toolset. Extending this to configuration files is as easy as placing the validation code into the programs that expect these files, whether that is the kernel or additional programs.
The question will be: how quickly will the industry identify and choose to address this type of security risk, and can it adapt to the tools and infrastructure already available?
CyberFoundry, Inc. is a small group of disruptive thinkers, customer advocates, and team builders.
…and we happen to focus primarily on Cyber Security.
More specifically, what we can help you do is find the ideas, strategies and tactics that help advance your Cyber Security program, and through that, your business.
If that sounds interesting to you, then give us a call.