If you have ever tried to get the government to use a computer system, you may know that the approval process is long and expensive.
In the relatively modern world of cloud computing, getting an Authority to Operate (ATO) for a computer system under the Federal Risk and Authorization Management Program (FedRAMP) can take 6+ months and cost just under a million dollars, not to mention all of the engineering costs to get that far.
The General Services Administration (GSA) has a team of outsiders called 18F. It emerged from President Obama’s Presidential Innovation Fellows (PIF) program, which brings outsiders into government to tackle some of its most challenging problems. The team has focused on this problem and believes it can cut the six-month process down to 30 days.
Let’s take a look at some of the key deliverables they believe will bring about this change.1
- Reduce the complexity of the system.
This reflects modern architecture, where systems are often decomposed into smaller functions. Each piece still delivers a ‘complete’ system function, but it can be assessed in a more compartmentalized way than in prior efforts.
- More focused process.
Targeting a shorter time period also means that people are not sitting idle waiting for the next step, which reduces the context-switching costs of multitasking. A logical step if the waiting time can be reduced.
- Use consistent tooling and processes.
This generally refers to adopting standardized architectural patterns and standard components so that every assessment doesn’t start as if it were the first of its kind. We’ll come back to this point.
- More inheritance.
This builds on the point above, but the 18F team points out that this is where they begin to think about interdependence. For example, relying on an external Identity, Credential, and Access Management stack or other shared cloud components avoids re-evaluating those elements for each ATO.
- More integration between security and project teams.
This makes sense. We often talk about shifting left, but what that means for project teams can be elusive. Taken together with the other objectives, it becomes much clearer.
So let’s get into how this is being done and why the next steps are so important to achieving this vision.
So, what is the Plan?
To bring these threads together, the government is looking for smaller systems that reside in the cloud, with clear dependencies on common services, using common architectural components, with project and security team integration during the entire lifecycle.
Most of this vision is achievable through good architectural practices for the cloud and certainly deserves more attention than we can give it in one article, so let’s focus on one idea and how it could play out.
NIST developed OSCAL (Open Security Controls Assessment Language) to help.2 The standard creates a language for discussing security controls that computers can understand, and it models the concerns of security professionals, software developers, and assessors alike. It creates a way to have that conversation in a meaningful way throughout the lifecycle of a computer system – from its early development through operations.
Moreover, it enables a decentralized approach in which every component in a system understands its own security posture. Components in complex systems, especially those where dependencies are delegated and no longer directly visible, become individually responsible for implementing, attesting to, and reconciling their security concerns.
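To make the decentralized idea concrete, here is a minimal sketch of how component-level attestations could be reconciled into a system-wide gap report. The component names, control IDs, and field names below are invented for illustration; a real OSCAL component definition is far richer than this.

```python
# Hypothetical sketch: each component ships its own attestation of the
# controls it satisfies, and a gap report for the whole system falls out
# of reconciling those attestations. All names here are illustrative.
components = {
    "identity-service": {"satisfies": {"ia-2", "ac-2"}},
    "logging-stack":    {"satisfies": {"au-2", "au-6"}},
    "web-app":          {"satisfies": {"ac-7"}},
}

required = {"ia-2", "ac-2", "au-2", "ac-7", "sc-13"}  # controls the ATO demands

# Union every component's attested controls, then find what's uncovered.
covered = set().union(*(c["satisfies"] for c in components.values()))
print(sorted(required - covered))  # -> ['sc-13']
```

Because each component carries its own attestation, adding or swapping a component updates the system posture without re-documenting everything by hand.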
This sounds too good to be true, right?
The friction this removes from how we do things today is human in nature, and it comes from two different sources.
- Computer systems continue to grow in complexity. Human-oriented processes tend to start as ones that a single person can own and operate. As they grow, they require decentralization, and individual team members rarely see the entire process or how their contribution fits into the larger picture. Computer systems, as designed by humans, follow this pattern. We tend to create larger and larger systems until no one person can understand them all. The system itself may not perform optimally because parts of it go unmonitored or uncontrolled.
- As architects, humans tend to define our understanding of how computer systems work in human terms, using language that is easiest for us to understand. However strenuous our efforts, computers don’t understand human language, with all the intent, context, and meaning it implies, as well as they understand computer code. Creating large sets of human-centered documentation to define every aspect of a system that no human can understand in its entirety makes for unavoidable gaps as the process overtakes the process owners.
The result is a human-centered design failing against an overwhelmingly computer-centered problem, and the product of that mismatch is friction.
The real world shows that the vast majority of computer system breaches emerge from vulnerabilities that could have been easily remediated had the risk been adequately understood and acted upon.
In essence, our ability as humans to keep it all in our heads and “do all the things” is limited, and our results show that.
This sounds bleak.
The GSA 18F team recognizes these facts and provides a general set of principles that can produce great results in its lab, but how does this scale across the larger ecosystem?
This is where the NIST OSCAL program comes into play.
OSCAL focuses on taking the human-centered security documentation inherent in all systems today and making it understandable as computer code.
It starts with requirements.
At its base, the OSCAL standard allows control catalogs like NIST SP 800-53, and the profiles derived from them, to be modeled inside OSCAL.
From a practical perspective, OSCAL is being given the list of expected security controls.
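A drastically simplified sketch shows the shape of this catalog-and-profile idea: a catalog lists controls, and a profile selects the subset a particular baseline requires. The field names below follow the spirit of OSCAL's JSON format but are not its actual schema, and the control titles are taken from SP 800-53's well-known families.

```python
# Hypothetical, simplified OSCAL-style catalog fragment (not the real schema).
catalog = {
    "controls": [
        {"id": "ac-2", "title": "Account Management"},
        {"id": "ac-7", "title": "Unsuccessful Logon Attempts"},
        {"id": "au-2", "title": "Event Logging"},
    ]
}

# A profile selects the subset of controls a particular baseline requires.
profile = {"imports": [{"include-controls": ["ac-2", "au-2"]}]}

# Resolve the profile against the catalog to get the working baseline.
selected = {cid for imp in profile["imports"] for cid in imp["include-controls"]}
baseline = [c for c in catalog["controls"] if c["id"] in selected]
print([c["id"] for c in baseline])  # -> ['ac-2', 'au-2']
```

The point is that the baseline is now data a machine can resolve, rather than a spreadsheet a human maintains.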
Second, and this is the tricky part, OSCAL allows a programmer to define how each of these controls will be met in a way that the computer can understand, implement, assess, and remediate. This is a genuine benefit: previously, a human would have to write notes on how each control was met and then go back and look for the evidence manually.
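In spirit, this step replaces prose ("we limit failed logins") with something executable. The sketch below is illustrative only: the check names, config keys, and thresholds are invented, and real OSCAL implementation statements are declarative documents rather than Python functions.

```python
# Illustrative only: each control id maps to an executable check the
# machine can run, instead of a paragraph in a Word document.
def max_failed_logins_is_limited(config):
    return config.get("max_failed_logins", 0) <= 3  # an AC-7-style limit

def audit_logging_enabled(config):
    return config.get("audit_log") is True          # an AU-2-style check

implemented_requirements = {
    "ac-7": max_failed_logins_is_limited,
    "au-2": audit_logging_enabled,
}

system_config = {"max_failed_logins": 3, "audit_log": False}

# The machine gathers its own evidence: run every check against the config.
results = {cid: check(system_config)
           for cid, check in implemented_requirements.items()}
print(results)  # -> {'ac-7': True, 'au-2': False}
```

Once the "how" is machine-readable, gathering evidence becomes a query rather than an archaeology project.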
It can’t be overstated that this is the lather, rinse, and repeat function that makes OSCAL a transformational technology.
Lastly, OSCAL supports automated assessment and reporting. While much of this can happen automatically, some findings need human intervention, and OSCAL supports this through a familiar process: the Plan of Action & Milestones, or POA&M. The relevance here is that this is the process humans already use, which reduces the relearning needed to accommodate an OSCAL-enabled system. What OSCAL offers here is the potential for self-remediation and immediate reporting, reducing the time to resolution.
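Continuing the sketch, the bridge from automated assessment to the human process might look like this: failed checks become open POA&M-style entries. The field names are simplified stand-ins for OSCAL's plan-of-action-and-milestones model, not its real schema.

```python
from datetime import date

# Illustrative only: turn failed automated checks into POA&M-style entries.
assessment_results = {"ac-7": True, "au-2": False}

poam = [
    {
        "control": cid,
        "status": "open",
        "milestone": f"remediate {cid}",          # placeholder milestone text
        "opened": date.today().isoformat(),
    }
    for cid, passed in assessment_results.items()
    if not passed
]
print(len(poam), poam[0]["control"])  # -> 1 au-2
```

The humans who already work POA&Ms keep their familiar workflow; the machine simply feeds it faster and more accurately.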
In a traditional human-centered Authority to Operate (ATO) process, security controls are typically assessed once every three years. This means that a security flaw could persist from the moment of the last assessment until the preparation for the next.
Because OSCAL lets the computer self-assess, that multi-year window shrinks to minutes. It truly closes a huge gap in system security and reduces costs at the same time.
How could any system owner not love this?
The government is aware of the inherent friction and performance challenges in creating secure systems and has developed guidance on how this must change; however, the ecosystem is large, and change takes a concerted effort on the part of leaders to understand and prioritize a resolution.
18F and NIST have both provided much-needed guidance and have asked industry to create the tools necessary to implement it.
OSCAL provides a perfect opportunity to extend industry concepts like Infrastructure as Code into the Compliance as Code space – making security an integrated part of the entire system lifecycle.
As security practitioners, system owners, and thought leaders, we all need to keep our eyes on this movement and look for ways to champion its success.
About the Author
Bill Weber is the founder of Crypto Foundry, a Cyber Security and DeFi consultancy located in the Washington DC market. With over 30 years in Cyber Security, Crypto Economics, and Information Technology, Bill has worked with organizations like MIT’s Lincoln Laboratory, New York University, Hewlett-Packard, and Microsoft.
1. GSA Blog: Taking the ATO process from 6 months to 30 days, published July 19, 2018.
2. NIST: Open Security Controls Assessment Language (OSCAL).