NIST Scientist: FISMA Rules Constructive

Achieving a Secure IT System May Be Impossible
A big complaint about the Federal Information Security Management Act, the law federal departments and agencies must follow to secure their IT systems, is that complying with its provisions merely proves an agency is following processes aimed at securing information assets; it doesn't necessarily prove the systems are indeed secure. As Congress works to reform FISMA this year, lawmakers are seeking new metrics to determine whether government information systems are truly secure.

Still, says the National Institute of Standards and Technology's Ron Ross in an interview earlier this year with GovInfoSecurity.com (transcript below), departments and agencies that carefully follow the processes to secure their IT systems will, indeed, have more secure systems regardless of the new metrics adopted.

"Compliance could be interpreted as meeting the OMB checklist requirements, it could also be interpreted as meeting the NIST standards and guidelines," said Ross, NIST senior computer scientist and project leader for the institute's FISMA implementation project. "I think that's pretty much where the debate lies at this point, and getting back to the fundamental question that you posed, is complying with FISMA, that the checklist of FISMA requires does not always guarantee a secure system for all the reasons I outlined. What I can say though, is complying with the provisions of FISMA, which include the standards and guidelines, will by definition, make your system more secure. Whether it's fully secure, I don't believe that's going to be achievable, but it certainly open for debate."

Ross spoke with GovInfoSecurity.com Managing Editor Eric Chabrow. Their chat started with a simple question: What are the new metrics? But people who know Ross shouldn't be surprised that he provided a complex answer, because securing IT is complex, and there's no simple answer to that question. In fact, Ross' answer furnishes a primer on the current state of the metrics used to measure government IT security. It's worth the listen.

RON ROSS: First of all, let me place this in some kind of context, so we can really understand what we're talking about. There are really three pieces of FISMA, which are separate and distinct but interrelated.

The first would be the legislation itself. FISMA came out in 2003, and from that legislation there were specific requirements to create security standards and guidelines that would allow agencies to comply with the legislation. That's really the second piece, the one we've been working on since 2003: we've come up with a suite of security standards and guidelines, as part of our risk-management framework, that organizations are now required to use in their information security programs to deploy the safeguards and countermeasures within their information systems. The third piece would be the FISMA reporting process. It's been set up by the Office of Management and Budget, and there's also a component that the inspectors general get involved in; that really is the compliance portion, the reporting portion, of the process.

As we reflect on the FISMA legislation, having been working under it for about five or six years, it's important to try to focus any constructive criticisms on specific areas where we can make a difference and possibly make some changes. Also, it's important to look at the context. I continue to hear people use the words "secure information system," and it's important in today's world to understand that achieving a secure information system may not be possible, because of the complexity and the connectivity that we are routinely seeing within our federal systems, and I'm sure the same thing occurs in the private sector. It's a world of communication connectivity. We're partnering all over the place: amongst federal agencies, with our contractors and with allies around the world. Massive connectivity, massive complexity. We've got hundreds of thousands of integrated circuits in the hardware; we've got 30 million to 40 million lines of code in the operating system; complex middleware on top of that; and then, of course, the applications that sit on top of all of that, making for an incredibly complex set of hardware, software and firmware to have to deal with.

That also frames the context for when we talk about security for systems. On the threat side, threats continue to be very sophisticated, targeting many of our federal information systems. The adversaries know exactly what they want. They know how to go about getting it, and we obviously have a more difficult and challenging problem playing defense, trying to set up a defense-in-depth strategy that will, if not deter and stop the attacks, at least slow them down to a significant degree.

When we look at the state of information system security, we still have many vulnerabilities. We can see every day that we're kind of in a penetrate-and-patch mentality at this point, where new attacks are launched, we recognize the attacks, we create patches or fixes to mitigate those deficiencies, and then we go on to the next attack tomorrow.

Better Long-Term Strategy

The long-term solution that I wanted to start with is that we've got to come up with a better long-term strategy for building more secure systems; that is, a more disciplined and structured approach to how we build those systems and how we employ and use the technology. Very seldom today, when we put a new technology in place, do we consider all the risks that are brought into the organization, which can affect mission and business processes. In other words, the ability for an organization to successfully carry out its missions today depends upon information technology, and that information technology must be dependable. In order to be dependable, we have to apply the right safeguards and countermeasures at the right places within the architecture and within the systems to really make a difference. That gets to my first strategic point: while we're looking for better metrics in everything we measure, we need to start a concerted effort to build more secure systems with good, effective enterprise architectures.

Enterprise architectures have the great effect of consolidating, standardizing and optimizing your ultimate information technology configurations. They make for what I call leaner and meaner systems: systems that are understandable, and systems where you can deploy the necessary safeguards and countermeasures and have a higher expectation that they'll really be effective in what we're trying to do. That, coupled with better commercial products, products that are more penetration-resistant, everything from operating systems to databases, across the board, making sure that the security functions that are produced by vendors have the degree of penetration resistance that we need in order to stop some of these types of attacks.

Having said that, (there are) a whole lot of things we can do today with our current set of security controls, and we've worked very hard in the NIST suite of standards and guidelines to develop a risk-management framework, which, at its heart, starts with the premise of the organization determining the value of the assets they're trying to protect. That asset would primarily be information: it's stored, processed and transmitted by the systems that are supporting whatever missions and business processes they're trying to carry out. That categorization of information, in essence how important or how valuable it is, really drives the selection of the security controls that are deployed in the systems. The categorization standard that we've developed goes back to the old concept of battlefield medicine, where you have a triage, and we have three categories: high impact, moderate impact and low impact. Those are the three types of systems that an organization could deploy, and we define impact as the effect of losing that technology, that processing capability, with regard to the mission.

In a high-impact system, the words that describe that in the standards say that if you lose this information system, or it's breached in some way, compromised, you would have a severe or catastrophic effect on your mission, on your business processes. The moderate version of that categorization talks about a serious adverse impact, but not severe or catastrophic, and obviously the low types of systems are systems that are fairly routine: if they're lost, compromised, breached in any way, there's a limited adverse effect on the mission. That's where the whole risk framework starts out, and then from that, organizations start with a standardized set of controls that NIST provides, and then they have guidance on how to tailor those controls, and so they come up with the appropriate set, which is sufficiently strong to protect whatever missions they're asked to carry out.
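
The triage Ross describes is the categorization step of the risk-management framework (FIPS 199), in which a system inherits the highest impact level of any information it stores, processes or transmits. Here's a minimal Python sketch of that high-water-mark rule; the function and the sample data are illustrative assumptions, not NIST tooling.

```python
# Minimal sketch of FIPS 199-style "triage": a system's impact level is the
# high-water mark across the confidentiality, integrity and availability
# impacts of every information type it handles. Illustrative only.

LEVELS = {"low": 1, "moderate": 2, "high": 3}

def system_impact(information_types):
    """Return 'low', 'moderate' or 'high' for the system as a whole."""
    worst = "low"
    for info in information_types:
        for objective in ("confidentiality", "integrity", "availability"):
            if LEVELS[info[objective]] > LEVELS[worst]:
                worst = info[objective]
    return worst

# Hypothetical system carrying routine and sensitive information types.
payroll = [
    {"confidentiality": "moderate", "integrity": "moderate", "availability": "low"},
    {"confidentiality": "high", "integrity": "moderate", "availability": "low"},
]
print(system_impact(payroll))  # -> "high", which drives control selection
```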

Debate Over the Right Metrics

Once those controls are implemented within a system, we have some extensive guidelines on how to assess those controls to see if they're effective. This is where we start getting into metrics, and this is where the debate comes in: are we using the right metrics? The way you measure security is tied back to what you're measuring. At this point in time, we've got a very robust set of security controls that are arrayed in 17 families, and they cover everything from the guards, guns and gates (physical security) to personnel security: policies, procedures, technical controls, access control mechanisms, auditing, encryption, system and communications protection controls, incident response. The whole set of 17 control families is arrayed in the areas of management, operational and technical controls.
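
For reference, a sketch of how such a catalog is organized: control families, each falling into one of the three classes Ross mentions. Only a few of the 17 families are shown, and while the two-letter identifiers follow the SP 800-53 convention, this structure is an illustration, not NIST data.

```python
# A few of the 17 SP 800-53 control families, grouped by class. Illustrative
# slice only; the real catalog defines many individual controls per family.

CONTROL_FAMILIES = {
    "AC": ("Access Control", "technical"),
    "AU": ("Audit and Accountability", "technical"),
    "IR": ("Incident Response", "operational"),
    "PE": ("Physical and Environmental Protection", "operational"),
    "PS": ("Personnel Security", "operational"),
    "RA": ("Risk Assessment", "management"),
    "SC": ("System and Communications Protection", "technical"),
}

def families_in_class(klass):
    """Return the family identifiers that belong to one control class."""
    return [fid for fid, (_, c) in CONTROL_FAMILIES.items() if c == klass]

print(families_in_class("technical"))  # -> ['AC', 'AU', 'SC']
```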

Most organizations take those controls and implement them. Then they're assessed for effectiveness, to determine: are those controls (safeguards and countermeasures, another way of saying that) implemented correctly, are they operating as intended, and are they producing the desired effect with regard to meeting your security policy, whatever that policy might be? That's the first place we start to measure what we've put in place. When you get back to the overall concept of managing risk, we're looking at continuing threats which may exploit vulnerabilities that currently exist on some of our systems, and a threat that exploits a vulnerability obviously causes an impact to the mission on the other end. So the security controls that we've defined are intended to reduce the number of vulnerabilities to a manageable point, where the residual risk that we assume is tolerable.
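
Those three determinations -- implemented correctly, operating as intended, producing the desired effect -- make a natural shape for an assessment record. A minimal sketch, assuming a simple boolean model; the record type and the "effective" rule are illustrative, not the SP 800-53A reporting format.

```python
# Sketch of the three per-control assessment determinations Ross lists.
# Illustrative data model; real assessments record evidence, not booleans.

from dataclasses import dataclass

@dataclass
class ControlAssessment:
    control_id: str              # e.g. "AC-2" (account management)
    implemented_correctly: bool
    operating_as_intended: bool
    producing_desired_effect: bool

    @property
    def effective(self) -> bool:
        # Judged effective only if all three determinations hold.
        return (self.implemented_correctly
                and self.operating_as_intended
                and self.producing_desired_effect)

findings = [
    ControlAssessment("AC-2", True, True, False),
    ControlAssessment("IR-4", True, True, True),
]
for f in findings:
    print(f.control_id, "effective" if f.effective else "deficient")
```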

At the end of the day, with whatever resources we're able to apply to the security problem and to the management of risk, we have to come to a comfortable conclusion, and this is the senior leadership that I'm talking about: that whatever we've done with regard to protecting these systems in front of us, the ones that are part of our organization and are supporting our critical missions and business processes, we've done enough to make sure those missions are not in jeopardy. That is what we call the residual risk. When we talk about a secure system, that really is a misnomer. We're trying to reduce the level of risk that we have in our systems, recognizing that the complexity and connectivity will likely prevent us from ever having a fully secure system, at least with the state of technology today, how we use that technology, and our ability to provide a defense-in-depth strategy.

I just introduced this topic to give you a frame or a context for how we've evolved to where we are today. On calling for new metrics, different metrics: there are several groups out there, and this may be reflective of what you were hearing from some of the senior IT professionals. The question that they put on the table is that, in their view ... FISMA compliance doesn't necessarily imply or mean you've got a secure system. You can check all the boxes and still not have a secure system. I would agree with that, because there really is no such thing as a secure system. We can reduce our risk to a good degree, to a manageable degree and a tolerable degree, but there is never a hope at this point of nailing everything down all the time. Perfection is just unachievable at this point, because the defense is always more difficult than the offense. The threats, the adversaries, they can pick the time, the place, the intensity of the attack; they have the capabilities, the resources, the intentions, all the things that characterize the threat space, and we have to defend 360 degrees all of the time. That's a tough order. Some of the things that we're trying to do, and you've seen some of this -- are you familiar with the FDCC project, Federal Desktop Core Configuration?

CHABROW: Yes.

ROSS: Well, it kind of goes to a very basic principle of computer security, and this is really in contrast to the way vendors typically deliver products. Our systems are composed of many different types of commercial products, and some of them are not commercial; some of them are GOTS (government off-the-shelf) products; most of them are commercial off-the-shelf. Vendors tend to deliver products that I call wide open: maximum functionality. Security folks like to come at it from the opposite end of the spectrum. We like the concepts of least privilege and least functionality. In other words, ports, protocols, services, functionality: only turn on the things that you need to accomplish the mission.
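
As a toy illustration of that least-functionality principle, with hypothetical service names and allowlist: compare what a host exposes with what the mission actually needs, and flag everything else for disabling.

```python
# Least functionality in miniature: anything enabled but not required is a
# candidate to turn off. Service names and the allowlist are hypothetical.

REQUIRED = {"https", "ssh"}                    # what the mission needs
ENABLED = {"https", "ssh", "telnet", "ftp"}    # what the host exposes

for service in sorted(ENABLED - REQUIRED):
    print(f"disable {service}: not required to accomplish the mission")
```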

The Federal Desktop Core Configuration can be loosely described as the ability for us to work with vendors to describe good ways to configure information technology products ... the switches you can flip within these products to either enable or disable certain capabilities. For example, if you've got a flash drive that you plug into your laptop and there's an auto-execute command, whatever software is on that flash drive gets automatically executed. That's a setting that can be either enabled or disabled, and for these kinds of settings, OMB now has a mandatory set under the Federal Desktop Core Configuration, which is required for all federal desktop computers.
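
To make the flash-drive example concrete, here is a sketch of checking one such switch on Windows: whether AutoRun is disabled for all drive types. The registry path and the 0xFF "disable everywhere" value are the commonly documented Windows policy mechanism, but treat the expected value as an assumption; the actual FDCC baseline defines its own required settings.

```python
# Check a single FDCC-style setting: is AutoRun disabled on all drive types?
# Windows-only sketch; the expected value here is an assumption, not the
# official FDCC baseline document.

import winreg

POLICY_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"

def autorun_disabled() -> bool:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY) as key:
            value, _ = winreg.QueryValueEx(key, "NoDriveTypeAutoRun")
    except FileNotFoundError:
        return False  # no policy set; AutoRun falls back to OS defaults
    return value == 0xFF  # 0xFF disables AutoRun for every drive type

if __name__ == "__main__":
    print("compliant" if autorun_disabled()
          else "deviation: AutoRun not fully disabled")
```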

Testing Configuration Effectiveness

What we're trying to do there is close down what we call attack vectors. Attack vectors are, in the military sense, avenues of approach for an adversary to attack and compromise your system. These configuration settings are intended to narrow down the avenues of approach, or attack vectors, that adversaries can throw at us, and that's really an articulation of the concepts of least privilege and least functionality. That's a very good thing. We're also employing automated tools to test our workstations and our desktop configurations to see if those are in compliance with OMB requirements. You're seeing here another area of metrics, allowing us to test the effectiveness of our configuration settings, which is really down at the ground level, where everything comes into reality with regard to security. That's another area that's been very successful, and we've tied a lot of the desktop configuration settings work and the automated tools to several large programs that NIST has been involved in -- the Common Vulnerabilities and Exposures project, the National Vulnerability Database -- tying those configuration settings back to specific security controls that are in our security controls catalog, the 17 families that I mentioned earlier. These are all very positive steps in the area of metrics, where we're making great progress.
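
A sketch of the kind of automated check described above: compare a host's collected settings against a mandated baseline and tie each deviation back to a control in the catalog. The setting names, baseline values and control mapping are hypothetical; real scanners exchange this data in SCAP formats rather than Python dictionaries.

```python
# Compare collected host settings against a mandated baseline and report
# deviations, each tied to a (hypothetical) related control.

BASELINE = {
    "autorun_disabled": True,
    "password_min_length": 12,
    "guest_account_enabled": False,
}

CONTROL_MAP = {
    "autorun_disabled": "CM-7 (least functionality)",
    "password_min_length": "IA-5 (authenticator management)",
    "guest_account_enabled": "AC-2 (account management)",
}

def scan(host_settings):
    """Yield (setting, expected, actual, related control) for each deviation."""
    for setting, expected in BASELINE.items():
        actual = host_settings.get(setting)
        if actual != expected:
            yield setting, expected, actual, CONTROL_MAP[setting]

host = {"autorun_disabled": True, "password_min_length": 8,
        "guest_account_enabled": False}
for deviation in scan(host):
    print(deviation)
```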

There is still a debate on whether we are assessing the right things at the right time, and that's why I brought up the fact that FISMA compliance is not only about making sure the NIST standards and guidelines are implemented, but also about how OMB gets the feedback back up to their offices on how the organizations are doing. Every year, OMB defines what they call their FISMA reporting guide, and there's a long checklist of questions and things that they look for within the agencies.

Compliance could be interpreted as meeting the OMB checklist requirements; it could also be interpreted as meeting the NIST standards and guidelines. I think that's pretty much where the debate lies at this point. Getting back to the fundamental question that you posed: complying with the checklist that FISMA requires does not always guarantee a secure system, for all the reasons I outlined. What I can say, though, is that complying with the provisions of FISMA, which include the standards and guidelines, will by definition make your system more secure. Whether it's fully secure, I don't believe that's going to be achievable, but it's certainly open for debate.


About the Author

Eric Chabrow

Retired Executive Editor, GovInfoSecurity

Chabrow, who retired at the end of 2017, hosted and produced the semi-weekly podcast ISMG Security Report and oversaw ISMG's GovInfoSecurity and InfoRiskToday. He's a veteran multimedia journalist who has covered information technology, government and business.



