Managing Unintentional Insider Threats
Researcher Randy Trzeciak on How to Mitigate the Risks
How can organizations mitigate the risks posed by unintentional insiders who, by mistake or through social engineering, compromise sensitive information? The strategy requires a combination of technical and non-technical solutions, says researcher Randy Trzeciak.
In the case of social engineering, organizations can introduce technical controls that could help minimize the impact of, for example, an employee clicking on a phishing e-mail and allowing malware onto the network, says Trzeciak, senior member of the technical staff at the CERT Insider Threat Center within the Software Engineering Institute at Carnegie Mellon University.
"But also it could be in the form of security awareness training, training your employees, contractors and subcontractors on what could be a suspicious e-mail and what you should do if you encounter or are presented with a suspicious e-mail," Trzeciak says in an interview with Information Security Media Group [transcript below].
For more than a decade, researchers have studied the impact of malicious insiders. The unintentional insider threat has only recently come under scrutiny. According to the Insider Threat Center, the unintentional insider threat is defined as:
"A current or former employee, contractor, or business partner who has or had authorized access to an organization's network, system, or data and who, through action or inaction without malicious intent, causes harm or substantially increases the probability of future serious harm to the confidentiality, integrity, or availability of the organization's information or information systems."
"Most people don't intend to disclose information," Trzeciak says.
Still, organizations can have measures in place to ensure employee mistakes don't become a larger problem.
In the case of an employee taking a laptop out of the organization, if it gets lost or stolen, the organization can minimize the impact by having controls such as full-disk encryption in place so the information on the device can't be compromised, Trzeciak says.
In an interview on this latest insider fraud research, Trzeciak discusses:
- Fundamental technology controls to mitigate insider risks;
- Results of a new international insider threat study;
- Best practices in identifying and responding to insider threats.
Trzeciak heads a team focusing on insider threat research; threat analysis and modeling; assessments; and training. He has more than 20 years' experience in software engineering; database design, development, and maintenance; project management; and information security. He also is an adjunct professor at Carnegie Mellon's Heinz College, Graduate School of Information Systems and Management. Trzeciak holds an MS in Management from the University of Maryland and a BS in Management Information Systems and a BA in Business Administration from Geneva College.
Edward Snowden and Insider Threats
TOM FIELD: Everybody has been talking about the insider threat since the development of the Edward Snowden situation. From your perspective, what attention has this brought to the topic you've been researching for so long?
RANDY TRZECIAK: Any time there's a high-profile case involving matters of security and national security, you tend to expect increased awareness - people want to identify what the threat is and what the impact to the organization is, based on what this insider, or another insider, did or did not do in a particular case.
As we've done research over the years, we've found that many of the incidents are handled internally by organizations and really don't involve law enforcement. But on occasion, when there has been significant impact to organizations that do involve law enforcement, many times those are picked up and reported through the media.
In terms of increased awareness, a high-profile case certainly provides some value, not just for the organizations directly impacted but for others as well. But from an organization's standpoint, they should really be concerned about protecting their assets - critical information, critical technologies, facilities, people - and they need to protect them from a number of threats, which include insider threats but also external threats as well.
Unintentional Insider Threats
FIELD: I know you've done a number of research projects recently. I want to ask you about some of them. The first one is the unintentional insider threat. Based on your latest research, what do you find to be the characteristics of the unintentional insider?
TRZECIAK: For years we've been doing research at the Insider Threat Center focused on the malicious insider. We think we've done a pretty decent job of describing the motives and impacts of insiders who intend to harm organizations. But from an organization's standpoint, they really need to be concerned about impacts to their critical assets whether there's malicious intent or not.
We've focused on the unintentional insider threat, and we use a similar definition when defining it. We're concerned with individuals in an organization, which still includes current or former employees, and we include contractors and trusted business partners in that definition as well. What really differentiates the unintentional from the malicious insider threat is intent: unintentional insiders have authorized access to networks, systems or data and, through some type of action or inaction and without malicious intent, they cause harm to the organization's critical assets.
Some of the things we found interesting in collecting these incidents: we use an empirical-data approach, trying to find as many incidents as possible, code them in our database here, and then analyze them to look for patterns. After we collected a number of non-malicious, unintentional insider threat cases, we broke them down into a few primary categories. The first we would categorize as insider negligence, where we see impacts to the organization along the lines of accidental disclosure. For example, if I take some type of device off of the corporate network and I happen to lose that device, that's an accidental disclosure. There wasn't malicious intent, but something the individual did or didn't do allowed the disclosure of information.
The second categorization focuses on some type of malicious code - almost a hacking type of incident where an insider is involved but without malicious intent. For example, social engineers outside the organization send a phishing e-mail and the insider opens it. Another example might be that someone provides an employee a USB device, and some type of malicious code is introduced onto the network or onto a system.
The third category relates to physical security, such as the loss of physical records - paper documents that were lost or stolen. We break the cases down into those three categories to fit what we've seen. Once we describe those impacts to the organization, we then try to offer mitigation strategies to help organizations going forward.
Unintentional Insiders: Key Findings
FIELD: Based on those characteristics and those three categories you described, what would you say you emerged with as key findings about the unintentional insider?
TRZECIAK: From an organizational standpoint, there are things organizations can do to reduce the risk that someone unintentionally harms their assets. Some of it could be technologies or controls they introduce. But we also found there are information technology best practices, in addition to organizational best practices, that should be put in place - things such as security awareness training, or improving the way communication is conducted across the organization to raise awareness of the insider threat and to include non-malicious insiders in that threat model as well.
For example, if we were to use the first example of the accidental disclosure, most people don't intend to disclose information. But in the example we talked about before, where someone takes a laptop off the organization network and it does contain confidential or sensitive information, [protection strategies could be put on that device that, if the laptop was lost or stolen] would not allow the person to access the information. It could be technical controls, full-disk encryption or other types of strategies where the information could only be made available once it's connected to the network.
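As a simple illustration of the kind of control Trzeciak describes for lost or stolen laptops, a minimal endpoint-compliance check might refuse to stage sensitive data onto any device whose inventory record does not confirm full-disk encryption. The inventory schema and function name below are hypothetical, not any specific product's API:

```python
# Illustrative endpoint-compliance check (hypothetical inventory schema):
# only stage sensitive data onto devices whose record reports
# full-disk encryption as enabled.

def can_stage_sensitive_data(device: dict) -> bool:
    """Treat any device without a confirmed encryption flag as non-compliant."""
    return device.get("full_disk_encryption") is True

# Example inventory records (made up for illustration)
laptops = [
    {"id": "LT-001", "full_disk_encryption": True},
    {"id": "LT-002", "full_disk_encryption": False},
    {"id": "LT-003"},  # unknown status fails closed
]

compliant = [d["id"] for d in laptops if can_stage_sensitive_data(d)]
print(compliant)  # ['LT-001']
```

Note the fail-closed design: a device with an unknown encryption status is treated the same as an unencrypted one, which matches the goal of minimizing impact if the device is later lost or stolen.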
If we think about other types, like social engineering, the malicious code and the phishing e-mail attempts that come about, there could be technical controls that could prevent the impact of someone clicking on a phishing e-mail and the malware getting onto the network. But also it could be in the form of security awareness training, training your employees, contractors and subcontractors on what could be a suspicious e-mail and what you should do if you encounter or are presented with a suspicious e-mail.
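To make the phishing example concrete, here is a toy sketch of the kind of heuristic a technical control might apply before a message reaches an employee. The phrase list and look-alike domain patterns are invented for illustration; real mail gateways combine content analysis, sender authentication (SPF/DKIM/DMARC) and URL reputation:

```python
import re

# Toy heuristics, invented for illustration; not a real filter.
SUSPICIOUS_PHRASES = ("verify your account", "urgent action required", "click here")
LOOKALIKE_DOMAINS = re.compile(r"@\S*(paypa1|micros0ft|secure-login)", re.IGNORECASE)

def looks_suspicious(sender: str, subject: str, body: str) -> bool:
    """Flag a message containing common phishing phrases or a
    look-alike sender domain."""
    text = f"{subject} {body}".lower()
    if any(phrase in text for phrase in SUSPICIOUS_PHRASES):
        return True
    return bool(LOOKALIKE_DOMAINS.search(sender))

print(looks_suspicious("billing@paypa1-support.com", "Notice", "hi"))  # True
print(looks_suspicious("alice@example.com", "Lunch?", "See you at noon"))  # False
```

The same tells the filter checks for - urgency phrases and misspelled domains - are exactly what security awareness training teaches employees to spot manually.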
FIELD: I'm glad you talked about controls. If you were to talk about technical controls organizations could put in place to avoid the pitfalls of social engineering, for instance, or to help to track these mobile devices that get lost and stolen, what would you say are fundamental technical controls or even process controls that organizations need to have in place?
TRZECIAK: ... It really needs to start with the organization identifying what its critical assets are. For example, if an organization is concerned about a data disclosure event that would compromise a key piece of information, certainly the protection strategy or control should focus on preventing data from leaving the network and harming the confidentiality of that particular critical asset.
If you're trying to protect information from leaving an organization, there are a number of protection strategies that may be effective. One category is tools such as data loss prevention, which stop information from leaving. Another strategy that may be effective is digital rights management; that category of tools only allows information to be accessed while on the corporate network. There are a number of controls that can be effective in preventing a data disclosure event.
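At its core, the data loss prevention category Trzeciak mentions is pattern-matching on outbound content. This is a deliberately naive sketch with made-up rules (a U.S. SSN pattern and a classification marking); commercial DLP tools use far richer fingerprinting and context:

```python
import re

# Patterns a naive DLP rule set might look for in outbound
# e-mail or file transfers (illustrative, not exhaustive).
DLP_RULES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # U.S. Social Security number
    "confidential_marking": re.compile(r"\bCONFIDENTIAL\b"),
}

def scan_outbound(text: str) -> list:
    """Return the names of every DLP rule the text trips."""
    return [name for name, pattern in DLP_RULES.items() if pattern.search(text)]

print(scan_outbound("Report marked CONFIDENTIAL, SSN 123-45-6789"))
# ['ssn', 'confidential_marking']
```

An empty result means the message passes; any hit would typically block or quarantine the transfer for review rather than silently drop it.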
Another type of incident is the example we talked about before: malicious code being introduced onto the network and systems. A number of controls can be put in place to prevent malicious code from being downloaded from a website or introduced through a USB device, including controls that would not allow an unauthorized device onto the network. It's really focused on an organization's risk profile - what they're trying to protect. If you're trying to prevent a data disclosure event, you would use one category of tools; for incidents that might be grouped as IT sabotage, there's another category of tools; and a third set of tools may be effective when you're trying to prevent or detect fraudulent activity on your network and systems.
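The "no unauthorized device" control above usually comes down to a deny-by-default allowlist. The sketch below assumes a hypothetical policy keyed by USB vendor:product IDs (the IDs are made up); in practice this would be enforced by the operating system or endpoint agent, not application code:

```python
# Hypothetical allowlist of approved removable media, keyed by
# USB vendor:product ID (IDs below are made up for illustration).
APPROVED_DEVICES = {"0781:5583", "0951:1666"}

def is_device_allowed(vendor_id: str, product_id: str) -> bool:
    """Deny by default: only explicitly approved devices may mount."""
    return f"{vendor_id}:{product_id}" in APPROVED_DEVICES

print(is_device_allowed("0781", "5583"))  # True
print(is_device_allowed("abcd", "1234"))  # False
```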
International Best Practices
FIELD: I want to go into another direction now and talk about another research project that you worked on. I know that you recently compiled some best practices across multiple nations. What was the genesis of this particular research project?
TRZECIAK: One of the things we've been asked over the years is to provide guidance to organizations that have departments and organization units outside the U.S. One of the valuable resources we provide is describing how insider incidents traditionally occur within organizations. But up until this point, we really only focused on incidents that occurred in the U.S. What we've been asked is whether we could provide guidance and recommendations for organizations that have international operations.
One of the recommendations we made is for organizations to consider the Common Sense Guide to Mitigating Insider Threats, which contains the best practices we've released for organizations. In that document, which is available on our website, we outline 19 best practices we recommend organizations consider when trying to mitigate insider threats. What we've done is gone through those 19 best practices and asked organizations to consider that there may be international considerations when deploying controls, conducting security awareness training or otherwise trying to protect critical assets. ... That particular effort - best practices against insider threats in all nations - really asks organizations to consider implementing the 19 best practices and then look at the international considerations that may apply before implementing those practices in organization units outside of the U.S.
Changing Threats Across Geographies
FIELD: How do you find that the insider threats and controls vary if at all across the varying geographies?
TRZECIAK: One of the things we've tried to do is describe to organizations the technical and non-technical observables they should look for when trying to identify people who may be at high risk of harming critical assets. When we describe insider threats within the U.S., as you've seen in past publications, we tend to describe the different impacts to the organization. One impact could be an IT sabotage event: we tend to describe a disgruntled system administrator who's trying to get revenge against the organization for a perceived injustice. That looks different from someone who steals intellectual property. We've seen that people who steal intellectual property tend to take it within 30 days of announcing they're going to leave the organization, and they tend to take key intellectual property that gives them a business advantage elsewhere - going to a competitor, starting a competing organization or, in some cases, benefiting a foreign organization or a foreign government. That looks different from the saboteurs, and it looks different from people who defraud the organization; those people are motivated by financial gain, tend to act over a longer period of time, and impact the financial bottom line of the organization.
When we describe those incidents in terms of the impacts to the organization and the observables, both technical and non-technical, those tend to be pretty consistent across all organizations, whether they're in the U.S. or outside the U.S.
Where we're seeing organizations challenged is in the actual implementation of controls outside the U.S. When you look at what you can do from a U.S. perspective in implementing monitoring or protection strategies, there are certainly U.S. laws and regulations that need to be considered. What we ask organizations to do, if they're deploying those same types of technologies, controls and monitoring strategies outside the U.S., is to consult legal counsel in those locations prior to deploying them. From an international perspective, some organization units outside the U.S. have less oversight and control, but some have significantly more in terms of protecting the privacy and civil liberties of the employees, contractors and subcontractors within those organization units.
Key Best Practices
FIELD: What were some of the key best practices that you identified that really transcend national borders?
TRZECIAK: The first best practice we recommend is to consider what you're trying to protect - knowing your assets, best practice six in the Common Sense Guide to Mitigating Insider Threats. It all starts with the organization knowing what it's trying to protect. When we describe protection strategies, they need to focus on four key critical asset types: protecting your people, your facilities, your information and your technologies. It's up to the organization to know and prioritize exactly what it's trying to protect, who has authorized access to it, and who should have authorized access to it. Regardless of where the organization unit is, it's critical that they know what their assets are, which then helps them put the right protection strategies in place.
Then there's a different best practice, number 10, which is to institute stringent access controls and monitoring policies on privileged users. A privileged user is certainly a vulnerability that could be exploited if there are not appropriate controls. Do you have the appropriate controls to protect your information assets from people with privileged access, like someone within your IT department? Many organizations challenge their organization units to implement separation of duties in business processes; we're asking organizations to consider implementing those same types of separation of duties, or dual controls, within their IT departments as well.
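The dual-control idea for privileged IT actions reduces to a small invariant: no privileged change proceeds without sign-off from two people independent of the requester. A minimal sketch, with hypothetical names, might look like this:

```python
def dual_control_satisfied(requester: str, approvers: set) -> bool:
    """Require at least two approvers, neither of whom is the requester."""
    independent = set(approvers) - {requester}
    return len(independent) >= 2

# A privileged change request needs two independent sign-offs:
print(dual_control_satisfied("alice", {"bob", "carol"}))  # True
print(dual_control_satisfied("alice", {"alice", "bob"}))  # False: self-approval
```

Discarding the requester's own approval before counting is what makes this separation of duties rather than a simple approval count.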
Finally, we recommend you develop a formalized insider threat program, and that needs to be inherent in all organization units enterprise-wide. How do we identify individuals who may be more at risk? If we identify suspicious activity, is there a formal process to follow for incident identification, remediation and, in some cases, recovery? And what's the formal program for sharing information across the organization - information that would not necessarily be shared without a formal program or agreement in place to protect the civil liberties and privacy of employees?
Insider Threat Programs
FIELD: If I could follow up on that, please: Do I understand correctly, from a previous conversation with you, that you're seeing more entities now pressuring organizations to stand up a formal insider threat program?
TRZECIAK: We're seeing organizations being asked to consider implementing formal programs, certainly within the U.S. Government organizations needed to respond to Executive Order 13587, the White House directive released in October 2011 asking agencies to stand up a formal insider threat program. Certainly those organizations are required to stand up programs, and that really is at the crux of trying to protect classified information. The U.S. government has asked the departments and agencies to stand up formal programs.
As we start looking at the organizations that support the federal government, including contractors, there's guidance that should be coming out in the foreseeable future asking those organizations to stand up formal insider threat programs as well. That's certainly a key area we're looking at for the next generation of research: how can we assist organizations, both within the government and outside of it, to stand up an insider threat program, and provide ways to assess those programs to see whether they're effective or [find] ways they can improve the programs in place now?
Upcoming Research Projects
FIELD: Final question, and you hinted at the answer a few minutes ago when you talked about the research into organizations standing up an insider threat program. What are your next topics of research?
TRZECIAK: Certainly that's something we're looking into today. How can we provide guidance for organizations to stand up programs? How can we provide ways for organizations to assess how effective their programs are? We've continued to do research on giving organizations the ability to do insider threat and vulnerability assessments. That's something we'll continue to build and offer as services to help organizations.
Our foundation here is empirical data: collecting incidents, analyzing them and looking for common patterns we can describe to organizations - patterns they should consider when trying to prevent or detect malicious or even unintentional incidents within their organization. I believe that will be our next area of research over the six- to twelve-month timeframe for our insider threat program here at CERT.