RSA 2010: Warren Axelrod on Information Security
Axelrod is currently executive advisor for the Financial Services Technology Consortium. Previously, he was a director of Pershing LLC, a BNY Securities Group Co., where he was responsible for global information security. He has been a senior information technology manager on Wall Street for more than 25 years, has contributed to numerous conferences and seminars, and has published extensively. He holds a Ph.D. in managerial economics from Cornell University, and a B.Sc. in electrical engineering and an M.A. in economics and statistics from Glasgow University. He is certified as a CISSP and CISM.
TOM FIELD: Hi, this is Tom Field, Editorial Director with Information Security Media Group. I am here at the RSA Conference, and I am talking with Warren Axelrod, who is affiliated now with the Financial Services Technology Consortium.
Warren, it is a pleasure to talk with you.
WARREN AXELROD: My pleasure, too, Tom.
FIELD: Warren, why don't you give us a sense of everything you are involved with these days because I know you are wearing multiple hats, and you have got multiple projects going on.
AXELROD: Well, the main one is the Software Assurance Initiative that I am managing for FSTC, and that is a broad look at application security and software assurance, both in terms of the development lifecycle, operations, testing, and potentially setting up an industry lab to test the highly used critical software. And what we have done is we brought together groups from some of the major financial institutions, from academia, such as Carnegie Mellon that is obviously big on software engineering, and also companies such as Microsoft, Northrop Grumman and so forth, and also some of the government regulatory agencies, although they are not participating, they are listening in.
What we are trying to do is establish preferred policies and practices for financial services as a whole in the software assurance area. The reason for this is that the regulators are increasingly looking at the application layer, which traditionally has been very neglected, and in some cases they are specifying in quite a level of detail what financial institutions would be expected to do.
So for instance, the OCC, the Office of the Comptroller of the Currency, in 2008 put out a bulletin purely on software assurance and application security. And they went into such detail that, in one of the appendices, they listed the top 10 OS vulnerabilities, which to me is amazing in a regulatory guidance document.
The other area where there is a lot of oversight activity is on the payments side: the PCI Data Security Standard, particularly Section 6, which looks at application security, and this is an evolving area for them. I think there is just a general recognition that many of the successful exploits occur at the application level, and that traditional information security professionals have not been strong in that area.
One of the reasons for that is that the knowledge required is much broader. You have to understand the business; you have to understand software design and development and testing, as well as some of the broader security issues. Traditionally, security professionals have come out of the engineering network and systems area with very little applications experience. So there has been a lot of evangelizing by people such as Gary McGraw to try and get that idea across.
FIELD: Now, Warren, you have been in information security for a number of years now. We are seeing a lot of fraud, we are seeing payments issues as you discussed. What are the security concerns most top of mind for you these days, especially as related to financial services?
AXELROD: Well, I think there is really one area that stands out to my mind, and that is the insider threat. My own belief is that it is grossly underreported, and there are many reasons for that. Probably the main reason is that a lot of it goes unrecognized. We often don't have monitoring tools in place that are sophisticated enough to catch the insider, and the insider, in general, is authorized and authenticated on the system, is given a lot of privileges, and can operate under the radar in many regards. It is hard to detect because it is usually only after the fact that some anomalous behavior is recognized.
One of the reasons I believe it is understated is that most of the time, when you find out about fraud, either internal or external, it is not from the institution that was hacked. So you have ChoicePoint, you have Heartland, and a number of others, and you look at where the recognition that they had been compromised came from. It usually comes from the Visas or MasterCards, who start detecting fraud and then trace it back to a breach at a particular service provider, and then they have to go through a whole forensics effort in order to determine what actually was done.
So, I think there is a real issue with firms, particularly financial firms, being able to detect this fraudulent or malicious behavior, however you define it, in real time; and if they don't detect it in real time, it is also very necessary to be able to go back forensically and see what actually happened. I am actually going to speak to this on Friday. Take a situation like ChoicePoint: it was a really bad breach, but it was exacerbated by management not knowing when and where and which accounts had actually been breached. It took many months, and they kept changing their estimates, and that was probably worse for their publicity, and for how customers felt about them, than the actual breach itself.
FIELD: That's a good point. Two areas I want to ask you about in particular, and the first is ACH fraud, because we are seeing a big rise in that now, where small to mid-sized business customers are finding their banking credentials compromised, and they are losing tens or hundreds of thousands of dollars. What can financial institutions do to help their customers so that they aren't being taken advantage of by the fraudsters?
AXELROD: Well, my own view is that too much of what we do in information security focuses on big companies. For example, I was at one of the sessions with Gary McGraw from Cigital and David Ladd from Microsoft, and they talked about the BSIMM, the Building Security In Maturity Model. Gary was very specific. He said, "We deal with huge companies," and they do. Their sample of some 30 companies is all very big companies, and David Ladd, who is one of the promoters of the Microsoft SDL, mentioned that these things have to be scalable; it is not one size fits all.
What really happens, in my view, is that the smaller institutions, whether they are smaller financial institutions or just regular corporations, are given short shrift in terms of how to protect themselves. One of the issues is that clearly they can't afford the full blown security programs and employees that a big institution can, but the other thing is they don't recognize that the risk is as great as it is to them.
So my feeling is that they should be looking more to outsourcing for some of these security services rather than trying to build their own, which makes no economic sense. But then you get into the risk of outsourcing some of these critical functions to third parties and how you control that, because you may in fact be jumping out of the frying pan into the fire. You are trading off one form of attack for another, which actually becomes a virtual insider attack. So the nature of the risk could change unless you do it properly.
FIELD: One more area I wanted to ask you about is regulatory reform. You were talking about the OCC's bulletin from a couple of years back. I think there is an expectation now that regulatory reform is coming in banking sooner or later. How do you think that is going to impact information security as opposed to just the business part of banking?
AXELROD: Well, I felt for a long time, Tom, that regulation and compliance, legal aspects of the business and as they relate to security/privacy, are in fact the biggest drivers, and there is no question that there is a move towards more security enforced through government mandate.
For instance, I saw some recent Senate hearings; Senator Jay Rockefeller is trying to push the cybersecurity bill. There are two aspects to government involvement. One is that by making certain things mandatory, those areas are fortified to a much greater extent.
The problem is that the areas of focus of the regulators are not necessarily, and in fact are often not, the ones where the greatest risk is. Because the regulators for example are very much driven by their constituents who are subject to identity theft and fraud, and so there is a bias toward that, and perhaps too much of the funding goes toward that and not toward other areas which become the next vector for attacks.
Now, the banking regulators in particular have expanded into authentication methods, offering basic rather than specific guidance on strong authentication. But it is well known that with, for instance, the PCI standards, you can be compliant but still be subject to an attack or a breach. So my view is that regulation does play a part, but it is not the answer to everything; it is necessary, but not sufficient.
Companies have to recognize that they have to add to that, even if they don't believe that the regulators' direction is the best one. I have something that I call regulatory risk: the risk of non-compliance with the regulators. The reason you comply is that it is very painful not to, and that is a very tangible pain that can be expressed to management. The pain of a breach, the reputational loss, those kinds of things are really kind of fuzzy, difficult to put your finger on, and perhaps areas where you can take a risk and delay doing anything; but the regulatory side is not one of those.
So in general, I think that governmental direction is helpful. I think that the multi-state privacy notification laws are a disaster, so there is a strong case for a single federal law to supersede them, which would be much easier to deal with: you would just have to comply with one set of standards rather than many. A lot of the larger financial institutions clearly are in every state, and what many of them do is take what you might call the highest common denominator: they take the most stringent state law and apply it everywhere, so that they won't be deficient in any one area, which is not very efficient to start out with.
And there is a real question, although I haven't followed it fully: the Massachusetts law is very prescriptive, and it doesn't necessarily reflect the areas of greatest risk. My own view on data loss is that if the data is truly lost, it isn't necessarily a bad thing. If the data was stolen, then you know that the criminals are after that specific data.
So much of what is reported either turns out not to have been an event or a privacy breach, as with the Veterans Administration laptop that was stolen. In many cases discs or tapes are lost, and the people who find them, even people who find laptops, are not interested in the data. So what we are doing, in effect, is overcompensating for the cases where there are very specific breaches.
FIELD: Now, Warren, I was surprised to find that you were speaking at the RSA Conference for the very first time. Congratulations for that.
AXELROD: Thank you.
FIELD: Give us a preview of what you are going to be talking about when you speak tomorrow.
AXELROD: Okay. As I mentioned before, I am managing the Software Assurance Initiative for FSTC, and we have some really bright folks on the working group. We were having one of our working group meetings, and the topic came up of being able to measure the strength of applications. The reality is that there is a real lack of what is called instrumentation, or data collection, in applications, where the data can be used for security, for breach determination, for detecting anomalous behavior, and so on; so the presentation came out of that discussion.
My presentation is essentially: build data collection into your applications. Now, the common secure software development lifecycle has certain security principles woven through it, such as secure design, secure coding practices, and testing for security. All of those are the types of things that a security person with some application knowledge can generally do.
The idea is that there should be a person who understands the business use of the application and how applications are developed, as well as the security aspects, who should be in on the design stage and say, "Well, we really want to know, when somebody comes into the system, what they touch: which functions they touch, which data they touch, what they do with it, how they hop through the systems. All of these are indicators of activity, and potentially malicious activity, that we don't have any sense of because we are just not collecting the data."
I mean, this is just like the Toyota situation, where one of the difficulties in defining the problem they are having is that the black box is relatively new technology. And then there is the question of whether they will share the data, which is actually a common issue with security in general (information sharing). But you need the instrumentation, and in order to design the instrumentation, you have to know what you are doing from the security perspective.
So it is really a very simple concept, but something that is pretty well neglected because you don't have a lot of people who have this broad view of the business use and what is the appropriate use of the application, and at the same time understand where the vulnerabilities might be.
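[Editor's note: A minimal sketch of the kind of application-level instrumentation Axelrod describes, recording which user touched which function and which data, might look like the following. All names here (the `audited` decorator, `AUDIT_LOG`, `view_account`) are hypothetical illustrations, not part of any FSTC work product; a real system would write to tamper-evident storage or a SIEM rather than an in-memory list.]

```python
import time
from functools import wraps

# Hypothetical in-memory audit trail; real deployments would ship
# these records to durable, tamper-evident storage or a SIEM.
AUDIT_LOG = []

def audited(func):
    """Record who called the function, when, and which data fields were touched."""
    @wraps(func)
    def wrapper(user, *args, **kwargs):
        AUDIT_LOG.append({
            "ts": time.time(),          # when the access happened
            "user": user,               # authenticated identity
            "function": func.__name__,  # which function was touched
            "data_keys": sorted(kwargs.keys()),  # which data was touched
        })
        return func(user, *args, **kwargs)
    return wrapper

@audited
def view_account(user, account_id=None):
    # Business logic would go here; the decorator records the access.
    return f"balance for {account_id}"

view_account("alice", account_id="12345")
```

Collected this way, the trail supports exactly the after-the-fact forensic reconstruction Axelrod says firms like ChoicePoint lacked: which accounts were touched, by whom, and when.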
FIELD: Very good. You have been at the event this week. What are the resonant themes that you hear coming out in your conversations?
AXELROD: Well, I mentioned to you earlier that it is like drinking from a fire hose; there is so much coming at you.
There was a very interesting presentation from somebody with Amazon Web Services talking about cloud security, and this is a big issue. My feeling, and what really came out of the presentation by Steve Riley, was that these cloud services providers are willing to talk. You have the customers on one side saying, "Oh, it is terrible, we don't know where the information is, we don't know what it is running on, we don't have good security," and so forth, and it is all hypothetical. If they would sit down with the cloud services providers and say, "These are my issues; can you help me address them?" the providers are waiting for that. He really said, "Come talk to me, and let me know what you are looking for."
So I think for the adoption of cloud services by financial services in particular, that dialog has to take place because you can secure the cloud, you can meet all the regulatory requirements, you can carve out part of the systems and even the real estate if you talk to them. It may cost a little more, but you still save a bundle on the overall services and support. So I think that is one of the major issues.
I think there is a lot of misrepresentation of the cloud, with people saying there is not a lot that is new; I like to call it outsourcing on steroids. In structure it may not be all that new, but in practice it is an entirely different model, and you have to understand the economic workings of the model, the risks and so forth, and you have to start addressing those, certainly on the virtualization side. That is not being done yet.
I think a lot of people are running around saying either this is the next great thing, or it is nothing different, or, well, we have got to worry about security and privacy. Yeah, but do something about it; don't just say that it is an issue, and engage in the dialog with the providers. Because if you don't do that, they are imagining one thing and you are imagining a different thing, and you are not going to have a meeting of the minds.
FIELD: Warren, very well said. I appreciate your time and your insight today.
AXELROD: My pleasure. Thank you.