In The Boardroom™ With...
Mr. John Diamant
Distinguished Technologist, CISSP, CSSLP
HP Secure Product Development Strategist & U.S. Public Sector Application Security Strategist
HP Enterprise Services, U.S. Public Sector, Cybersecurity Solutions
SecuritySolutionsWatch.com:
Thank you for joining us again today, John. It's been more than a year since our first meeting, when we spoke about the growing business need to architect and design security up-front during the application development process in order to avoid costly application security vulnerabilities and weak enterprise security. During this past year, we’ve seen a growing wave of cyber attacks that are more sophisticated and effective than ever before. What are your thoughts about the current cyber threat environment?
John Diamant:
Attackers are better funded and more motivated today than they were in decades past. It used to be that the primary motivation of attackers was fame, or simply to prove that they could successfully exploit a vulnerability. Today, a large percentage of attacks are motivated by identity theft, credit card theft, industrial espionage, cyber warfare or “hacktivism”, and some are Advanced Persistent Threats (APTs). In these cases, the motivation is significant financial, political or strategic gain and, as a result, the attackers are often well funded and more sophisticated. Because of that, reliance on security through obscurity doesn’t work, and cybersecurity defenses need to be considerably stronger than in the past, as the effectiveness of the defenses needs to rise with the motivation and funding of the attackers.
We’ve also seen an increased media focus on prominent, prevalent vulnerabilities (the concern being that these prevalent vulnerabilities can become the avenue of future attacks), most recently with Heartbleed (the OpenSSL vulnerability) and Shellshock (the Bash vulnerability).
SecuritySolutionsWatch.com:
Are you still seeing an imbalance between where the exploits happen and where the security spending occurs?
John Diamant:
Yes, although the trend is in the right direction. Some encouraging signs include the creation of the Certified Secure Software Lifecycle Professional (CSSLP) certification a few years ago, the Open Software Assurance Maturity Model (OpenSAMM), the National Institute of Standards and Technology’s (NIST) recent Special Publication 800-160, Systems Security Engineering: An Integrated Approach to Building Trustworthy Resilient Systems, and the U.S. Department of Homeland Security’s (DHS) Build Security In program, including its focus on Resilient Software.
A few years ago, we’d seen indications that the percentage of IT security spend devoted to application security was in the low single digits. Today, it’s risen to around 10 percent. While that’s movement in the right direction, it’s still a significant underinvestment, given that the vast majority of successful exploits still have software vulnerabilities (like Heartbleed and Shellshock) as their underlying cause.
SecuritySolutionsWatch.com:
Can you tell us more about NIST Special Publication 800-160, Systems Security Engineering: An Integrated Approach to Building Trustworthy Resilient Systems and DHS’s resilient software initiatives?
John Diamant:
Certainly. First, NIST SP 800-160 is, as its subtitle indicates, an approach to building trustworthy, resilient systems. It’s really about bringing cybersecurity under the engineering discipline and engineering security in, just as we’ve been saying and doing at HP for many years. While the contents of the draft document are interesting, what I find most interesting is the signal it provides that the government is getting serious about developing expectations around building security in, which is much needed. Insight into how NIST views the problem is in their note to reviewers: “The project supports the federal cybersecurity strategy of ‘Build It Right, Continuously Monitor’.” In other words, continuous monitoring will only be manageable if you also place emphasis on building the software correctly in the first place. This is, of course, a major challenge in that most legacy software, and even much of the software developed today, is not built with security in mind but only as an afterthought.
DHS’s resilient software initiatives are based, in part, on the Build Security In education program I referenced above. They are supplemented by a to-be-funded follow-on to DHS’s Continuous Diagnostics and Mitigation program, called Critical Applications Resilience.
SecuritySolutionsWatch.com:
Tell us more about Resilient Systems and Critical Applications Resilience. What do they mean, and why are they important?
John Diamant:
A dictionary definition of resilience is “the ability to become strong, healthy, or successful again after something bad happens” (quoting Merriam-Webster). In the case of cybersecurity resilience, the something bad is an attack and the fallout from the attack. Resilience can be achieved through a combination of factors. Those factors include:
- avoiding, finding and fixing vulnerabilities (the fewer vulnerabilities, the less likely an attack will succeed)
- survivability—the ability for software or systems to survive an attack
- detectability—the ability for software or systems to detect an attack
- recoverability—the ability for software or systems to recover from an attack and return to normal operations
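To make these factors a bit more concrete, here is a minimal, hypothetical sketch in Python of how a single request handler might reflect them. The names (handle_request, restore_known_good_state) and the whitelist pattern are illustrative assumptions for this example only, not part of any HP product, standard, or DHS program.

    import logging
    import re

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("resilience-sketch")

    # Illustrative whitelist: accept only short strings of letters, digits, and a few separators.
    SAFE_INPUT = re.compile(r"^[A-Za-z0-9 ._-]{1,64}$")

    def restore_known_good_state():
        # Recoverability: return the service to a known-good state after a failure.
        # In a real system this might mean reloading configuration, rolling back a
        # transaction, or restarting a worker process.
        log.info("restoring known-good state")

    def handle_request(data: str) -> str:
        # Avoid vulnerabilities: validate input against a strict whitelist rather than
        # trying to enumerate every malicious pattern.
        if not SAFE_INPUT.match(data):
            # Detectability: record the rejected input so suspicious activity is visible.
            log.warning("rejected suspect input: %r", data[:64])
            return "rejected"
        try:
            # Normal processing would happen here.
            return "processed: " + data
        except Exception:
            # Survivability: contain the failure instead of letting it take the service down.
            log.exception("processing failed; attempting recovery")
            restore_known_good_state()
            return "error"

    if __name__ == "__main__":
        print(handle_request("order-1234"))
        print(handle_request("'; DROP TABLE orders; --"))

Running the sketch processes the well-formed input and rejects, and logs, the injection-style input, which is the kind of visibility that detectability depends on.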
SecuritySolutionsWatch.com:
What’s involved in creating resilient software?
John Diamant:
Quite a lot, actually. The first challenge to overcome is building security into every phase of the Software Development Life Cycle (SDLC), starting with requiring, architecting and designing security in, such as with HP’s Comprehensive Applications Threat Analysis (CATA) process, rather than expecting merely to test it in or, since the software lacks the resilience to withstand an attack, simply hoping it won’t be attacked at all.
And software isn’t just in computers anymore. With the Internet of Things, software can be in almost everything: your credit card (the “chip and PIN” or “chip and signature” cards, which contain a processor and memory, are new to the U.S. but have long been common in other parts of the world), automobile, phone, smartwatch, door locks, refrigerator, thermostat, sensors, pacemaker, and so on.
Then there’s the challenge of shoring up the security of legacy code, which was developed when the Internet was a safer place. Today, the Internet has become the equivalent of a dark alley in a bad neighborhood.
There’s the need for continuously monitoring and patching software, and there’s the idea of active defense and response by either the software itself or the infrastructure security deployed with it (for instance, Intrusion Detection Systems and Intrusion Prevention Systems, which can detect an attack or actively respond to one, respectively). This can be done at the infrastructure security layer, the application layer, or both.
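As one illustration of what detection and active response can look like at the application layer, here is a brief, hedged sketch in Python that counts failed authentication attempts per source and temporarily blocks a source that exceeds a threshold. The thresholds, names, and in-memory state are assumptions made for the example; they do not describe any particular IDS/IPS product or HP offering.

    import time
    from collections import defaultdict

    # Illustrative thresholds; a real system would tune these and persist the state.
    MAX_FAILURES = 5
    BLOCK_SECONDS = 300

    _failures = defaultdict(int)   # source -> consecutive failed attempts
    _blocked_until = {}            # source -> timestamp when the block expires

    def record_failure(source: str) -> None:
        # Detection: count failed attempts per source.
        _failures[source] += 1
        if _failures[source] >= MAX_FAILURES:
            # Active response: temporarily block the offending source.
            _blocked_until[source] = time.time() + BLOCK_SECONDS

    def is_blocked(source: str) -> bool:
        # Prevention-style response: refuse traffic from a blocked source.
        until = _blocked_until.get(source)
        if until is None:
            return False
        if time.time() < until:
            return True
        # The block has expired: clear the state and allow traffic again.
        del _blocked_until[source]
        _failures[source] = 0
        return False

    if __name__ == "__main__":
        for _ in range(MAX_FAILURES):
            record_failure("198.51.100.7")
        print(is_blocked("198.51.100.7"))  # True: this source exceeded the threshold
        print(is_blocked("192.0.2.10"))    # False: an unknown source is not blocked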
There’s even reputation-based resilience and active response, where an application or system could detect suspect traffic by reputation and respond differently to trustworthy incoming data and suspect data.
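A minimal sketch of that reputation-based idea, again in Python and under the assumption that some reputation feed is available (the SUSPECT_SOURCES set below merely stands in for such a feed), might look like this:

    # Stand-in for a real reputation feed or service; the addresses are from
    # documentation ranges and are used purely for illustration.
    SUSPECT_SOURCES = {"203.0.113.7", "198.51.100.24"}

    def handle_incoming(source_ip: str, payload: str) -> str:
        if source_ip in SUSPECT_SOURCES:
            # Suspect traffic: apply stricter limits and extra scrutiny.
            if len(payload) > 256:
                return "rejected: payload too large for an untrusted source"
            return "processed with extra validation: " + payload[:32]
        # Trustworthy traffic: handle on the normal, faster path.
        return "processed: " + payload[:32]

    if __name__ == "__main__":
        print(handle_incoming("192.0.2.10", "routine telemetry"))
        print(handle_incoming("203.0.113.7", "routine telemetry"))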
All this and more is covered in the HP Technical Whitepaper, Secure your critical applications: Perspectives on Critical Applications Resilience.
SecuritySolutionsWatch.com:
It seems to us that in this environment, Public Sector IT professionals are truly facing an unprecedented challenge. What are the HP Applications Security Services for public sector that your team brings to the table, and why should readers consider adopting them?
John Diamant:
We offer applications security services that cover the full lifecycle: from the very earliest stages, with the thought-leading methodology in our HP CATA process to require, architect and design security in; to mid-lifecycle security, finding and fixing vulnerabilities in source code with Fortify and expert human security code review; to applications vulnerability assessment and penetration testing, which find and fix vulnerabilities in run-time behavior. We also offer a quick, simple lifecycle security maturity assessment called QuickSAMM (based on OpenSAMM), as well as application security consulting to cover other areas.
SecuritySolutionsWatch.com:
Can we drill down a bit further into HP’s Comprehensive Applications Threat Analysis (CATA)? We read with great interest that “Testing for security vulnerabilities happens late in the development lifecycle. At this stage, only a limited number of security defects can be found. And when found, they're expensive to fix—up to 100 times more expensive, or more, than if found in the requirements phase! It's better to build security into applications from the ground up—design it in rather than relying on testing alone.” Care to elaborate on what the real value is to our audience?
John Diamant:
Certainly. The typical approach to quality today is to design it in rather than try to test it in—meaning to address quality in all phases of development. This was first widely disseminated by W. Edwards Deming back in the 1950s. Though it took decades to make it to the U.S. and to the software development industry (it became widely established in software during the 1980s), it is now well established. But it is still barely present in cybersecurity. Today we still focus most cybersecurity quality effort on patching (post-release) and testing (at the end of the lifecycle), where it’s most expensive, due to the cost of all the rework and the damage which can result from exploited vulnerabilities. The earlier we can avoid, find and fix vulnerabilities, the less rework and the less risk involved. Less rework, because we don’t have to go back and make architectural and design changes just before release, or because we can build in resilient architecture and design which dramatically reduce the risk of vulnerabilities in the first place. Less risk, because if we avoid, find, and fix vulnerabilities early and throughout the lifecycle, there’s a manageable number left to deal with during security testing. Otherwise, the odds of catching most or all vulnerabilities in testing are very poor, due to the overwhelming number of them being introduced and the inherently poor security quality that results.
SecuritySolutionsWatch.com:
John, in your opinion, at the end of the day, what is the unique value proposition that HP delivers to Public Sector organizations? And what is the journey, the road map if you will, and what are the benefits that public sector organizations will realize in taking that journey with you?
John Diamant:
HP has been perfecting our processes for developing and securing our software for decades. CATA, for instance, is a process we started developing more than a decade ago and have continuously improved since. In that time, we’ve applied it hundreds of times and avoided, found and fixed thousands of vulnerabilities. We’ve also been developing resilient software for decades—for instance, fault-tolerant systems with HP’s NonStop (formerly Tandem), and much of the world’s Automated Teller Machines with HP Atalla. We provide support for both small and large U.S. Public Sector clients. We know how insecure software can be, we know how to secure it effectively, and we know how to increase its resilience.