We learned just last week that the National Security Agency (NSA), despite maintaining some of the highest cyber security standards in the nation, had been hacked. A group calling itself the “ShadowBrokers” was able to gain access to information regarding powerful NSA espionage tools. “The stuff you’re talking about would undermine the security of a lot of major government and corporate networks both here and abroad,” one former NSA staffer who worked in the agency’s anti-hacking division told the Washington Post, saying the breach illustrates the evolving ability of hackers seeking sensitive information to access “the keys to the kingdom.”
While cyber attacks have become somewhat commonplace in recent years, with large corporations such as Target and Wendy’s falling victim to online hackers, this recent breach at the NSA, the agency in charge of security itself, has proven truly alarming. If the NSA can get hacked, is anyone safe? According to Dr. Mark Heckman, professor of practice in cyber programs at the University of San Diego’s Center for Cyber Security Engineering and Technology, the answer is no. “The NSA supposedly has the best security in the world. And yet they were just hacked. That tells us there is something fundamentally wrong with the way security is practiced.”
But it’s not just the NSA and large retailers that are dealing with hackers. Our society is increasingly reliant on technology, which means that hackers have an ever-expanding set of opportunities to gain access to personal and proprietary information, whether through your in-home NEST thermostat, your electronic health records (EHRs) or your cell phone, to name just a few. And what most consumers don’t realize is that when the systems and software we use and rely on every day are built, security is an afterthought. The security standards one might assume are in place during development simply are not. Instead, today’s cyber security standards are based on a simple checklist of best practices. However, this was not always the case.
The 1970s: The Start of the Digital Era and Tiger Teams
During the 1970s, when computers were still new and the digital revolution had just begun, penetration testing was the main means of ensuring a system was secure. “Red teams,” also known as “tiger teams,” would attempt to break into a system. If they couldn’t break in, they would deem the system secure. If they could, a developer would go in and fix the holes they had found. And this cycle would continue, with tiger teams continuously finding holes and developers going in to patch them.
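The weakness of this approach is worth spelling out: a tiger team that fails to break in proves only that the attacks it tried did not work, not that no exploitable flaw exists. Below is a minimal sketch of the find-and-patch cycle, in Python with entirely hypothetical attack and flaw names, showing how a system can be deemed “secure” while a flaw the testers never thought to try survives every round of patching.

```python
# Hypothetical sketch of the 1970s tiger-team cycle: testers probe with
# the attacks they know, developers patch what is found, and the loop
# repeats -- without ever proving the absence of other flaws.

KNOWN_ATTACKS = ["buffer_overflow", "weak_password", "race_condition"]

def tiger_team_probe(system_holes: set[str]) -> str | None:
    """Try each known attack; return the first hole found, or None."""
    for attack in KNOWN_ATTACKS:
        if attack in system_holes:
            return attack
    return None

def evaluate(system_holes: set[str]) -> None:
    while True:
        hole = tiger_team_probe(system_holes)
        if hole is None:
            # No hole found -- but that only shows the attacks the team
            # tried all failed, not that the system is actually secure.
            print("Deemed 'secure' (no hole found)")
            return
        print(f"Hole found: {hole}; patching...")
        system_holes.remove(hole)  # developer patches; the cycle repeats

# "undiscovered_flaw" is outside the team's repertoire, so it survives.
evaluate({"weak_password", "race_condition", "undiscovered_flaw"})
```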
Realizing that penetration testing with tiger teams could never guarantee security, the government set out to create a better system for evaluating security, at least for the systems and software used for governmental functions.
The 1980s-1990s: The Trusted Computer System Evaluation Criteria (aka the Orange Book)
In the early 1980s, the U.S. government created what was known as the Trusted Computer System Evaluation Criteria (TCSEC), otherwise known as the Orange Book. Although adherence to the criteria was required only for governmental software and systems, the hope was that the private sector would take notice and follow the government’s lead in implementing TCSEC. Unfortunately, that never happened. TCSEC was extremely rigorous and extremely effective, but it was time-consuming and expensive to implement. Especially problematic was that by the time a computer, for example, was built and deemed secure, it could not run many of the software programs that businesses had come to rely upon, such as Microsoft Office. And yet, these were probably the most secure systems ever created. Many of these class A1 systems (the Orange Book’s highest rating, requiring a formally verified design) were used for years by the Department of Defense and never once needed a security patch. “The TCSEC evaluation criteria does work, but it is very expensive and slow,” explained Dr. Heckman. “It would make sense to apply TCSEC in secure infrastructure. For example, the power industry has systems that are installed in remote locations that run for 30 years. Due to these systems’ longevity and their importance, TCSEC makes sense.”
Meanwhile in Europe, a similar standard, the Information Technology Security Evaluation Criteria (ITSEC), had been adopted with one major exception: security functionality and security assurance were evaluated separately. Under the U.S. standards, functionality and assurance were coupled, meaning that the more security functionality a system possessed, the more assurance it was required to demonstrate. The European standard, by contrast, allowed a system to claim high functionality with little assurance. Decoupling functionality from assurance was a major divergence from the U.S. approach, but it allowed Europe to bring systems to market faster, without being bogged down by years of security evaluation.
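The difference is easy to model. In the sketch below (Python, with labels borrowed from ITSEC’s published scales: functionality classes such as F-B3 and assurance levels E0 through E6), the two ratings are independent fields, whereas a TCSEC class fused them into a single grade; the exact labels matter less than the pairing logic.

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    functionality: str  # e.g. an ITSEC functionality class such as "F-B3"
    assurance: str      # e.g. an ITSEC assurance level, "E0" through "E6"

# Under TCSEC, a highly functional system also had to carry high
# assurance; under the European approach, both pairings below are legal,
# including rich security features backed by very little evidence.
fast_to_market = Evaluation(functionality="F-B3", assurance="E1")
high_confidence = Evaluation(functionality="F-B3", assurance="E6")

print(fast_to_market)
print(high_confidence)
```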
The 2000s: Moving to The Common Criteria
In 2002, the TCSEC standard that had been so successful in creating secure government systems was canceled. It was too time-consuming and too costly, and, as it turned out, security was not what people cared about most: time to market, features and cost were what really mattered to businesses and consumers. In place of TCSEC, the U.S. and Europe collaborated to form the Common Criteria. The new standard defined evaluation assurance levels (EALs) running from EAL1 to EAL7, with EAL1 the lowest level of assurance and EAL7 the highest. Each participating country agreed to accept evaluations up to EAL4; beyond that, countries would have to do their own security due diligence. This made it easier to sell systems across borders with an assurance that those systems were secure, or at least as secure as their EAL rating implied. But the agreement also shaped the market: because evaluations above EAL4 would not be recognized abroad, the majority of systems were simply not built above EAL4.
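The recognition rule and its market effect can be captured in a few illustrative lines. The level names below are the standard Common Criteria labels; the cutoff function itself is just a sketch of the agreement’s logic.

```python
# Common Criteria assurance levels and their standard names.
EAL_NAMES = {
    1: "functionally tested",
    2: "structurally tested",
    3: "methodically tested and checked",
    4: "methodically designed, tested, and reviewed",
    5: "semiformally designed and tested",
    6: "semiformally verified design and tested",
    7: "formally verified design and tested",
}

# Evaluations at or below this level are accepted by other participating
# countries without re-evaluation; above it, each country checks itself.
MUTUAL_RECOGNITION_CEILING = 4

def recognized_abroad(eal: int) -> bool:
    return eal <= MUTUAL_RECOGNITION_CEILING

for eal, name in EAL_NAMES.items():
    status = "recognized" if recognized_abroad(eal) else "re-evaluate locally"
    print(f"EAL{eal} ({name}): {status}")
```

Since an evaluation above EAL4 buys no recognition abroad but costs substantially more, vendors rationally stop at EAL4, producing exactly the market ceiling described above.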
Current Day: Best Practices
Today, the Common Criteria is largely going the way of the TCSEC standards. While it is not completely defunct, it is dying, as many businesses now rely instead on a checklist of best practices. Systems are compared against a compliance checklist and deemed secure if they can check all the boxes. As Dr. Heckman said, “Anyone in security knows that compliance doesn’t make you secure. There is no scientific foundation underneath.”
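A toy example makes the gap concrete: every (hypothetical) checklist item below passes, so the system is “compliant,” yet a flaw the checklist never asks about is left untouched.

```python
# A hypothetical best-practices checklist: every box is checked.
CHECKLIST = {
    "firewall_enabled": True,
    "antivirus_installed": True,
    "passwords_rotated": True,
    "audit_logging_on": True,
}

def compliant(checklist: dict[str, bool]) -> bool:
    """'Secure' under a checklist regime means all boxes are checked."""
    return all(checklist.values())

# Compliance says nothing about flaws the checklist never asks about,
# such as a logic bug in an in-house application.
unexamined_flaws = ["sql_injection_in_billing_app"]

print("Compliant:", compliant(CHECKLIST))          # True
print("Actually secure:", not unexamined_flaws)    # False
```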
As cyber crime has intensified over time and cyber criminals have become increasingly sophisticated, cyber security standards have become more and more lax. It seems to defy logic that security standards would loosen just as the threat intensifies. And yet that is exactly what has happened, as dollars have driven the market to prioritize development time, cost and features over assurance. But as the latest major cyber attack, the hack of the NSA, has proven, this approach could be dangerously wrong.
Dr. Heckman, recognizing the importance that security must play in system development, especially in today’s hyper-connected and technology-driven world, helped to launch two unique Master of Science degrees at the University of San Diego: the Master of Science in Cyber Security Engineering and the 100% online Master of Science in Cyber Security Operations and Leadership. To learn more about either of these first-of-their-kind graduate programs, contact an admissions representative.