Agencies in the federal government are working to develop tools and software that would automate cybersecurity – essentially, an effort to remove human error from the equation. A new report from Nextgov details the automation effort, and why these tools aren’t yet ready for government-wide deployment.
Much of the government’s current cybersecurity effort revolves around detection, mitigation and defense – this new technology would enable cybersecurity to be baked into the software agencies use, making it more secure by nature.
However, with budget constraints and acquisition restrictions, agencies’ ability to acquire and implement automated cybersecurity is limited.
“Agencies are generally undermanned and undertrained and many are not using the tools they have to their full capability,” former federal Chief Information Security Officer Gregory Touhill said in the report. “Fixing that would be a better return on investment at this point, rather than introducing something that adds complexity.”
The Defense Advanced Research Projects Agency hosted an event last year in Las Vegas, the Cyber Grand Challenge, which was part of a larger push to move away from a landscape where software is highly vulnerable by nature, and end the constant cycle of cat-and-mouse patching with hackers, according to report author Joseph Marks.
DARPA and competitors in the Cyber Grand Challenge are aiming to jumpstart the move away from a world where viruses and malware can hide undetected for years in a computer system, to one where they are discovered in a matter of weeks, days or even seconds, according to NASA astrophysicist Hakeem Oluseyi, who announced the event.
The goal is to raise the barrier to entry for hackers, and to let companies spend less money on constant cyber monitoring and defense. An extreme example of automated cybersecurity is called “formal methods,” which uses automated tools to eliminate imprecise components of software code that could be exploited – to the point that a researcher can mathematically prove the software is immune to certain classes of vulnerabilities, according to the report.
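To illustrate the idea behind formal methods – proving a property holds for every possible input, rather than testing a handful of them – here is a toy sketch in Python. The function and its property are hypothetical examples, not any agency’s tooling; real verification tools establish such properties symbolically over unbounded domains, whereas this sketch brute-forces a small, finite domain so the “proof” is an exhaustive check.

```python
def clamp_percent(x: int) -> int:
    """Clamp any integer into the range 0..100."""
    return max(0, min(100, x))

def verify_clamp(lo: int = -1000, hi: int = 1000) -> bool:
    """Exhaustively check the safety property: for every input in the
    domain, the output stays within 0..100. A formal-methods tool would
    prove this for *all* integers without enumerating them."""
    return all(0 <= clamp_percent(x) <= 100 for x in range(lo, hi + 1))

if __name__ == "__main__":
    print(verify_clamp())
```

The contrast with ordinary testing is the point: a test suite samples a few inputs, while a verified property rules out an entire class of failures at once.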
Specialized industries, like aerospace, benefit most from a formal methods review because their software performs discrete tasks and is highly critical, according to Lee Pike, research lead for the cyber firm Galois.
“There are systems that are tens of thousands or hundreds of thousands of lines of code,” Pike said. “If we’re talking about the entire software stack for the Joint Strike Fighter, you can’t just throw a verification engine at that and expect it to tell you it’s all secure.”
In government, however, there have not been many success stories – the National Security Agency has invested in formal methods research for over 10 years without much return, said Curtis Dukes, a former deputy national manager for national security systems at NSA and now executive vice president at the Center for Internet Security.
“Fuzzing” tools were another method deployed by teams in the Cyber Grand Challenge – they run a series of random commands against different swaths of the software to see if any of them turn up unexpected or unwanted results. Fuzzing is used in much of government penetration testing, according to Touhill and Dukes. That process, though, still calls for a good deal of human interaction, such as a security professional who points the fuzzing tool at the most critical components of a system and decides which flaws really need fixing.
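The fuzzing loop described above can be sketched in a few lines of Python. Both the fuzzer and its target are hypothetical stand-ins – real fuzzers such as those used in the Cyber Grand Challenge are far more sophisticated, using coverage feedback rather than purely random inputs – but the core idea is the same: throw many random inputs at a program and record which ones make it misbehave.

```python
import random

def fragile_parse(data: bytes) -> int:
    """Toy target (hypothetical): crashes whenever the byte 0xFF
    appears anywhere in the input."""
    if b"\xff" in data:
        raise ValueError("unexpected byte in input")
    return len(data)

def fuzz(target, trials: int = 20000, max_len: int = 8, seed: int = 0):
    """Run random byte strings against `target`; return the inputs
    that caused it to raise an exception."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256)
                     for _ in range(rng.randrange(max_len + 1)))
        try:
            target(data)
        except Exception:
            crashes.append(data)
    return crashes

if __name__ == "__main__":
    crashes = fuzz(fragile_parse)
    print(f"found {len(crashes)} crashing inputs")
```

The human-in-the-loop step the article mentions happens after a run like this: someone must still decide which crashing inputs point at exploitable, critical flaws and which are harmless noise.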
For the majority of agencies, current efforts are focused mainly on ensuring cyber defense tools are used as intended and making sure employees are exercising good cyber hygiene.
“There are some bigger agencies that have greater funding that are exploring greater investments in fuzzing and related tools and doing some minor pilots,” Touhill said. “But before we go chasing the laser pointer mark on the floor with fuzzing, I’d like to see departments and agencies actually use what they’ve already bought and use it better.”