“It sounds sort of scary,” says Sam Srinivas, referring to the new approach to cyber risk management he’s helped develop at Google. “But it’s even more secure.”
Hit by a wave of phishing attacks around 2009, the internet giant decided that, rather than merely bolster its firewalls and construct new virtual private networks (VPNs), it would hatch a new way of protecting its employees, clients, and assets. The idea was radical: eliminate the long-held belief that the essence of cybersecurity is to protect the “perimeter” of the company — the border around its actual physical location and intranets.
Instead, the company set out to develop BeyondCorp, a cloud-based cyber solution that authenticates every user and every device, whether desktop or mobile, attempting to gain access to the Google network, no matter where the user is. The scary part for some observers is that the approach places sign-in access to a company's networks out on the internet, rather than keeping its data within the confines of its perimeter, acknowledges Srinivas, a Google product management director.
Yet the concept known as “zero trust” — a term said to be coined by Forrester Research — involves a more precise and extensive form of cybersecurity than reliance on firewalls and VPNs has provided. Essentially, the approach assesses each risk that emerges when a user attempts to tap into a company’s network. In a sense, the system builds its own security “perimeter” in response to each log-in.
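That per-log-in assessment can be sketched in a few lines. The field names, trust tiers, and scoring below are illustrative assumptions, not Google's actual policy; the point is only that the decision depends on the user's proven identity and the device's state, never on network location:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    device_id: str
    credential_valid: bool     # MFA / certificate check passed this time
    device_patched: bool       # device meets the current patch level
    resource_sensitivity: int  # 1 (public) .. 3 (restricted)

def evaluate(request: AccessRequest) -> bool:
    """Decide access per request, ignoring where the request came from."""
    if not request.credential_valid:
        return False  # identity must be proven anew on every attempt
    trust = 1
    if request.device_patched:
        trust += 1
    # more sensitive resources demand a higher trust tier
    return trust >= request.resource_sensitivity
```

Note that there is no check of source IP or network segment anywhere in the function; the "perimeter" is rebuilt from scratch around each request.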
Google and other advocates of zero-trust cybersecurity claim it’s virtually airtight against phishing attacks and other forms of unauthorized access, and it provides much more flexibility around who may be permitted access to specific data assets. But the cost and work involved in making the transition from conventional risk assessment to the new framework can be prohibitive.
Nevertheless, the realization that cybersecurity needs to change to a more agile model is taking hold at organizations both large and small. Most significantly, it’s part of a shift in the paradigm of how companies assess the risk of data breaches.
Trust No One
Instead of looking at their exposures as a monolith demanding a uniform, fortress-like defense, corporate risk managers and information technology officers are breaking down their risks into segments. The segments are based on the kinds of data their companies retain or the kinds of people they employ. Ultimately, they hope to develop cyber defenses that, like BeyondCorp, can assess the risk represented at the point of every single sign-in to their networks.
While Google and other companies still use firewalls as a basic form of defense against botnets and other automated attacks, the BeyondCorp strategy focuses on blocking individualized attacks, especially phishing assaults, wherever they occur. The developers' guiding principle, like that of agents Scully and Mulder on "The X-Files," can be summed up in the phrase "trust no one." Rather than allowing access to a company's private network through a limited number of "gates," as firewalls do, zero-trust systems assume anyone might be a hacker.
Kees Leune, the chief information security officer and a computer science professor at Adelphi University, explains the new framework well. Google and other companies pursuing the strategy “acknowledge the fact that in the modern organization, most employees are mobile, and they’re not necessarily at headquarters in that cubicle behind the firewall. They are with their laptop, on the move, at an airport, hotel, local coffee shop, or working from home as telecommuters.”
In such situations, firewalls, which only protect data in a defined geographic location separate from the "big, bad internet," are essentially useless, Leune says. With zero trust, regardless of whether the user is trying to sign into the system from a corporate branch office or from Starbucks, "you're still going to be treated as untrustworthy, until you have proven to the system each and every time that you are trustworthy," the Adelphi CISO adds. The system's assessment is triggered when users enter their usernames and the one-time codes the company sends them.
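A minimal sketch of that sign-in step, assuming a simple six-digit one-time code delivered to the user out of band (the function names here are hypothetical):

```python
import hmac
import secrets

def issue_code() -> str:
    """Generate a fresh 6-digit one-time code to send to the user."""
    return f"{secrets.randbelow(1_000_000):06d}"

def verify(submitted: str, issued: str) -> bool:
    """Compare in constant time; a new code is issued for every log-in."""
    return hmac.compare_digest(submitted, issued)
```

The constant-time comparison (`hmac.compare_digest`) avoids leaking how many leading digits of a guess were correct, a standard precaution for any credential check.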
Baked-in Assessments
Educational Testing Service (ETS), a large, 70-year-old non-profit organization, administers the Graduate Record Examinations (GRE), scores the SAT, and develops other well-known academic tests. Its risk assessment planning has taken a new turn to match the accelerating pace of change in how business is done. "It no longer makes sense to do an annual, big, heavy-lift security assessment. [Risk assessment] has to be something that's baked into what you do," says Julie Cain, ETS's strategic adviser for IT risk management.
Previously, when Cain's department assessed a risk in the company's IT platforms, such as a third-party search engine provider, the organization's risk managers had to jump through bureaucratic hoops. The assessment ran through a battery of technology, network connectivity, and other tests, and a project couldn't move ahead until the search engine was deemed adequately risk-free in each category. "You couldn't go forward until you had passed the assessment and the [ETS] security organization blessed whatever it was that you were going to do," recalls Cain.
That system has been replaced by what she sees as a more agile environment. “Things happen so rapidly and on so many levels at the same time that you really can’t have these monolithic gates anymore,” she says. They stall the risk assessment of processes like procuring IT security products, contracting for outside services, and internal risk management development.
In place of the gates, ETS’s security organization equips employees and managers who work outside the security team with frameworks, checklists, and a method of ranking the severity of risks so they can do their own risk assessments. At the beginning of a project, non-IT people can “quickly determine the risk level, and that risk level determines what type of assessment and what type of vetting needs to occur,” Cain explains.
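Such a self-service ranking might look like the sketch below. The questions, weights, and tier cutoffs are invented for illustration and are not ETS's actual rubric; what matters is that the computed level, not a central gatekeeper, selects the depth of vetting:

```python
# Hypothetical severity rubric: each "yes" answer adds its weight.
QUESTIONS = {
    "handles_test_content": 2,  # touches exam answers or scores
    "external_vendor": 1,       # involves a third-party service
    "internet_facing": 1,       # reachable from outside the network
}

def risk_level(answers: dict) -> str:
    """Sum the weights of 'yes' answers and map the score to a vetting tier."""
    score = sum(weight for question, weight in QUESTIONS.items()
                if answers.get(question))
    if score >= 3:
        return "high"    # full review by the security organization
    if score >= 1:
        return "medium"  # checklist plus a targeted spot check
    return "low"         # self-assessment is sufficient
```

A project team answers the questions at kickoff; only "high" results need to involve the security team directly.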
The company’s major risks stem from its handling of two forms of highly desirable information: the test scores of famous people and the answers to tests about to be taken. “People want to know what Trump’s SAT score was, right?” she asks.
Foreign students who want to attend U.S. universities have a much bigger incentive to access ETS’s systems. They want to steal answers from upcoming English-as-a-second-language exams. For some of those students, passing the required tests and getting into college may be a way to “really raise [a] family out of poverty,” she notes.
Considering its widespread data security risks, ETS would be a prime candidate for a beyond-perimeter approach. After all, the organization develops, administers, and scores more than 50 million tests a year in more than 180 countries at more than 9,000 locations worldwide.
But it's not there yet. While the company is in the midst of migrating to the cloud—a prerequisite for a zero-trust approach—some of its data still resides behind an on-premises firewall. And the cost of going full-on into the new framework would be prohibitive.
Still, says Cain, “every decision that we are making is being informed by where we want to be” — which, ultimately, is beyond the perimeter.
Stakeholder Risks
Unlike ETS, which focuses on threats to its cache of information, Adelphi University’s cyber-risk assessments revolve around human vulnerabilities. The university has three risk assessment processes, each addressing a different group of stakeholders: administrative staff, students, and faculty.
Risk assessment of the administrative staff is easiest and most routine. “We can work with executive leadership fairly effectively in getting technology choices influenced, decisions made, and policies written and enforced,” Leune says.
With staff, the information security department holds the most sway. In ongoing assessments, Leune probes what kind of information the administration has in-house that would be valuable to outsiders.
Things get stickier when it comes to judging the cyber risks incurred by the student body and campus visitors. Much like an internet service provider, the university runs an open-access network for resident students and guests. Visitors “expect to have pretty smooth internet access without many limitations,” Leune says.
“While I can tell an employee what to do, I cannot tell a student what to do [to prevent cyber] risks,” he adds. That’s true for two reasons. One, the university provides the service as nothing more than a utility over which it has no control. Two, “they’re bringing in their own devices anyway,” Leune explains.
As uncontrollable a risk as that sounds, the university’s exposure to breaches of faculty networks is even worse. Under the aegis of academic freedom, faculty members can pursue their interests in whatever way they feel is most appropriate. “As a result, while we can put some barriers up, if those barriers are going to actively interfere with their ability to do research or teaching, then we are not doing our job well,” Leune says.
Because managing these differing cyber risks is a complex matter itself, the university is better suited to a simpler, firewall-based approach to cybersecurity than to a more complex zero-trust approach, according to Leune. “The aim of perimeter-based protection is that you have one single point of control and one single point of complexity,” he says. “With BeyondCorp, every node has its own perimeter and its own management infrastructure. Just that alone makes it more complex.”
Cloud Concerns
Besides cost and complexity, the fact that the zero-trust approach resides in the cloud causes hesitation among some managers who might otherwise deploy it. “One of the biggest concerns for all risk managers these days is … the aggregation of information that’s in the cloud,” says Kristen Peed, director of corporate risk management for CBIZ, a management services company. “If your stuff is in the cloud, and the cloud is hacked, you’re still responsible for it.”
To Dannie Combs, the CISO at Donnelley Financial Solutions, the switch to cloud-based solutions represents a loss of control over cybersecurity. “You are operating in a multi-tenant environment, and you’re relying very heavily on a third party like Amazon or Rackspace—the list is growing every day — to provide you with the basic ‘blocking and tackling’ of risk management,” he says.
“You don’t control that perimeter on-premises in a way that was common practice just a few years ago,” Combs adds.
Srinivas says midsize companies whose information resides in the cloud to begin with have an easier path to adoption than bigger companies in the process of cloud migration.
“Larger companies with lots of legacy systems need more thought and planning for BeyondCorp,” Srinivas says, “simply because their IT setup is more complicated.”
Beyond the Perimeter
Moving to a context-based security approach requires in-depth understanding of people and devices.
How do you build your trust model around people and devices as opposed to networks? In a blog post, Max Saltonstall, technical director in the office of the CTO at Google, described some of the best practices. For one, to move from a "privileged" corporate network (with a VPN at its core) to a zero-trust one, the company has to know both its people and its devices.
“Without a reliable, up-to-date set of data about the people and the machines, we can’t make good decisions about access,” he wrote.
Google, according to Saltonstall, had to reshape its array of job classifications “so [it] could more accurately capture what people were actually doing day to day, and what sort of access they required by role.”
As important as people are in the scheme, devices are equally so. Timely, accurate information on the devices employees were using was critical. “We do not want to extend trust to a person, only to have them use an infected device and inadvertently share the information they have access to with an attacker,” Saltonstall wrote.
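Keeping that device picture current means reconciling records from several trackers. A minimal sketch of such a merge, with hypothetical source data keyed by device serial number (the precedence rule, later sources win on conflicting fields, is an assumption):

```python
def merge_inventories(*sources):
    """Merge per-device records from multiple trackers into one master view.

    Each source is a dict mapping a device serial number to a record of
    fields; later sources overwrite earlier ones on conflicting fields.
    """
    master = {}
    for source in sources:
        for serial, record in source.items():
            master.setdefault(serial, {}).update(record)
    return master
```

With one unified record per device, a trust evaluation can consult fields like patch level or installed software regardless of which system originally reported them.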
To ensure accuracy, Google had to centralize the information from multiple systems that were tracking device inventory — asset management tools, DHCP servers, wireless access logs, and tech support systems among them. “Creating this new master inventory service took a large investment of time and effort, but has given us a much more unified look at our multitude of devices,” according to Saltonstall. With a better picture of what workers are using, Google can make “trust evaluations” based on installed software, patch level, and other characteristics. | Vincent Ryan