We're learning more about how that enormous cache of telephone metadata at the National Security Agency is actually used. According to two sources in the intelligence community who have worked with the system, it's one of many tools available to analysts working on terrorism investigations or providing intelligence for military forces overseas.
One former defense intelligence employee describes it this way:
The NSA makes a list of names and/or phone numbers available to analysts who are cleared to use the meta database. These names and/or numbers have been obtained by NSA through other collection programs, presumably legal ones. The analysts input those names and/or numbers to the meta database, which will then show any connections to phone numbers in it.
The meta database itself doesn't contain any names--only phone numbers. If there's a number that's based in the United States, the analyst only sees an "X" mark. If he wants to see the number underneath that X, he has to get clearance from a higher authority--in this source's experience that was the general counsel of the organization where he worked.
The source also said the tool wasn't particularly useful. It's there to help analysts better understand the links between potential terrorists, and to help identify them. But the analyst said that searching for numbers and names on Google often led to better results.
An intelligence official who has used the meta database confirmed the description of how it works. But he said the presence of so many "innocent" numbers in the system posed a challenge. Analysts have to wade through them to find only the numbers they're allowed to see without permission from a higher authority.
The meta database is one of dozens of different systems or intelligence streams available across the intelligence agencies. From the sources' descriptions, it sounds relatively mundane compared to the other tools that are available.
However, one tool revealed yesterday by the Guardian and the Washington Post, called PRISM, appears far more secretive and less widely used. Neither of these sources had ever heard of it. The former defense intelligence employee expressed alarm that, according to reports, the system gives the NSA direct access to the central servers of some of the country's biggest Internet companies, including Facebook and Yahoo!, and then lets analysts obtain e-mails, video and audio files, photographs, and documents.
This individual said he couldn't explain how, based on his training and experience, the PRISM system complies with the law. It doesn't seem to be discriminating enough in separating US persons' content--such as their e-mails--from that of foreigners. The government almost always needs a warrant to look at content.
Reportedly, PRISM sweeps up the information of US persons when analysts tap into those central servers. They're instructed to document these "incidental collections," but are told, according to a training manual reviewed by the Post, that "it's nothing to worry about" if Americans are caught up in the stream.
The intelligence official said that based on reports, the PRISM system would have to be collecting massive amounts of information, and that NSA was likely the only agency with the computing power and the storage space to handle it all. The agency has been running out of electronic storage at its Ft. Meade, Md., headquarters and has built a new 1-million-square-foot data center in the Utah desert.
Multiple officials are now confirming that the National Security Agency's practice of collecting all telephone metadata from Verizon, as first reported by the Guardian, is part of a program that has been active for years. A US intelligence official tells me that orders of the kind delivered to Verizon in April are routine. Sen. Dianne Feinstein said today that the collection of metadata from phone companies is a seven-year-old practice. And an unnamed source told the Washington Post that the order appears to be similar to one first issued by the Foreign Intelligence Surveillance Court in 2006, and that it is “reissued routinely every 90 days” and not related to any particular government investigation.
Here’s what else we know so far about this massive intelligence collection program, a few things we might infer, and some big unanswered questions.
What is the government doing with all this phone metadata?
According to a senior administration official, “Information of the sort described in the Guardian article has been a critical tool in protecting the nation from terrorist threats to the United States, as it allows counterterrorism personnel to discover whether known or suspected terrorists have been in contact with other persons who may be engaged in terrorist activities, particularly people located inside the United States.”
This is a description of standard link analysis. Say the government obtains the phone number for a suspected terrorist. It then runs that number against the huge metadatabase. If there’s a match, presumably the government then obtains some other authority to find out who the number in the metadatabase belongs to; according to the court order, and the administration official, the metadata does not contain the names of phone subscribers. It’s just phone numbers, lengths of calls, and other associated data that’s not considered “content.”
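The contact-chaining step described above can be sketched in a few lines of code. This is purely a toy illustration: the call records, phone numbers, and the one-hop query below are all invented, and have no relation to the NSA's actual systems or data formats.

```python
from collections import defaultdict

# Toy call-detail records: (caller, callee, duration in seconds).
# Every number here is invented for illustration.
call_records = [
    ("202-555-0101", "703-555-0199", 120),
    ("703-555-0199", "410-555-0133", 45),
    ("202-555-0101", "301-555-0177", 300),
    ("301-555-0177", "410-555-0133", 60),
]

# Build an undirected contact graph from the metadata alone --
# no names, no content, just which numbers touched which.
contacts = defaultdict(set)
for caller, callee, _ in call_records:
    contacts[caller].add(callee)
    contacts[callee].add(caller)

def one_hop(number):
    """Return every number the given number was directly in contact with."""
    return sorted(contacts[number])

# Run a suspect's number against the database.
print(one_hop("202-555-0101"))  # ['301-555-0177', '703-555-0199']
```

A real system would chain further (contacts of contacts), but even this one-hop lookup shows why a match against a known suspect's number is the starting point, not the end, of an investigation: the analyst still needs separate authority to learn who the matched numbers belong to.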
What can you learn with metadata but no content?
A lot. In fact, telephone metadata can be more useful than the words spoken on the phone call. Starting with just one target’s phone number, analysts construct a social network. They can see who the target talks to most often. They can discern if he’s trying to obscure who he knows in the way he makes a call; the target calls one number, say, hangs up, and then within seconds someone calls the target from a different number. With metadata, you can also determine someone's location, whether through physical landlines or, more often, by collecting cell phone tower data to locate and track him. Metadata is also useful for tracking suspects who use multiple phones or disposable phones. For more on how instructive metadata can be, read this.
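The hang-up-and-call-back pattern described above is exactly the kind of thing timestamped metadata exposes. Here is a toy sketch of detecting it: the records, numbers, and 60-second window are all invented for illustration, not drawn from any actual system.

```python
from datetime import datetime, timedelta

# Toy timestamped call metadata: (caller, callee, start time).
# All numbers and times are invented for illustration.
calls = [
    ("TARGET", "555-0001", datetime(2013, 6, 1, 9, 0, 0)),
    ("555-0002", "TARGET", datetime(2013, 6, 1, 9, 0, 20)),  # call-back, different number
    ("TARGET", "555-0003", datetime(2013, 6, 1, 14, 0, 0)),
]

def callback_pairs(calls, window=timedelta(seconds=60)):
    """Flag cases where the target calls one number and is then called
    back from a *different* number within the window."""
    outgoing = [(callee, t) for caller, callee, t in calls if caller == "TARGET"]
    incoming = [(caller, t) for caller, callee, t in calls if callee == "TARGET"]
    flagged = []
    for out_num, out_t in outgoing:
        for in_num, in_t in incoming:
            if out_num != in_num and timedelta(0) <= in_t - out_t <= window:
                flagged.append((out_num, in_num))
    return flagged

print(callback_pairs(calls))  # [('555-0001', '555-0002')]
```

Note that no call content is involved anywhere: the suspicious behavior falls out of who called whom, and when.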
Where is all that metadata being stored?
According to the court order, at the National Security Agency. The electronic spying agency is headquartered in Ft. Meade, Md. But it has been running out of digital storage space there, as well as electricity to keep all its systems up and running. The NSA has built a new facility in the Utah desert, called, appropriately, the Utah Data Center. And it recently broke ground on another facility at Ft. Meade.
How does that data get from the phone companies to the NSA?
We still know little about the physical infrastructure that transmits the metadata. But we do know, from the order, that Verizon is sending the information to the NSA “on an ongoing daily basis.” That’s an extraordinary amount of information considering it covers millions of customers making multiple calls a day. In simple terms, we’re talking about a lot of pipes and cables leading from Verizon locations—like switching stations—to NSA facilities. We know from a whistleblower at AT&T that surveillance equipment was set up at the company’s offices in San Francisco as part of the NSA’s efforts to monitor terrorists after the 9/11 attacks.
What else might the NSA or other government agencies be doing with this metadata?
As I wrote in my book, The Watchers, the NSA has long been interested in trying to find unknown threats in very big data sets. You’ll hear this called “data mining” or “pattern analysis.” This is fundamentally a different kind of analysis from the one I described above, where the government takes a known suspect’s phone number and looks for connections in the big metadatabase.
In pattern analysis, the NSA doesn’t know who the bad guy is. Analysts look at that huge body of information and try to establish patterns of activity that are associated with terrorist plotting. Or that they think are associated with terrorist plotting.
The NSA spent years developing very complicated software to do this, and met with decidedly mixed results. One such invention was a graphing program that plotted thousands upon thousands of pieces of information and looked for relationships among them. Critics called the system the BAG, which stood for “the big ass graph.” For data geeks, this was cutting-edge stuff. But for investigators, or for intelligence officials who were trying to target terrorists overseas, it wasn’t very useful. It produced lots of potentially interesting connections, but no definitive answers as to who were the bad guys. As one former high-level CIA officer involved in the agency’s drone program told me, “I don’t need [a big graph]. I just need to know whose ass to put a Hellfire missile on.”
How big a database do you need to store all this metadata?
A very, very big one. And lots of them. That facility in Utah has 1 million square feet of storage space.
But just storing the data isn’t enough. The NSA wants a way to manipulate and analyze it in close to real time. Back in 2004, the agency began building “in-memory” databases, which differ from traditional databases that store information on disks. In-memory databases are built entirely with RAM, which lets a computer hold data in storage, ready for use in an instant. With disks, the computer has to physically go find the data, retrieve it, and then bring it into a program. If you’re trying to analyze entire telephone networks at once—and that is precisely what the NSA wanted to do—a disk-based system will be too slow. But the NSA’s in-memory databases could perform analytical tasks on huge data sets in just a few seconds.
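The in-memory idea is easy to demonstrate on a small scale. SQLite, for instance, can hold an entire database in RAM rather than on disk, so analytical queries never touch a disk seek. The schema and data below are invented for illustration; the NSA's actual systems were custom-built at a vastly larger scale.

```python
import sqlite3

# ":memory:" tells SQLite to keep the whole database in RAM.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calls (caller TEXT, callee TEXT, seconds INTEGER)")

# Toy call records, invented for illustration.
conn.executemany(
    "INSERT INTO calls VALUES (?, ?, ?)",
    [("A", "B", 120), ("A", "C", 45), ("B", "C", 300)],
)

# An analytical query runs entirely against RAM -- no disk seeks.
total, = conn.execute(
    "SELECT SUM(seconds) FROM calls WHERE caller = 'A'"
).fetchone()
print(total)  # 165
```

The trade-off is the one the passage describes: RAM is far more expensive per byte than disk, which is part of why building in-memory systems at the NSA's scale was such a costly undertaking.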
The NSA poured oceans of telephone metadata into the in-memory systems in the hopes of building a real-time terrorist tracker. It was an unprecedented move for an organization of the NSA’s size, and it was extremely expensive.
That was 2004. The court orders issued to Verizon, we’re told, go back to as early as 2006. It appears that the NSA has had an uninterrupted stream of metadata for at least seven years. But the agency was getting access almost immediately after 9/11. That could mean there’s more than a decade’s worth of phone records stored at the NSA’s facilities.
In a rare public appearance, a senior intelligence official who has worked on the front lines of securing Defense Department computer networks said it would be "almost immoral" for the DOD to focus on protecting itself and not apply that expertise to the commercial sector.
Speaking at a conference in Washington on Tuesday, Charles Berlin, the Director of the National Security Operations Center at the National Security Agency, said, "The mission of the Department of Defense" is not merely to protect the department. "It's to protect America."
"I've been on the ramparts pouring boiling oil on the attackers for years," Berlin said, referring to NSA's efforts to repel intrusions into DOD and military networks, which have been broadly successful. But he sounded frustrated that there weren't more ways for his agency to protect the country as a whole. "At the present time, we're unable to defend America," Berlin said.
The operations center that Berlin runs is the heart of the NSA's efforts to provide early warning about threats, including to information networks. Berlin said the NSA was looking for ways to take the skills it has developed in the government and "apply [them] to the private sector."
But many executives, as well as lawmakers and privacy advocates, are uneasy about the NSA, which is a military organization that spies on foreign countries and terrorists, taking on a larger role protecting private networks inside the United States.
Currently, the Homeland Security Department, a civilian agency, has the legal authority to provide companies with warnings about cyber attacks. But much of that intelligence comes from the NSA. The agency does not work directly with all American companies. And yet, it is undoubtedly the reservoir of expertise in government for how to defend networks from potentially devastating assaults. Of particular concern to the Obama administration are threats against critical infrastructure, such as public utilities and the financial sector networks, as well as industrial espionage by hackers in China.
"There needs to be a team effort" to protect private networks, Berlin said. He noted that the NSA had been invited to examine the networks of some companies and "found some appalling things" in how they were being run. For example, Berlin said he knew of US defense contractors doing business in China and Korea that had not taken relatively easy and practical steps to raise the defenses of their networks and protect proprietary information. That's troubling to the NSA since defense contractors have secret government information on their networks, which makes them a frequent target of cyber spies.
Berlin spoke at a conference sponsored by SAS, a business analytics software and services company.
The United Kingdom is embarking on a national program to train the next generation of cyber warriors to protect the country's infrastructure.
From the Guardian:
"The UK is now so short of experts in cybersecurity, they could soon command footballers' salaries... Ministers support plans for a national competition for schools in the hope of encouraging teenagers, especially girls, to become so-called "cyber Jedi"--defending firms, banks and government departments from an ever increasing number of online attacks."
Two thousand schools will participate in a pilot project beginning in September, as part of Cyber Security Challenge UK, the Guardian reports. Then, the program would roll out across England and Wales.
Stephanie Daman, the group's director, tells the newspaper, "Kids need to know there is a real career in this, because they have no concept at the moment. And we need to spark their interest. It's a profession like law or accountancy, with well-paid salaries.
"A lot of companies are desperate to hire people for the roles in cybersecurity, but they have not been able to find the number of qualified recruits. There is a huge gap in terms of the number of properly qualified people in this area, and we need to tap into talent we know is out there."
In a sign of how seriously the government takes that shortfall, Michael Gove, the UK education secretary, recently "ripped up" school IT curriculum "in part because it does not have a cybersecurity element," according to the Guardian.
There's a similar and growing effort on this side of the pond to train the next generation of "cyber ninjas," as some involved in the effort like to call them. High schools have teamed up with technology advocacy groups to recruit more young students into college computer science programs, with an eye towards working in the cyber security industry. Rhode Island congressman Jim Langevin, for instance, has organized high-school hacker competitions in his state.
In December, the SANS Institute, which trains military and intelligence personnel in the cyber arts, sponsored an international cyber competition at the Washington Hilton. A group of high schoolers were selected to compete against the world's best hackers in the early rounds.
The National Security Agency also sponsors a nationwide contest in which teams from the military service academies face off against some of the NSA's best cyber warriors. Cadets at the Air Force Academy, which now has a separate educational track for cyber warfare, recently took first place.
As in the UK, there aren't enough people in the workforce right now with the high level of skill that the US government demands, hence many of these efforts to go down to the roots of the education system. But you're going to see this demand coming more from the private sector, as financial services companies, utilities, media organizations and others increasingly find themselves the targets of malicious hackers and are virtually powerless to do anything about it. They're not going to wait around for the government to protect them. They'll hire their own cyber armies to do that job.
In 2006, as the war in Iraq was reaching a fever pitch, a Pentagon employee working on special operations teamed up with a Czech technology entrepreneur who had dabbled in the porn business and devised what they considered an ingenious plan. Knowing that video games played on mobile phones were popular throughout the Middle East, the team wanted to build games that contained positive messages about the United States. But the games weren't just about propaganda. Every download would give the United States a window into the digital comings and goings of whoever was playing it, a cyber foothold that could allow American spies to potentially track and collect information on thousands of people.
The propaganda/spy campaign was dubbed Native Echo, and it was conceived by Michael Furlong, a colorful civilian employee working for US Special Operations Command, and a company called U-Turn, which was headquartered in Prague and founded by a pro-American Czech national named Jan Obrman, whose parents had fled the Soviets in the 1960s. The idea was to target Middle Eastern teenagers in "high risk/unfriendly areas," and over time to integrate the US messages "into the lifestyle of the targets," ideally to make them more amenable to US armed forces, and to counter the rhetoric of Muslim fundamentalists.
The full account of this previously unreported intelligence operation is found in the new book The Way of the Knife: The CIA, a Secret Army, and a War at the Ends of the Earth, by New York Times national security correspondent Mark Mazzetti. The book explores the ways in which the CIA--which before 9/11 had long been out of the business of killing people--and the US military--which had not been the domain of spies--have often changed roles over the past decade. It is filled with characters, like Furlong, who move between the membranes of these two worlds, and find themselves at home in either one.
Mazzetti writes that the first mobile game developed for Native Echo was modeled on the popular Call of Duty series. This new "shooter" game, Iraqi Hero, "took the player on an odyssey through the streets of Baghdad, shooting up insurgents trying to kill civilians in a wave of terrorist attacks," Mazzetti writes. "The goal was to reach an Iraqi police station and deliver the secret plans for an upcoming insurgent attack, plans that had been stolen from a militia group's headquarters."
Native Echo was timed to coincide with the US troop surge in Iraq in 2007. Its "main focus was on combatting the flood of foreign fighters entering Iraq from Yemen, Syria, Saudi Arabia, and parts of North Africa," Mazzetti writes.
As an intelligence collection program, Native Echo was both broad and audacious:
"Thousands of people would be sending their mobile-phone numbers and other identifying information to U-Turn, and that information could be stored in military databases and used for complex data-mining operations carried out by the National Security Agency and other intelligence agencies. The spies wouldn't have to go hunting for information; it would come to them."
In order to hide the US role in the scheme, "Furlong convinced [U-Turn's] executives to create an offshore company that could receive Pentagon contracts but not be tied directly to the United States," Mazzetti writes. Obrman set up JD Media Transmission Systems, LLC, incorporated in the Seychelles Islands, in order to receive money transfers from the US through a foreign bank account.
Furlong was a master at working the byzantine procurement bureaucracy to further his covert plans. "Taking advantage of a law that allows firms owned by Native Americans to get a leg up when bidding on government contracts, Furlong arranged for U-Turn to partner with Wyandotte Net Tel, a firm located on a tiny speck of tribal lands in eastern Oklahoma," Mazzetti writes.
U-Turn developed two more games for Native Echo--Oil Tycoon, which challenged players to protect vital pipelines and infrastructure, and City Mayor, in which players became urban planners and rebuilt a fictional city destroyed by terrorists. The team came up with various ways to distribute the games, including by hand via memory cards, which could be sold or given away in markets and bazaars, Mazzetti reports. "The way to get far wider distribution, however, was to post the games on Web sites and blogs frequented by gamers in the Middle East. This allowed [Special Operations Command] to monitor how many people were downloading the games and, more important, who was doing it."
Mazzetti concludes that it's hard to know how far Native Echo went, and even how many companies like U-Turn were hired to create propaganda for the military. Furlong came up with other wild ideas, some of which were never approved. But the relationship between the military and U-Turn blossomed, and it offers a concrete illustration of how the armed forces evolved into a network of spies.
The Way of the Knife is full of stories like this, of people living on the edge between two worlds, frequently not sure how to operate on turf that had long been forbidden. The book is a culmination of Mazzetti's years of reporting on the intersections of the military and the CIA, and it is a forceful, compelling articulation of a new way of war. Mazzetti's reporting has been among the most important on this beat, in that it has shed light on usually hidden practices, particularly the use of brutal interrogations on terrorist detainees. As the book unfolds, we see how the 9/11 attacks shake the CIA out of its Cold War culture of espionage, and turn the agency into a highly efficient global killing force.
I spoke with Mazzetti yesterday as he was heading off to New York to begin a book tour. He said that he began working on the book after the raid that killed Osama bin Laden, and that the first few months of writing were filled with some anxiety, since his journalism beat was now the hottest around. Lots of his competitors were writing books and long magazine articles about the raid. But Mazzetti said that he wanted to write something broader, to show how the long arc of the war on terror has fundamentally changed how the US fights.
"I covered the Pentagon for five years, and then I have been covering the intelligence world since 2006," Mazzetti said. "And really, I realized that I was kind of covering the same beat. The lines that existed before 9/11, where the military did this and the spies did that, really have blurred."
Mazzetti said he's glad to be back at the Times after a 15-month book leave. He had missed the collegiality of an office. Writing a book is solitary business. But in the midst of the project, Mazzetti and his wife, Lindsay, welcomed Max, their first child.
"I can't wait until he is old enough to read this book," Mazzetti writes in his acknowledgments. "I cherish the memories of the mornings we spent together during the first few months, and of the smiles he delivered when I came home at the end of particularly frustrating days of book writing. They put things in perspective."
A newly declassified issue of a technical journal published by the National Security Agency opens a fascinating window into how the United States first started to grapple with the complexities, the risks, and the potential advantages of cyber warfare.
The journal was published in the spring of 1997, shortly after the Defense Secretary delegated to the NSA the development of new computer network attack techniques, defined then as "operations to disrupt, deny, degrade, or destroy information resident in computers or computer networks, or the computers and networks themselves." This was "information warfare," as practitioners then called it. And NSA's earliest cyber warriors saw themselves on the cusp of a momentous undertaking, one for which even the agency's own technology-savvy workforce was not completely prepared.
"We are on the edge of a new age, called the 'Information Age,'" writes Bill Black, then the NSA Director's special assistant for information warfare. It was "engulfing almost every aspect of society, including the very nature of our business"--spying on other governments and intercepting electronic communications.
This didn't exactly catch the NSA by surprise; it was, and still is, the home to some of the most brilliant computer scientists the world has ever known. But perhaps because the agency understood so well the potential of technology, it knew better than most how computer networks and the increasingly integrated digital world could be exploited for strategic advantage, both by the United States and its adversaries.
In one especially prescient article in the journal, the author (his or her name has been redacted) writes about the potential of computer network attacks for "destroying enemy power facilities."
"In previous conflicts, if you wished to destroy or disable an economic/industrial target, you needed to place ordnance on it." But information warfare was "making possible infinitely scalable, infinitely accurate strikes on infrastructure targets by means of cyber-attacks on the information infrastructure needed to operate it."
This wasn't theoretical speculation. According to former military intelligence officers I spoke to when researching my book, by the late 1990s the Army was practicing for information warfare in military operations, and researchers were actively looking, as one former officer put it, "for ways to knock out the lights in Tehran," meaning a cyber attack on electrical power facilities in Iran.
Today, a cyber attack on a US power facility is the nightmare scenario many officials use to highlight the urgency of raising America's cyber defenses. "We know that cyber intruders have probed our electrical grid and that in other countries cyber attacks have plunged entire cities into darkness," President Obama said in May 2009, when he announced that his administration would devote more attention to securing cyberspace.
The NSA journal also shows that America's early cyber warriors were building up an arsenal of sorts. Any cyber attack must be directed at a vulnerability in a network, a piece of software, or a device. These are the back doors and security gaps that allow a cyber intruder to get into a system, preferably undetected. The NSA was tracking these vulnerabilities, and apparently hoarding them in secret.
"One unofficial survey within NSA listed some eighteen separate organizations who were collecting vulnerability information in one form or another!" writes one anonymous author, who seems exasperated at the lack of a more unified, coherent approach to keeping track of this valuable information. "Intelligence operatives wish to protect their sources and methods" for collecting, the author writes. "No one really knows how much knowledge exists in each sector." And without that knowledge, there could be no "large-scale national" approach to cyber war, which, the journal makes clear, is not only something NSA wanted, but was directed to do by the Pentagon.
The journal also shows the extent to which the NSA feared that US networks were vulnerable to the very kinds of attacks the agency was imagining. Yet then, as now, there was insufficient understanding of just how much risk private networks faced, because makers and users of technology were reluctant to disclose their own vulnerabilities.
"Companies wish to maintain consumer confidence and their competitive advantage," one author writes. The NSA's efforts weren't helped by poor public relations. "The public sees the government as the bad guy," writes Bill Black, who later became the NSA's deputy director. "Specifically, the focus is on the potential abuse of the Government's applications of this new information technology that will result in an invasion of personal privacy." This was hardly news even in 1997. And though it's still true today, there is perhaps greater concern by companies and technology manufacturers that they will be held legally liable when their vulnerable products enable a cyber attack or an intrusion.
The whole journal makes for fascinating reading. It's infused with both respect for and anxiety about the power of technology, and its rapid, unyielding proliferation in the world. The opportunities and the threats of a global network are presented as complex, risky, and yet impossible to ignore. In this respect, it is striking how little has changed.
Reuters reports that the Treasury Department is going to give US intelligence agencies full access to a large amount of financial information that it obtains from banks and other institutions. This includes reports of money transfers that are routinely used to track terrorist and criminal finances around the world.
A plan being drawn up by the Obama administration would link the Financial Crimes Enforcement Network (FinCEN) to the Joint Worldwide Intelligence Communications System (JWICS), which is essentially a classified intranet for the Defense Department and intelligence agencies. Those agencies already have had access to FinCEN data, but only on a case-by-case basis. Now, Reuters reports, agencies like the CIA and the National Security Agency are going to be plugged into the financial data network and have unprecedented ability to roam around.
This would be an incremental change in policy. But don't overlook its significance.
More and more lately, the government is focusing on what agencies do with the data they collect, rather than the means of collection. US law is currently oriented mostly to regulate collection. And practically speaking, the government can collect a lot--a whole lot.
It's all the post-collection activity--the moving, shaping, sharing, and storing of information--that we know far less about. Who can see it? How long can an agency hold onto it? What kinds of technologies are applied to make sense of it? These are arguably more important questions than how a piece of information was collected if you're truly concerned about protecting privacy and civil liberties, and if you want to know whether that glut of information coming into the system is actually keeping the country safer.
FinCEN is a good example of how collection really isn't novel anymore. It has a massive data set based on routine and voluminous standardized reporting from banks and other financial institutions. It most famously includes so-called suspicious activity reports that institutions are required to file whenever they notice transactions or money transfers that might indicate criminal activity.
This collection occurs on a broad and massive scale. According to the Treasury Department, US financial institutions file more than 15 million suspicious activity reports every year about transactions that exceed $10,000. Only a fraction of them could involve criminal activity. But the end result is that if any significant amount of money moves from one set of hands to another through the US financial system, FinCEN is supposed to know about it.
And what it knows tells investigators a lot about the nature of organized crime and terrorist networks. FinCEN was around before the 9/11 attacks, and it earned a reputation among investigators for being a well-run operation with useful tools for peering into money laundering operations and detecting fraud. After the terrorist attacks, it became the centerpiece in an interagency effort--i.e., a shared information effort--to track down the conduits of terrorist money, and by extension, the terrorists themselves.
In the past few months, there have been other examples of the government shifting attention from collection of information towards analysis and sharing among different agencies, on the theory that the more access analysts and investigators have, the more likely they are to crack a case or find a lead.
The National Counterterrorism Center is now allowed to hold onto information about airline passengers, international travelers, and a host of other categories for five years, a much longer period of time than previously allowed.
The NSA has been sharing more information about cyber security threats with the Homeland Security Department, in an effort to protect critical infrastructure. And in the coming weeks that effort will be extended to the private sector, as the government gives threat signature data to US telecommunications companies, so that they can monitor their networks for malicious code and intrusions.
Each of these changes involves information that the government already collects, legally. But from the administration's perspective, information that sits in one place often loses its value. That explains the shift in policy at FinCEN and elsewhere. The information is flowing more freely now, and the volume and frequency of that flow is going to increase.
The Obama administration is about to pull US telecommunications companies even deeper into the ongoing cyber conflict with China.
Foreign Policy reports that in the coming weeks, the National Security Agency, in concert with the Homeland Security Department and the FBI, "will release to select American telecommunication companies a wealth of information about China's cyber-espionage program." The idea behind this reportedly classified operation is to give the telecoms more information about how Chinese cyber spies ply their trade, so that American companies can in turn get ahead of the threat and better defend themselves.
The information the government will share with the companies includes "sophisticated tools that China uses, countermeasures developed by the NSA, and unique signature-detection software that previously had been used only to protect government networks," FP reports.
This marks an escalation in the so-called "public-private partnership" that has existed for a few years now on the ever-expanding cyber battlefield. The government has already been sharing with telecom companies some domain names and Internet addresses associated with suspected spies and hostile actors. The companies, which run and manage the country's networks, are in turn expected to exercise some level of surveillance and defense, which theoretically redounds to the benefit of their customers.
This hasn't really made cyberspace any safer, nor has it significantly reduced cyber espionage and malware attacks against US companies. So now, the government is effectively giving the companies more cyber "ammo," in the form of richer, and more secretive intelligence, which it has traditionally guarded. In theory, the companies will have greater insight into how spies are trying to crack their networks.
The timing of this event doesn't seem coincidental. In February, computer security firm Mandiant released a report naming the Chinese military as a major source of espionage against U.S. companies. I'm told by knowledgeable sources that the release of that report was coordinated with the Defense Department and the Homeland Security Department, which just a day earlier released much of the same threat information that's in the Mandiant report, but without attributing the source to China. Like the new information-sharing program, these are not rhetorical strategies, but rather tactical attempts to push back against cyber spying and give US companies more means to defend themselves.
The Obama administration has long understood that in order to defend cyberspace, it's going to have to enlist the cooperation and active participation of US companies. The US government, for all its technical intelligence prowess, simply cannot defend a network infrastructure that is almost entirely owned and operated by the private sector.
For their part, companies have been itching to get more information and to change the often one-way flow of threat information from the private sector to the government. Companies know their networks are threatened, but they often don't know much about the sources of those intrusions, and what else the intruders are capable of doing. They need a government intelligence agency to obtain that information--mainly through espionage, which companies can't legally practice on their own.
Yesterday, the chief information officer for Dow Chemical Company told a Senate panel that he'd like to see more information sharing from the government to industry, and among different sectors of US companies. He's about to get some of what he asked for.
To some extent, this information exchange has been happening already. For the past few years, US defense contractors have been sharing threat information with the government and allowing government agencies to monitor their networks, so the intelligence community can gather information about US adversaries, and how they work.
Now, though, the administration is pushing this cooperation even deeper into the telecom sector, essentially taking the fight down to the level of the network operators. That's a significant development. Think of this as deputizing some companies in the new cyber war. We're going to see a lot more of this in the future.
Google is expanding its regular “transparency report” to include some broad statistics on the numbers of national security letters it receives from the US government. It’s a significant step for the company to publicly disclose what it privately tells the authorities about its users, and it gives us some more insight into how the government monitors the vastness of the Internet.
The numbers Google is reporting are broad. But the big takeaway here is that the FBI--the primary user of national security letters--appears to be interested not so much in the content of a person's email, but rather in what's known as "basic subscriber information," more high-level data such as a person's name, address, and the length of service on his account. This information is potentially more useful, and surely easier to get, than the written contents of an e-mail.
At first glance, the numbers of NSL requests Google is reporting look “awfully high” for one company, says Cato’s Julian Sanchez, who breaks down the report and places it in the context of what we already know about how NSLs, which are notoriously opaque tools for secretly obtaining information, are used.
Comparing the Google numbers for NSLs to those released by the Justice Department, one might conclude that the company received one-seventh of all NSL requests, something Sanchez concludes “seems impossible.” Google is big, but not so big that it would account for an outsized share of all NSLs relative to every other company that receives them. Telecommunications companies, including phone and Internet service providers, as well as financial institutions regularly get NSLs, which require companies to hand over different kinds of information short of the actual content of a message.
So why are Google’s numbers so high? Sanchez persuasively argues that Google is counting requests for basic subscriber information, and that the Justice Department, in its own NSL reports, is not. Looking at Google’s numbers, it would appear that the “overwhelming majority” of NSL requests it receives are for this basic subscriber information, Sanchez writes, which suggests, troublingly, “that the total number of Americans affected by all NSLs is thus vastly, vastly larger than the official numbers would suggest.”
I think Sanchez is right. And it makes sense based on what we know about how law enforcement and intelligence agencies use electronic information to track people and monitor the Internet for various threats.
For instance, shortly before the 9/11 attacks, the National Security Agency asked Qwest Communications for subscriber information on its then-quickly expanding communications network. The NSA’s goal was to monitor the Internet for potential cyber threats against the government. (This was years before cyber security became de rigueur in national security circles, so this was a very foresighted move by the NSA.) After the attacks, the NSA again made the request, this time for tracking terrorists.
Qwest refused, however, after concluding that access to such detailed customer information was illegal without a warrant. Qwest executives and lawyers decided that even though the information wasn't technically "content," it was still revealing enough that giving it to the government required some legal approval.
This is an important point. Call logs and records of phone calls may be called “basic” information under the law, but they are full of rich, potentially illuminating information about a person. Today, government agencies, including the NSA, use basic data, particularly phone logs and Internet addresses, to create detailed pictures of a person’s communications and his associations. It doesn’t really matter, in this context, that the data doesn’t include the text of an e-mail or the spoken words of a phone call.
The Google disclosure underscores the extent to which the government is after this kind of general data, more so than actual content. National security letters are not warrants, but they’re being used today to obtain information of the kind that the NSA wanted from Qwest. This should come as no surprise, given how well the NSA, and the FBI, anticipated the ways that digital technology would transform communication, and how that would, in turn, give the government new opportunities for collecting information.
The way the FBI is using national security letters today, if Sanchez's analysis is correct, suggests that written e-mails aren’t really what investigators want most. It’s easier under the law to get basic information, and that information can tell them a lot about their targets, often more than the text of an e-mail itself. Think about it: How likely is a suspected terrorist to spell out his intentions in a message? You’d learn a lot more about his capability to do harm by positioning him within a bigger terrorist network, and you can understand and illuminate that network with the kinds of information that Google and other NSL recipients provide. This broad information is also useful to investigators when they're trying to identify individuals who they can scrutinize more closely with searches that require a warrant.
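The investigative value of this "basic" information can be illustrated with a toy sketch. Everything below is invented for the example; the point is only that bare metadata, with no message content at all, is enough to position each number within a network:

```python
# Hypothetical sketch: a contact graph built from call logs alone.
from collections import defaultdict

call_logs = [  # (caller, callee) pairs -- invented data
    ("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E"),
]

# Build an undirected graph of who has been in contact with whom.
graph = defaultdict(set)
for caller, callee in call_logs:
    graph[caller].add(callee)
    graph[callee].add(caller)

# Even a crude measure like contact count hints at who sits near
# the center of the network -- no e-mail text or audio required.
hub = max(graph, key=lambda n: len(graph[n]))
print(hub, sorted(graph[hub]))  # prints: C ['A', 'B', 'D']
```

Real link analysis is of course far more elaborate, but the principle is the same: the structure of the communications, not their content, is what identifies the people worth a closer, warrant-backed look.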
The other reason why a government agency would want this kind of basic information? “To effectively de-anonymize the otherwise unknown user of a particular account,” Sanchez says. That's just what investigators did when they determined that Paula Broadwell was sending anonymous e-mails to a friend of Gen. David Petraeus. Sanchez speculates that this digital de-cloaking may be “the primary reason” an agency would ask Google for basic subscriber information.
There’s an important wrinkle in all of this. Google also said that when it receives NSLs, it doesn’t disclose Internet protocol addresses. “Since these can be crucial to linking a wide array of online activity to a particular user, their exclusion would somewhat limit the potential of NSLs to undermine Internet anonymity,” Sanchez writes. But it could be that this exclusion is just a Google policy. Sanchez concludes that “it is not at all clear whether other providers will disclose IP addresses in response to NSLs.”
We should also keep in mind that NSLs are not the only means by which companies share information with the government, nor are IP addresses the only way to unmask someone or provide useful intelligence for investigators. Nevertheless, this is an enlightening report, and it adds to the ever-accreting body of details about how the government watches us, and what companies are doing to comply with the law and at the same time protect their customers’ information. Never an easy balance. It’ll only get harder.
More than a decade after the 9/11 terrorist attacks, a set of extraordinary and secretive surveillance programs conducted by the National Security Agency has been institutionalized, and they have grown.
These special programs are conducted under the code name Ragtime, and are divided into several subcomponents, according to the new book Deep State: Inside the Government Secrecy Industry, by Marc Ambinder and D.B. Grady. (I purchased a copy this morning.)
The authors, both journalists who cowrote a previous book about special operations in the military, have dug deep into the code names and operational nitty gritty of the NSA's secretive and hugely controversial surveillance programs, and they've come up with impressive new details.
Ragtime, which appears in official reports under the abbreviation RT, consists of four parts.
Ragtime-A involves US-based interception of all foreign-to-foreign counterterrorism-related data;
Ragtime-B deals with data from foreign governments that transits through the US;
Ragtime-C deals with counterproliferation activities;
and then there's Ragtime-P, which will probably be of greatest interest to those who continue to demand more information from the NSA about what it does in the United States.
P stands for Patriot Act. Ragtime-P is the remnant of the original President’s Surveillance Program, the name given to so-called "warrantless wiretapping" activities after 9/11, in which one end of a phone call or an e-mail terminated inside the United States. That collection has since been brought under law, but civil liberties groups, journalists, and legal scholars continue to seek more information about what it entailed, who was targeted, and what authorities exist today for domestic intelligence-gathering.
Deep State has some answers.
Only about three dozen NSA officials have access to Ragtime's intercept data on domestic counter-terrorism collection. That's a tiny handful of the agency's workforce, which has been pegged at about 30,000 people.
As many as 50 companies have provided data to this domestic collection program, the authors report.
If the NSA wants to collect information on a specific target, it needs one additional piece of evidence beyond its own "link-analysis" protocols, a computerized analysis that assigns probability scores to each potential target. This is essentially a way to use a data-mining program to help determine whether someone is a national security threat, but the authors find that the score alone isn't sufficient to justify collection on that target. And while the authors found that the Foreign Intelligence Surveillance Court rarely rejects Ragtime-P requests, it often asks the NSA to provide more information before approving them.
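The book doesn't describe how those link-analysis protocols work internally, but the general idea can be sketched with an invented scoring rule. This is purely hypothetical; the names, numbers, and scoring logic below are made up for illustration:

```python
# Invented example of a link-analysis-style score: what share of a
# candidate's known contacts are already tied to flagged numbers?
known_flagged = {"555-0100", "555-0101"}  # hypothetical flagged numbers

def link_score(contacts):
    """Probability-style score in [0, 1] for a candidate target."""
    if not contacts:
        return 0.0
    hits = sum(1 for c in contacts if c in known_flagged)
    return hits / len(contacts)

candidate = ["555-0100", "555-0199", "555-0101", "555-0200"]
print(link_score(candidate))  # prints 0.5
```

Per the authors' account, a score like this would not be enough on its own: the agency would still need at least one independent piece of evidence before collecting on the target.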
How the surveillance is approved tells us a lot about the breadth of the NSA's intelligence gathering. The court and the Attorney General both certify a slate of approved targets under Ragtime-P, the authors find. That includes a certain amount of "bulk data"—such as phone call logs and records—that can be collected around those targets. An NSA official told the authors that Ragtime-P can process as many as 50 different data sets at one time.
What happens next looks like a 21st-century data assembly line. At the NSA's headquarters in Fort Meade, Maryland, a program called Xkeyscore processes all intercepted electronic signals before sending them to different "production lines" that deal with specific issues. Here, we find another array of code names.
Pinwale is the main NSA database for recorded signals intercepts, the authors report. Within it, there are various keyword compartments, which the NSA calls "selectors."
Metadata (things like the "To" and "From" field on an e-mail) is stored in a database called Marina. It generally stays there for five years.
In a database called Maui there is "finished reporting," the transcripts and analysis of calls. (Metadata never goes here, the authors found.)
As all this is happening, there are dozens of other NSA signals activity lines, called SIGADS, processing data. There's Anchory, an all-source database for communications intelligence; Homebase, which lets NSA analysts coordinate their searches based on priorities set by the Director of National Intelligence; Airgap, which deals with missions that are a priority for the Department of Defense; Wrangler, an electronic intelligence line; Tinman, which handles air warning and surveillance; and more.
Lest you get confused by this swirl of code names and acronyms, keep this image in mind of the NSA as a data-analysis factory. Based on my own reporting, the agency is collecting so much information every day that without a regimented, factory-like system, analysts would never have the chance to look at it all. Indeed, they don't analyze much of it. Computers handle a chunk, but a lot of information remains stored for future analysis.
So who is monitoring this vast production to ensure that the communications of innocent Americans aren't spied on? Ambinder and Grady report that for the NSA's terrorism-related programs, the agency's general counsel's office regularly reviews "target folders," which contain the identities of those individuals who are under surveillance, "to make sure the program complied with the instruction to surveil those reasonably assumed to have connections to al-Qaeda."
That the NSA is policing itself may come as small comfort to many critics of the Obama administration's intelligence programs. The size of the "compliance staff" that monitors this activity is only about four or five people, depending on what's available in the budget at any moment, the authors report. They also say that we cannot know whether the program is pushing beyond the boundaries of the law.
However, beyond the closed circle of about three dozen NSA employees who are read in to Ragtime, more than 1,000 people outside the agency "are privy to the full details of the program." If NSA is breaking the law, "how much longer can that secret last?" the authors ask.
We have a precedent for testing this hypothesis, albeit in a limited fashion. In 2004, the senior leadership of the Justice Department and the FBI threatened to resign over what they saw as illegal collection activities at the NSA, collection activities that are still going on under Ragtime and under new surveillance law.
Back then, James Comey, acting as Attorney General while John Ashcroft was in the hospital, refused to sign a set of certifications provided by the Justice Department to Internet, financial, and data companies, the authors report. Why? Comey believed that the justification for providing bulk data to the NSA wasn't sufficient.
The administration's tortured logic "drove him bonkers. There was just no way to justify this," the authors report, quoting people who have spoken to Comey, who has never publicly said why he objected. Interestingly, the authors find that the parts of the program he was objecting to didn't implicate the Foreign Intelligence Surveillance Act.
This comports with my own reporting in my book, The Watchers. The NSA was making "mirrors" of telecommunications databases, so that analysts could go through the data and mine it for clues. As it has been explained to me, the problem here dealt with how the government viewed its legal authorities to access data stored in computers, and whether analysts could dip back into it without specific authorizations. Importantly, this data consisted of that so-called "bulk data." It wasn't recorded phone calls or the text of e-mails. That information was governed by FISA--or should have been--because it was considered "content" under law, and that requires a warrant to obtain.
The White House panicked when Comey and Ashcroft refused to sign off, Ambinder and Grady report, fearing that the companies on which the NSA was depending for information would cut the agency off if they didn't get a signed order from the Attorney General himself. It took six months for the administration to reshape the program so that it comported with the "interpretation of the metadata provisions" promulgated by the Justice Department's Office of Legal Counsel.
Had these officials resigned, it's unthinkable that the secrets of NSA's intelligence gathering activities would have stayed hidden. A year later, in 2005, they were revealed in part by the New York Times. Here, too, Ambinder and Grady have some new insights. It turns out that while the NSA's director, General Michael Hayden, was publicly excoriating the newspaper for disclosing the classified activities, he was privately glad that they withheld what he considered key operational details.