My talk at the Digital Agenda Assembly  21.6.13

I had a few lively days here in Dublin. Not only could I escape the mind-softening heat in central Europe and enjoy Ireland’s more bracing climate; the European Commission’s arm for all things digital, DG Connect (formally: the European Commission Directorate General for Communications Networks, Content and Technology), and the Irish EU Presidency had also invited us to this year’s Digital Agenda Assembly to reflect on the progress of and further opportunities for the Digital Agenda for Europe. The first half of the two-day event was packed with seven all-day workshops, and I had the pleasure of sharing a panel with a number of accomplished gentlemen (which perfectly represents the piteous excess of men in the infosec scene) in the workshop on “Building an open, safe & secure cyberspace.” Giuseppe Abbamonte, the author of substantial parts of Europe’s cyber-strategy and of the Impact Assessment that accompanies the proposed Directive on Network and Information Security, convincingly explained why a European approach is needed: only ten member states have developed a convincing strategy against cybercrime, while the rest of the pack lags terribly behind. Frederic Martinez of Alcatel Lucent shared details from the trenches. MEP Malcolm Harbour added insights from the European Parliament. Nick Coleman, IBM’s Head of Global Cyberintelligence, talked about responsibilities and processes. I then played the role of the academic and did what we are best at: raising questions and doubts, widening the perspective, and thereby providing ideas that are hopefully not applicable in the office the next day.

Here’s roughly what I talked about:

---

Following the contributions from representatives of industry, policy making, and regulatory authorities, I’d like to address two things in my statement:
First, I will give a brief summary of my field of research, i.e. Internet and network security from a political, economic, and organisational angle, share some of the common wisdom of that field, but also highlight some issues where we cannot contribute substantial knowledge yet.
Second, I want to place the proposed NIS directive in the wider context of our search for appropriate forms for the governance and provisioning of internet security.

It is safe to say that in our field of research it is widely accepted that the incentives among all actors are misaligned. Those actors who have the technical and organisational capabilities to mitigate ongoing attacks, to invest in mechanisms that prevent them in the first place, and to help increase the overall resilience of ICT systems often have too little economic incentive to actually intervene and help improve the situation. Everyone has reasons to ignore the need to step up in the cybersecurity game until they are themselves hit by an attack. Vendors of software and hardware, Internet services and hosting providers, end users, and even police forces have plausible reasons why security is not high on their agenda, even if things may have changed slightly in the last few years. The ensuing scientific discussion has therefore focussed on how to raise incentives for actors who can make a difference in NIS. ISPs were soon identified as a potential object of regulation, as they appear to have the capabilities required to mitigate ongoing incidents.
But there certainly are still quite a number of puzzles to which our field of research cannot make sufficient contributions. And as our search for good regulatory interventions will go on for a while, we might want to answer them. Good regulation should ideally be based on facts, not on unknown unknowns. Among the questions we have no sufficient answers to yet are:
* Which intermediaries act responsibly and help to respond to ongoing attacks and structural, long-term risks?
* Which containment strategies against botnets or malware work best?
* Which owners of networks are negligent when it comes to security and which set good examples?

These are admittedly very specific questions. But we also have some wider, more general puzzles that need to be solved. The by and large discerning Impact Assessment, which accompanied the NIS directive proposal and was prepared by the Commission staff, has characterised the previous (and still existing) voluntary approach to cybersecurity as a partial failure. My inner researcher, however, would consider the generalised statement that the voluntary approach has failed not as proven knowledge, but rather as a hypothesis. It leaves unanswered which elements and institutions of that voluntary approach to security governance have not worked, which have, and why.
Before we kiss these voluntary approaches goodbye and replace them with public capabilities and institutions, we’d better have some answers to these questions.

This leads us—and this is my second point—to a fundamental issue, potentially the most momentous of all internet politics issues: Which institutions do we want for internet security governance? How do we want to govern and provision it in the future? And which modalities of sharing do we want to use?

Internet security governance and production is a wicked game. It is such a tricky thing for a number of reasons:
a) It’s about security. And security policy usually involves force, enforcement, and secrecy. None of these factors fits particularly well with the much-heralded ideals of transparency and openness.
b) It’s a transnational issue. The distributed nature of the problem, of incidents, of the systems involved, of perpetrators and attacks, and of the actors required for mitigation requires global solutions.
c) It mingles foreign with domestic security, and foreign policy with public policy. The practices of foreign and national security have traditionally differed from those in the domain of homeland security. Transferring the former into the latter substantially changes it, and our societies with it.
d) All of that results in a potentially precarious state of legitimacy of internet security policies.

So how do we govern for internet security? And which institutions for sharing do we want?

To give you an idea of the range of possibilities that might be applied or already are applied, I’d like to describe two ideal-typical institutional approaches to internet security. The types fundamentally differ in their inner organisation and governance model, their legitimacy model, their access restrictions, their use of hierarchies, their application of coercion, their scalability and flexibility, and the role of trust and authority.

The first type is called the “information hegemony”. The information hegemon achieves all-encompassing situational awareness by technical and organisational means. His superior knowledge is shared with like-minded allies, who in turn share their proprietary knowledge and data with the hegemon, resulting in an even broader picture for him. The hegemon is equipped with informational resources and technologies that allow him to identify and mitigate security threats irrespective of their geographical location.

The second model is a global network of communities of experts. These experts come from different constituencies, mostly IT operations, but also from law enforcement, police, or CERTs. Members of these communities share information and collaborate on certain technologies, internet services, geographies, or actual incidents. They are self-governed, bottom-up, distributed. Access to them, however, is restricted and depends on trust relationships with existing members.

These are the ideal-typical, and at least partially existing, governance models in place to increase cybersecurity.

As a closing remark, let me add a few words on the NIS directive proposal. The proposed EU model would establish a new security network, but one that differs from the existing Internet security communities set up by technical experts. The NIS directive proposes a “cooperation network”, in which the Commission and the planned national “competent authorities” (possibly addenda to existing national CERTs) share information on risks and actual attacks. The “cooperation network” will certainly help to overcome some of the knowledge problems I’ve described above. (And the envisaged Advanced Cyber Defence Centre, which aims at nothing less than getting rid of botnets and bots, will help here, too.) They will help raise the security standards in public administrations and in some businesses that have so far not invested in the resilience of their networks. So the directive might very well be a nucleus for improvement.

But, in the light of recent events, we also need to make sure that these new state-controlled capabilities don’t set us on a slippery slope towards something worse. Security institutions always bear the risk of becoming a risk to other aspects of security. My hunch is that our existing capabilities to oversee security institutions and bind them to the public will are insufficient, especially in the emerging domain of public NIS institutions.

The unfolding of the information umbrella  10.6.13

“We are the leaders, we can be the information hegemon.” (David Rothkopf)1

Well, who would be surprised that the NSA apparently sniffs the hell out of the databases located in data centers on U.S. soil, operated by American companies. The writing has been on the wall for at least fifteen years. Numerous high-level persons have said enough for anyone to connect the dots. The strategy is obvious, and always has been. It might surely help to discover some terrorists. It also helps to keep your hegemony going smoothly for another while. Informational supremacy supplements US dominance in military affairs, global political institutions, currency and financial markets, and global cultural affairs. It is playing its game very nicely. Accidents happen, but

An eye-opener was Joseph Nye’s “America’s Information Edge”, co-authored with William Owens and published in Foreign Affairs in March/April 1996.2 While the article focussed on military information systems, its blending of military dominance based on superior information systems with information-based soft power spurred imaginations of how else information systems could be used to foster a nation’s relative power in global politics. Enter the information umbrella.

“These capabilities [dominant situational knowledge] point to what might be called an information umbrella. Like extended nuclear deterrence, they could form the foundation for a mutually beneficial relationship. The United States would provide situational awareness, particularly regarding military matters of interest to other nations. Other nations, because they could share this information about an event or crisis, would be more inclined to work with the United States. … As its capacity to provide this kind of information increases, America will increasingly be viewed as the natural coalition leader, not just because it happens to be the strongest but because it can provide the most important input for good decisions and effective action for other coalition members. Just as nuclear dominance was the key to coalition leadership in the old era, information dominance will be the key in the information age.”

Martin Libicki added more details to Nye and Owens’ information umbrella strategy in 1998, which was to replace Cold War nuclear strategy. A “system of systems” would be established, and other nations would be granted access to parts of it on a quid-pro-quo basis:

“The quid would be access to the System’s services and data, including feeds (e.g., those covering global flashpoints, movement tracks, ambient conditions), indicators (e.g., crime reports in certain categories, sectoral business activity) and monitors (e.g., traffic, pollution, switch activity). The quo would be, in effect, the System’s access to a nation’s spaces (e.g., a very open skies regime) as well as to extant monitors and databases.”3

Libicki’s calculation, elaborated in a section headlined “The System as Strategy”, apparently was that the “system of systems” would be so expensive and complex, and yield network effects galore, that no other international contender would be able to trump the US-initiated system:

“The United States can be aggressively generous in giving away its information and access to its structure… (…) [T]he underlying economies of scale in fielding sensors or integrating systems to illuminate the world may yield results similar to what global markets are achieving.”

Giving the first shots away for free to attract users, creating dependencies, increasing value through network effects, and thus raising exit costs over time has been an essential feature of drug lords and the ICT industry ever since. Rephrasing Max Boot on the art of leading an empire (I don’t have the source here right now, hence no cite): to ensure its very survival, an empire needs to dry up dangerous competencies of potential rivals. With the creation of an information umbrella, the US establishes itself as the global security provider. Its services can be enjoyed by other nations as long as they relinquish parts of their national sovereignty in security matters to the imperial security system.

And then came that series of events that “represents a failure of intelligence, law enforcement, information management – and technology.”4 With technology the cause, technology was the cure. The Markle Foundation Task Force, a joint working group of the Markle Foundation, the Brookings Institution, and CSIS, was the most comprehensive attempt to contemplate the use of IT against terrorism in a think tank environment. The goal the task force set itself: “Exploiting America’s IT Advantage.”5 The task force suggested building an organisational and technical network in which intelligence agencies, law enforcement, local, state, and federal bureaucracies, the military, and private enterprises would share all the data and information that could potentially be valuable for detecting future terrorist attacks. The potential data sources for the envisaged “Systemwide Homeland Analysis and Resource Exchange Network” (SHARE) were countless:

“[I]mportant information or analytical ability resides not just in the 14 intelligence components of the federal government and federal law enforcement and security agencies, but also with the 17,784 state and local law enforcement agencies, 30,020 fire departments, 5,801 hospitals and the millions of first responders who are on the frontlines of the homeland security effort. Add to this the thousands of private owners and operators of critical infrastructures, who are responsible for protecting potential targets of terrorist attacks, and the many more private companies that may have information in their databases that could lead to the prevention of terrorist activity.”6

The range of information objects that were deemed relevant is no less impressive: it starts with details on “birth, deaths, and marriages” printed on marriage, birth, death, and divorce certificates (collected by VitalCheck), continues to the category “Internet” with information objects like “file postings” and “website search history” (collected by ISPs such as AOL, MSN, Yahoo, CompuServe, EarthLink or search engine providers such as Google, Altavista, MapQuest, and Ebay), to the category “lifestyle interest” with information objects such as “cable-viewing history”, “product activation”, or “Internet opt-in news sources”, and finally concludes with the category “work force” with information objects such as the names of persons working on bridges, dams, and harbours.

That’s what an illustrious circle from US think tanks, the IT industry, academia, intelligence, and the media, including James Lewis (CSIS), Craig Mundie (Microsoft), Ashton Carter (then Harvard U, now Dep MoD), Esther Dyson, Amitai Etzioni (old-hand thinker who drafted Kennedy’s gradualism and, more recently, the idea of a Global Safety Authority), David J. Farber (Carnegie Mellon U), James Dempsey (CDT), Eric Holder (then Covington & Burling, now Attorney General), Gilman Louie (In-Q-Tel), and Winston Wiley (Booz Allen Hamilton, the company with that by now presumably former employee), to name only a few, came up with in 2002/2003. The idea of IT as a panacea for the ill of terrorism was formed in those months a good ten years ago. Given the way European authorities and legislators continue to feed the deep data throat on the other side of the pond, the counter-terrorism strategy has been successfully merged with the foreign policy strategy of the information umbrella.


  1. David Rothkopf, then Visiting Fellow at Carnegie, CEO of Intellibridge, formerly at Kissinger Associates, quoted in: Carnegie Endowment for International Peace (2000). “Cyberpolitik: The Information Revolution and U.S. Foreign Policy.” 22.03.2000. URL: http://www.ceip.org/files/events/cyberpolitik.asp?p=5&EventID=51 (04.05.2004) ↩

  2. Nye, Joseph S., and William A. Owens (1996). “America’s Information Edge.” In: Foreign Affairs 2 (March/April 1996), pp. 20-36. URL: http://search.epnet.com/direct.asp?db=bsh&jn=%22FAF%22&scope=site. ↩

  3. Libicki, Martin (1998). “Information War, Information Peace.” In: Journal of International Affairs 2 (Spring 1998), pp. 411-428. ↩

  4. Ham, Shane (2002). “Winning with Technology.” In: Blueprint Magazine (16.01.2002). URL: http://www.ppionline.org/ppi_ci.cfm?knlgAreaID=140&subsecID=900017&contentID=250038 (19.05.2004). ↩

  5. Markle Foundation – Task Force on National Security in the Information Age (2002). “Protecting America’s Freedom in the Information Age. First Report of the Markle Foundation Task Force.” Zoë Baird and James Barksdale, Chairmen; Michael A. Vatis, Executive Director. October 2002. URL: http://www.markletaskforce.org/documents/Markle_Full_Report.pdf (19.05.2004). ↩

  6. Markle Foundation – Task Force on National Security in the Information Age (2003). “Creating a Trusted Network for Homeland Security. Second Report of the Markle Foundation Task Force.” Zoë Baird and James Barksdale, Chairmen; Michael A. Vatis, Executive Director. December 2003. URL: http://www.markle.org/news/Report2_Full_Report.pdf (19.05.2004). ↩