My talk at the Digital Agenda Assembly  21.6.13

I had a few lively days here in Dublin. Not only did I escape the mind-softening heat in central Europe and enjoy Ireland’s more bracing climate. The European Commission’s arm for all things digital, DG Connect (formally: the European Commission Directorate General for Communications Networks, Content and Technology), and the Irish EU Presidency had invited participants to this year’s Digital Agenda Assembly to reflect on the progress of and further opportunities for the Digital Agenda for Europe. The first half of the two-day event was packed with seven all-day workshops, and I had the pleasure of sharing a panel with a number of accomplished gentlemen (this perfectly represents the piteous excess of men in the infosec scene) in the workshop on “Building an open, safe & secure cyberspace.” Giuseppe Abbamonte, the author of substantial parts of Europe’s cyber-strategy and of the Impact Assessment that accompanies the proposed Directive on Network and Information Security, convincingly explained the reasons for the need for a European approach: only ten member states had developed a convincing strategy against cybercrime; the rest of the pack lags terribly behind. Frederic Martinez of Alcatel Lucent shared details from the trenches. MEP Malcolm Harbour added insights from the European Parliament. Nick Coleman, IBM’s Head of Global Cyberintelligence, talked about responsibilities and processes. I then played the role of the academic and did what we are best at: raising questions and doubts, widening the perspective, and thereby providing ideas that are hopefully not applicable in the office the next day.

Here’s roughly what I talked about:

----

Following the contributions from representatives of industry, policy-making, and regulatory authorities, I’d like to address two things in my statement:
First, I’ll give a brief summary of my field of research, i.e. Internet and network security from a political, economic, and organisational angle, share some common wisdom of that field, but also highlight some issues where we can’t contribute substantial knowledge yet.
Second, I want to place the proposed NIS directive in the wider context of our search for appropriate forms for the governance and provisioning of internet security.

It is safe to say that in our field of research it is widely accepted knowledge that the incentives among all actors are misaligned. Those actors who have the technical and organisational capabilities to mitigate ongoing attacks, invest in mechanisms to prevent them in the first place, and help to increase the overall resilience of ICT systems often have too little economic incentive to actually intervene and help improve the situation. Everyone has reasons to ignore the need to step up their cybersecurity game until they are themselves hit by an attack. Vendors of software and hardware, Internet service and hosting providers, end users, and even police forces have plausible reasons why security is not high up on their agenda, even if things may have changed slightly here in the last few years. The ensuing scientific discussion has therefore focussed on how to raise incentives for actors who can make a difference in NIS. ISPs were soon identified as a potential regulatory object, as they appear to have the capabilities required to mitigate ongoing incidents.
But there certainly are still quite a number of puzzles to which our field of research can’t make sufficient contributions. And as our search for good regulatory interventions will go on for a while, we might want to answer them. Good regulation should ideally be based on facts, not the unknown unknown. Among the questions we have no sufficient answers to yet are:
* Which intermediaries act responsibly and help to respond to ongoing attacks and structural, long-term risks?
* Which containment strategies against botnets or malware work best?
* Which owners of networks are negligent when it comes to security and which set good examples?

These admittedly are very specific questions. But we also have some wider, more general puzzles that need to be solved. The by and large discerning Impact Assessment, which accompanied the NIS directive proposal and was prepared by Commission staff, has highlighted the previous (and still existing) voluntary approach to cybersecurity as a partial failure. My inner researcher, however, would consider the generalised statement that the voluntary approach has failed not proven knowledge, but rather a hypothesis. It leaves unanswered which elements and institutions of that voluntary approach to security governance have not worked. And which have? And why?
Before we kiss these voluntary approaches goodbye and replace them with public capabilities and institutions, we’d better have some answers to these questions.

This leads us—and this is my second point—to a fundamental issue, potentially the most momentous of all internet politics issues: Which institutions do we want for internet security governance? How do we want to govern and provision it in the future? And which modalities of sharing do we want to use?

Internet security governance and production is a wicked game. It is such a tricky thing for a number of reasons:
a) It’s about security. And security policy usually involves force, enforcement, and secrecy. None of these factors fits particularly well with the much-heralded ideals of transparency and openness.
b) It’s a transnational issue. The distributed nature of the problem, of incidents, of the systems involved, of perpetrators and attacks, and of the actors required for mitigation demands global solutions.
c) It mingles foreign with domestic security, and foreign policy with public policy. The practices of foreign and national security have traditionally differed from those in the domain of homeland security. The transfer of the former substantially changes the latter and our societies.
d) All of that results in a potentially precarious state of legitimacy of internet security policies.

So how do we govern for internet security? And which institutions for sharing do we want?

To give you an idea of the range of possibilities that might be applied or are applied, I’d like to describe two ideal-type institutional approaches to internet security. The types fundamentally differ in their inner organisation and governance model, their legitimacy model, their access restrictions, their use of hierarchies, their application of coercion, their scalability and flexibility, and the role of trust and authority.

The first type is called the “information hegemony”. The information hegemon achieves all-encompassing situational awareness by technical and organisational means. His superior knowledge is shared with like-minded allies, who in turn share their proprietary knowledge and data with the hegemon, which results in an even broader picture for the hegemon. The hegemon is equipped with informational resources and technologies that allow him to identify and mitigate security threats irrespective of their geographical location.

The second model is a global network of communities of experts. These experts come from different constituencies, mostly IT operations, but also from law enforcement, police, or CERTs. Members of these communities share information and collaborate on certain technologies, internet services, geographies, or actual incidents. They are self-governed, bottom-up, distributed. Access to them, however, is restricted and depends on trust relationships with existing members.

These are the ideal-typical, and at least partially existing, governance models in place to increase cybersecurity.

As a closing remark, let me add a few words on the NIS directive proposal. The proposed EU model would establish a new security network, but one that differs from the existing Internet security communities set up by technical experts. The NIS directive proposes a “cooperation network”, in which the Commission and the planned national “competent authorities” (possibly addenda to existing national CERTs) share information on risks and actual attacks. The “cooperation network” will certainly help to overcome some of the knowledge problems I’ve described above. (And the envisaged Advanced Cyber Defence Centre, which aims at nothing less than getting rid of botnets and bots, will help here, too.) It will help raise security standards in public administrations and in some businesses that have so far not invested in the resilience of their networks. So the directive might very well be a nucleus for improvement.

But, in the light of recent events, we also need to make sure that these new state-controlled capabilities don’t pave the way down a slippery slope into something worse. Security institutions always bear the risk of becoming a risk to other aspects of security. My hunch is that our existing capabilities to oversee security institutions and bind them to the public will are insufficient. Especially in the emerging domain of public NIS institutions.

The unfolding of the information umbrella  10.6.13

“We are the leaders, we can be the information hegemon.” (David Rothkopf)1

Well, who would be surprised that the NSA apparently sniffs the hell out of the databases located in data centers on U.S. soil, operated by American companies. The writing has been on the wall for at least fifteen years. Numerous high-level persons have said enough for anyone to connect the dots. The strategy is obvious, and has always been. It might surely help to discover some terrorists. It also helps to keep your hegemony going smoothly for another while. Informational supremacy supplements US dominance in military affairs, global political institutions, currency and financial markets, and global cultural affairs. It is playing its game very nicely. Accidents happen, but …

An eye-opener was Joseph Nye’s “America’s Information Edge”, co-authored with William Owens and published in Foreign Affairs in March/April 1996.2 While the article focussed on military information systems, its blending of military dominance based on superior information systems with information-based soft power spurred imaginations of how else information systems could be used to foster a nation’s relative power in global politics. Enter the information umbrella.

“These capabilities [dominant situational knowledge] point to what might be called an information umbrella. Like extended nuclear deterrence, they could form the foundation for a mutually beneficial relationship. The United States would provide situational awareness, particularly regarding military matters of interest to other nations. Other nations, because they could share this information about an event or crisis, would be more inclined to work with the United States. … As its capacity to provide this kind of information increases, America will increasingly be viewed as the natural coalition leader, not just because it happens to be the strongest but because it can provide the most important input for good decisions and effective action for other coalition members. Just as nuclear dominance was the key to coalition leadership in the old era, information dominance will be the key in the information age.”

Martin Libicki added more details to Nye and Owens’ information umbrella strategy in 1998, which was to replace the Cold War nuclear strategy. A “system of systems” would be established, and other nations would be granted access to parts of it on a quid-pro-quo basis:

“The quid would be access to the System’s services and data, including feeds (e.g., those covering global flashpoints, movement tracks, ambient conditions), indicators (e.g., crime reports in certain categories, sectoral business activity) and monitors (e.g., traffic, pollution, switch activity). The quo would be, in effect, the System’s access to a nation’s spaces (e.g., a very open skies regime) as well as to extant monitors and databases.”3

Libicki’s calculation, elaborated in a section headlined “The System as Strategy”, apparently was that the “system of systems” would be so expensive and complex, and would yield network effects galore, that no other international contender would be able to trump the US-initiated system:

“The United States can be aggressively generous in giving away its information and access to its structure… (…) [T]he underlying economies of scale in fielding sensors or integrating systems to illuminate the world may yield results similar to what global markets are achieving.”

Giving the first shots away for free to attract users, creating dependencies, increasing value through network effects, and thus raising exit costs over time has been an essential feature of drug lords and the ICT industry ever since. Rephrasing Max Boot on the art of leading an empire (I don’t have the source here right now, hence no cite): to ensure its very survival, an empire needs to dry up dangerous competencies of potential rivals. With the creation of an information umbrella, the US establishes itself as the global security provider. Its services can be enjoyed by other nations as long as they relinquish parts of their national sovereignty in security matters to the imperial security system.

And then came that series of events that “represents a failure of intelligence, law enforcement, information management – and technology.”4 With technology the cause, technology was the cure. The Markle Foundation Task Force, a joint working group of the Markle Foundation, the Brookings Institution, and CSIS, was the most comprehensive attempt to contemplate the use of IT against terrorism in a think-tank environment. The goal the task force set itself: “Exploiting America’s IT Advantage.”5 The task force suggested building an organisational and technical network in which intelligence agencies, law enforcement, local, state, and federal bureaucracies, the military, and private enterprises would share all the data and information that could potentially be valuable for detecting future terrorist attacks. The potential data sources for the envisaged “Systemwide Homeland Analysis and Resource Exchange Network” (SHARE) were countless:

“[I]mportant information or analytical ability resides not just in the 14 intelligence components of the federal government and federal law enforcement and security agencies, but also with the 17,784 state and local law enforcement agencies, 30,020 fire departments, 5,801 hospitals and the millions of first responders who are on the frontlines of the homeland security effort. Add to this the thousands of private owners and operators of critical infrastructures, who are responsible for protecting potential targets of terrorist attacks, and the many more private companies that may have information in their databases that could lead to the prevention of terrorist activity.”6

The range of information objects that were deemed relevant is no less impressive: it starts with details on “birth, deaths, and marriages” printed on marriage, birth, death, and divorce certificates (collected by VitalCheck), continues with the category “Internet” and information objects like “file postings” and “website search history” (collected by ISPs such as AOL, MSN, Yahoo, CompuServe, EarthLink or search engine providers such as Google, Altavista, MapQuest, and Ebay), then the category “lifestyle interest” with information objects such as “cable-viewing history”, “product activation”, or “Internet opt-in news sources”, and finally concludes with the category “work force” with information objects such as the names of persons working on bridges, dams, and harbours.

That’s what an illustrious circle from US think tanks, the IT industry, academia, intelligence, and the media, including James Lewis (CSIS), Craig Mundie (Microsoft), Ashton Carter (then Harvard U, now Dep MoD), Esther Dyson, Amitai Etzioni (old-hand thinker who drafted Kennedy’s gradualism and, more recently, the idea of a Global Safety Authority), David J. Farber (Carnegie Mellon U), James Dempsey (CDT), Eric Holder (then Covington & Burling, now Attorney General), Gilman Louie (In-Q-Tel), and Winston Wiley (Booz Allen Hamilton, the company with that by now presumably former employee), to name only a few, came up with in 2002/2003. The idea of IT as a panacea for the ill of terrorism was formed in those months a good ten years ago. Given the way European authorities and legislators continue to feed the deep data throat on the other side of the pond, the counter-terrorism strategy has been successfully merged with the foreign policy strategy of the information umbrella.


  1. David Rothkopf, then Visiting Fellow at Carnegie, CEO of Intellibridge, formerly at Kissinger Associates, quoted in: Carnegie Endowment for International Peace (2000). “Cyberpolitik: The Information Revolution and U.S. Foreign Policy.” 22.03.2000. URL: http://www.ceip.org/files/events/cyberpolitik.asp?p=5&EventID=51 (04.05.2004).

  2. Nye, Joseph S., and William A. Owens (1996). “America’s Information Edge.” In: Foreign Affairs 2 (March/April 1996), pp. 20-36. URL: http://search.epnet.com/direct.asp?db=bsh&jn=%22FAF%22&scope=site.

  3. Libicki, Martin (1998). “Information War, Information Peace.” In: Journal of International Affairs 2 (Spring 1998), pp. 411-428.

  4. Ham, Shane (2002). “Winning with Technology.” In: Blueprint Magazine (16.01.2002). URL: http://www.ppionline.org/ppi_ci.cfm?knlgAreaID=140&subsecID=900017&contentID=250038 (19.05.2004).

  5. Markle Foundation – Task Force on National Security in the Information Age (2002). “Protecting America’s Freedom in the Information Age. First Report of the Markle Foundation Task Force.” Zoë Baird, James Barksdale, Chairmen; Michael A. Vatis, Executive Director. October 2002. URL: http://www.markletaskforce.org/documents/Markle_Full_Report.pdf (19.05.2004).

  6. Markle Foundation – Task Force on National Security in the Information Age (2003). “Creating a Trusted Network for Homeland Security. Second Report of the Markle Foundation Task Force.” Zoë Baird, James Barksdale, Chairmen; Michael A. Vatis, Executive Director. December 2003. URL: http://www.markle.org/news/Report2_Full_Report.pdf (19.05.2004).

Post-Stuxnet market failures and socialisation of risks?  2.2.12

More than a year ago, we learned that Stuxnet would be a game changer. Indeed, no advisor in all things security failed to mention that the hack and blow-up of Iranian uranium enrichment facilities, allegedly originated by the U.S. and Israel (Langner), posed a showcase of future attacks on our beloved infrastructures and industrial production sites. While one might argue that the transfer of the world’s production sites to China serves as a moderating factor that keeps the scare from going wild, there are still some Industrial Control Systems implemented and running within, say, the EU or the U.S. With Stuxnet discussed ad nauseam both at security conferences and in global mainstream media, with policy awareness raised up to the level of the leaders of the universe, with calls for decisive policy responses on all policy levels, for cyber-defense programmes against prospective cyber-warfare attacks (by actors other than the U.S. and Israel), and for national and international critical infrastructure protection programmes – with all that, one would assume that at least some of the most obvious steps had been accomplished.

And then you read an update by the commercial community of technical experts on Industrial Control Systems. According to their assessment, the ICS industry acts deaf, akin to the automotive industry in “Fight Club” (mentioned in the scene in which the automotive-industry white-collar insomniac protagonist meets Tyler Durden on the airplane): it’s cheaper to let systems go bust occasionally and pay for some clean-up than to fix the systems preventively. Industrial control systems are still highly buggy, as a group of ICS security researchers around the consultancy Digitalbond tried to showcase at their SCADA Security Scientific Symposium (S4). For experts in the field, this has been common knowledge for more than a decade.

The technical ICS geniuses at the S4 conference put all the blame on the vendors, such as Siemens, General Electric, Schneider Modicon, Rockwell Automation, SEL, or Koyo Automation. But is it that easy? My experience from general IT, admittedly not ICS, tells me that life is more complicated. Independent consultancies that are bound to specific vendors certainly have no incentive to blame existing or prospective customers. More substantially, while there might be customers with inadequate security procedures out there, I highly doubt that knowledge about the notorious insecurity of a particular set of artefacts doesn’t exist somewhere in customer companies and doesn’t climb up the communication ladder to the CIOs or CSOs. If owners are not interested in getting their 20-year-old ICS fixed, a vendor interested in subsequent orders wouldn’t want to embarrass itself and its clients by being utterly explicit about the risks or the security hiccups of the installed base of legacy systems.

The financial sector and the nuclear industry serve as nice role models for dealing with, as we institutional-economics-infected researchers call it, negative externalities of societal or technical systems. For both system vendors and owners of such infrastructures, inactivity is a viable response to the publication of vulnerabilities. Why would you want to spend millions on hardening your chemical facilities against a rather hostile hack into their control systems? If the shit hits the fan, writing off your production site and transferring the external costs to the public is probably the most economical approach. Just make sure that the downfall of one site doesn’t bring down the entire parent group, as with those TEPCO guys who failed to install proper economic firewalls inside their group. There are no columns or rows for the rhetoric of cyber-warfare in the Excel sheets on which the executive boards of infrastructure owners rely in their decision making. The ongoing installation of insecure systems and components is certainly worrying.

The great potential realigners of incentives, aka public authorities, have remained rather calm on this issue, too. For Europe, Kroes is gunning for “providing the right incentives”, but we don’t know yet what the Commission will eventually come up with. Hohlmaier, the European Parliament’s rapporteur on cybersecurity issues, with a constituency in Siemens land, has been likewise silent on this, Google tells us. Inaction by incompetent or unwilling operators of information and industrial infrastructures might pose risks for the public at large. The public might want to live with some risks. Or prefer to have incentives realigned, i.e. get regulations installed that force vendors, customers, or third parties to invest in security measures. For the last couple of years, policy makers, researchers, and public authorities have been obsessed with “incentivising” third parties such as ISPs to make up for the failures of vendors and customers of ICT systems. For industrial control systems, I don’t see this option. It’s either the vendors and/or the customers (the owners of infrastructures) that need to foot the bill. Or learn to live with the risks. Just like we did with financial and nuclear systems.

Open Security Data  22.10.11

The European Commissioner for the Digital Agenda from the Dutch conservative-liberal VVD party, Neelie Kroes, announces an “ambitious EU Open Data Strategy“. It seeks to “encourage more openness and re-use of public sector data” by a Public Sector Information Directive. The Commission is planning to set up an “Open Data portal” for the European Commission, later to be supplemented by a “pan-European Open Data portal”.

This is indeed going to be huge, potentially at least. We have seen plenty of those geeky apps and web sites that make use of publicly available data and create some clever mashups. The usual meme of Open Data advocacy is that it fosters transparency and openness, enhances citizens’ say in public matters, and thereby strengthens democracy and whatnot. For all the open data hipness and siren songs, it remains to be seen whether the advantages will be evenly distributed among citizens, who might receive enhanced or innovative public and non-public services; entrepreneurs entering the markets with some fresh and bright ideas bureaucrats haven’t thought of; and ICT behemoths, which will most likely seize the opportunity and kick outtasking into new spheres to sell software, iron, and services.

A litmus test for the openness and transparency rhetoric is, as always, the area of security. Will there be a section in the Commission’s portal labelled “internet security” or “cyber security”? In Brussels, the draft Directive on “judicial cooperation … on combatting attacks against information systems” is still under consideration. Article 15, paragraph 3 states:

Member States shall transmit the data collected according to this Article to the Commission. They shall also ensure that a consolidated review of these statistical reports is published.

Here we have a perfect opportunity for the EC to display its willingness for openness of public sector data. In addition to merely releasing consolidated statistics about internet-based crimes, a more open approach appears perfectly feasible. We still lack reliable, deep knowledge about the scale of the internet security problem. Publicly accessible data would be very helpful in overcoming this deficiency and thus in providing the knowledge base for sound political decisions.

Open Data often tends to focus on low-hanging fruit such as geographic data, administrative documents, and similar kinds of public service raw data. The one and only area, however, that truly impacts the transparency of governmental action is security. Security is often grotesquely secretive, and security organisations are shielded from public scrutiny. With legitimate force entirely concentrated in their hands, these institutions protect citizens and society, but they also, by definition, pose a threat once organisational culture, political oversight, and political independence become non-optimal. Hence, democratic governance requires security organisations that are open to public oversight to the maximum degree possible without endangering societal security interests.

While Open Data “merely” requires adding public interfaces to existing data warehouses, Open Security Data admittedly needs a thorough analysis of which data is safe for publication and which isn’t. It shouldn’t be that hard, though, to make statistical cyber-crime databases public. For a start.
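
To illustrate how small that first step could be, here is a minimal sketch in Python of the kind of aggregation such a public interface would have to perform. It assumes a hypothetical internal export of incident reports (the file name and column names are illustrative assumptions, not an existing schema) and publishes only counts per month, country, and category, dropping every identifying field before the data leaves the house.

import csv
import json
from collections import Counter

# Hypothetical internal export; the schema is an assumption for illustration:
# columns: date (ISO), country, category, reporter_id, details
INPUT_FILE = "incident_reports.csv"
OUTPUT_FILE = "cybercrime_stats.json"

def aggregate(input_path):
    """Count incidents per (month, country, category); identifying fields
    such as reporter_id and free-text details never reach the output."""
    counts = Counter()
    with open(input_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            month = row["date"][:7]  # "YYYY-MM"
            counts[(month, row["country"], row["category"])] += 1
    return [
        {"month": m, "country": c, "category": cat, "incidents": n}
        for (m, c, cat), n in sorted(counts.items())
    ]

if __name__ == "__main__":
    with open(OUTPUT_FILE, "w", encoding="utf-8") as f:
        json.dump(aggregate(INPUT_FILE), f, indent=2)

Whether such aggregates are actually safe to publish (small counts can still re-identify reporters in small member states) is precisely the analysis that distinguishes Open Security Data from ordinary Open Data.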


Links on states’ recent activities in internet security  29.11.09

UK
UK cybersecurity centre starting operations in March – ZDNet.co.uk
Administered by the Cabinet Office; staff partly to be recruited from GCHQ, should have a hacker mentality; “primarily … a defensive role”, cyberattack as a “last resort”. The UK also has an Office of Cyber Security (OCS), set up last summer.
UK launches dedicated cybersecurity agency – ZDNet.co.uk
Gordon Brown: “we … have to secure our position in cyberspace in order to give people and businesses the confidence they need to operate safely there”
As the UK is at it: the Digital Economy Bill passed:

Britain’s new Internet law — as bad as everyone’s been saying, and worse. Much, much worse. – Boing Boing
Includes three strikes, stricter video-game ratings, ISPs forced to deliver data to the content industry, and carte blanche for the business secretary to come up with stricter regulations.
“It’s a declaration of war by the entertainment industry and their captured regulators against the principles of free speech, privacy, freedom of assembly, the presumption of innocence, and competition.” (BoingBoing)

US
The cyberwar plan, not just a defensive game – Nextgov
Stupid headline – who would think that cyber-warfare is only about defense?
“Computerized tools to penetrate an enemy’s phone system”, “computer viruses and malicious software programs that can disable electrical power systems, corrupt financial data, or hijack air traffic control systems”, “cyber-intruders have probed our electrical grid” (no, not the squirrel terrorists), “we’d have cadres of people who’d know how to do that”, “Military forces fight for the ownership of that domain [cyber-battlefield]”, “Defense Department graduates only about 80 students per year from schools devoted to teaching cyber-warfare”, “proposed building a military ‘botnet,’ an army of centrally controlled computers to launch coordinated attacks on other machines”. “The risk of losing control of a weapon provides a powerful incentive not to use it”

See also: National Journal Magazine – The Cyberwar Plan

Who’s in Big Brother’s Database? – The New York Review of Books
Degree of surveillance measured in electricity bills: 70 million per year http://bit.ly/3DwW49

Information Security News: NIST Drafts Cybersecurity Guidance
“tackling criticism that federal cybersecurity regulations have placed too much weight on periodic compliance audits”; “more onus on applying risk management throughout the lifecycle of IT systems”. Yawn.

[ISN] Inside the Ring – Chinese, Russian cyberwarfare
Like nuke-counting in the eighties.
Noteworthy: a new Cyber Security Alliance.
14 tech firms form cybersecurity alliance for government — Government Computer News

Australia
Australian government overhauls national cyber security arrangements – Government & Policy
“against increasing online espionage and attacks on critical infrastructure”; new CERT Australia, Cyber Security Operations Centre (CSOC); details undisclosed

EU
Automated Social Networking Surveillance Systems
Statebook is going to be developed!?

====
How the Internet Ruined Newspapers, TV, Music, Movies, Microsoft – Newsweek 2010, The Internet: A Decade of Destruction – Internet Use/New Technologies
“wherever companies were profiting by a lack of transparency or a lack of competition, wherever friction could be polished out of the system, those industries suffered” – What about national political institutions (in the wider sense) then?