Empire or revolution. In the early Noughties, when Iraq had just been liberated and occupied, debates in international relations were dominated by contemplations about what already was or would soon become the American Empire. At long last, the colonnades of mighty columns that hold the architraves, friezes, and pediments of official buildings in D.C. would not just symbolize the perpetuation of a cultural heritage rooted in the Greek cradle of Western culture. They would from now on reflect that Washington had become Rome 2.0. At the same time, geeks and techno-optimists were hailing the endless possibilities of information technology, the Internet, and the changes it would bring to the world and its societies. There was something mutually exclusive about these two discourses. How would an alleged empire respond to something that allegedly had the capacity to turn the world upside down? Ignore it? Crush it? Embrace and extend it? Those were the questions I tried to address some ten years ago.
I had a few lively days here in Dublin. Not only could I escape the mind-softening heat in central Europe and enjoy Ireland’s more bracing climate. The European Commission’s arm for all things digital, DG Connect (formally: the European Commission Directorate General for Communications Networks, Content and Technology), and the Irish EU Presidency had invited to this year’s Digital Agenda Assembly to reflect on the progress of and further opportunities for the Digital Agenda for Europe. The first half of the two-day event was packed with seven all-day workshops, and I had the pleasure of sharing a panel with a number of accomplished gentlemen (which perfectly represents the piteous excess of men in the infosec scene) in the workshop on “Building an open, safe & secure cyberspace.” Giuseppe Abbamonte, the author of substantial parts of Europe’s cyber-strategy and of the Impact Assessment that accompanies the proposed Directive on Network and Information Security, convincingly explained the need for a European approach: only ten member states have developed a convincing strategy against cybercrime; the rest of the pack lags terribly behind. Frederic Martinez of Alcatel-Lucent shared details from the trenches. MEP Malcolm Harbour added insights from the European Parliament. Nick Coleman, IBM’s Head of Global Cyberintelligence, talked about responsibilities and processes. I then played the role of the academic and did what we are best at: raising questions and doubts, widening the perspective, and thereby providing ideas that are hopefully not applicable in the office the next day.
Here’s roughly what I talked about:
Following the contributions from representatives from industry, policy making and regulatory authorities, I’d like to address two things in my statement:
First, I give a brief summary of my field of research, i.e., Internet and network security from a political, economic, and organisational angle, share some of the common wisdom of that field, but also highlight some issues on which we can’t contribute substantial knowledge yet.
Second, I want to situate the proposed NIS directive in the wider context of our search for appropriate forms for the governance and provisioning of internet security.
It is safe to say that in our field of research, it is widely accepted knowledge that the incentives among all actors are misaligned. Those actors who have the technical and organisational capabilities to mitigate ongoing attacks, to invest in mechanisms that prevent them in the first place, and to help increase the overall resilience of ICT systems often have too few economic incentives to actually intervene and help improve the situation. Everyone has reasons to ignore the need to step up their cybersecurity game until they are themselves hit by an attack. Vendors of software and hardware, Internet service and hosting providers, end users, and even police forces have plausible reasons why security is not high on their agenda, even if things may have changed slightly in the last few years. The ensuing scientific discussion has therefore focussed on how to raise incentives for actors who can make a difference in NIS. ISPs were soon identified as a potential regulatory object, as they appear to have the capabilities required to mitigate ongoing incidents.
But there certainly still are quite a number of puzzles to which our field of research can’t make sufficient contributions. And as our search for good regulatory interventions will go on for a while, we might want to answer them. Good regulation should ideally be based on facts, not on unknown unknowns. Among the questions we have no sufficient answers to yet are:
* Which intermediaries act responsibly and help to respond to ongoing attacks and structural, long-term risks?
* Which containment strategies against botnets or malware work best?
* Which owners of networks are negligent when it comes to security and which set good examples?
These admittedly are very specific questions. But we also have some wider, more general puzzles that need to be solved. The by and large discerning Impact Assessment, which accompanied the NIS directive proposal and was prepared by the Commission staff, has highlighted the previous (and still existing) voluntary approach to cybersecurity as a partial failure. My inner researcher, however, would consider the generalised statement that the voluntary approach has failed not as proven knowledge, but rather as a hypothesis. It leaves unanswered which elements and institutions of that voluntary approach to security governance have not worked. And which have? And why?
Before we kiss these voluntary approaches goodbye and replace them with public capabilities and institutions, we’d better have some answers to these questions.
This leads us—and this is my second point—to a fundamental issue, potentially the most momentous of all internet politics issues: Which institutions do we want for internet security governance? How do we want to govern and provision it in the future? And which modalities of sharing do we want to use?
Internet security governance and production is a wicked game. It is such a tricky thing for a number of reasons:
a) It’s about security. And security policy usually involves force, enforcement, and secrecy. None of these factors fits particularly well with the much-heralded ideals of transparency and openness.
b) It’s a transnational issue. The distributed nature of the problem, of incidents, of the systems involved, of perpetrators and attacks, and of the actors required for mitigation requires global solutions.
c) It mingles foreign with domestic security, and foreign policy with public policy. The practices of foreign and national security have traditionally differed from those in the domain of homeland security. The transfer of the former substantially changes the latter, and our societies with it.
d) All of that results in a potentially precarious state of legitimacy of internet security policies.
So how do we govern for internet security? And which institutions for sharing do we want?
To give you an idea of the range of possibilities that might be applied or are applied, I’d like to describe two ideal-typical institutional approaches to internet security. The types fundamentally differ in their inner organisation and governance model, their legitimacy model, their access restrictions, their use of hierarchies, their application of coercion, their scalability and flexibility, and the roles of trust and authority.
The first type is called the “information hegemony”. The information hegemon achieves all-encompassing situational awareness by technical and organisational means. His superior knowledge is shared with like-minded allies, who in turn share their proprietary knowledge and data with the hegemon, resulting in an even broader picture for the hegemon. The hegemon is equipped with informational resources and technologies that allow him to identify and mitigate security threats irrespective of their geographical location.
The second model is a global network of communities of experts. These experts come from different constituencies, mostly IT operations, but also from law enforcement, police, or CERTs. Members of these communities share information and collaborate on certain technologies, internet services, geographies, or actual incidents. They are self-governed, bottom-up, distributed. Access to them, however, is restricted and depends on trust relationships with existing members.
These are the ideal-typical, and at least partially existing, governance models in place to increase cybersecurity.
As a closing remark, let me add a few words on the NIS directive proposal. The proposed EU model would establish a new security network, but one that differs from the existing Internet security communities set up by technical experts. The NIS directive proposes a “cooperation network”, in which the Commission and the planned national “competent authorities” (possibly addenda to existing national CERTs) share information on risks and actual attacks. The “cooperation network” will certainly help to overcome some of the knowledge problems I’ve described above. (And the envisaged Advanced Cyber Defence Centre, which aims at nothing less than getting rid of botnets and bots, will help here, too.) It will help raise security standards in public administrations and in some businesses that have so far not invested in the resilience of their networks. So the directive might very well be a nucleus for improvement.
But, in the light of recent events, we also need to make sure that these new state-controlled capabilities don’t pave the way down a slippery slope into something worse. Security institutions always bear the risk of becoming a risk to other aspects of security. My hunch is that our existing capabilities to oversee security institutions and bind them to the public will are insufficient, especially in the emerging domain of public NIS institutions.
“We are the leaders, we can be the information hegemon.” (David Rothkopf)1
Well, who would be surprised that the NSA apparently sniffs the hell out of the databases located in data centers on U.S. soil, operated by American companies? The writing has been on the wall for at least fifteen years. Numerous high-level persons have said enough for anyone to connect the dots. The strategy is obvious, and always has been. It might surely help to discover some terrorists. It also helps to keep your hegemony going smoothly for another while. Informational supremacy supplements US dominance in military affairs, global political institutions, currency and financial markets, and global cultural affairs. It is playing its game very nicely. Accidents happen, but
An eye-opener was Joseph Nye’s “America’s Information Edge”, co-authored with William Owens and published in Foreign Affairs in March/April 1996.2 While the article focussed on military information systems, its blending of military dominance based on superior information systems with information-based soft power spurred imaginations of how else information systems could be used to foster a nation’s relative power in global politics. Enter the information umbrella.
“These capabilities [dominant situational knowledge] point to what might be called an information umbrella. Like extended nuclear deterrence, they could form the foundation for a mutually beneficial relationship. The United States would provide situational awareness, particularly regarding military matters of interest to other nations. Other nations, because they could share this information about an event or crisis, would be more inclined to work with the United States. … As its capacity to provide this kind of information increases, America will increasingly be viewed as the natural coalition leader, not just because it happens to be the strongest but because it can provide the most important input for good decisions and effective action for other coalition members. Just as nuclear dominance was the key to coalition leadership in the old era, information dominance will be the key in the information age.”
Martin Libicki added more details to Nye and Owens’ information umbrella strategy in 1998, which was to replace the Cold War nuclear strategy. A “system of systems” should be established, and other nations should be granted access to parts of it on a quid-pro-quo basis:
“The quid would be access to the System’s services and data, including feeds (e.g., those covering global flashpoints, movement tracks, ambient conditions), indicators (e.g., crime reports in certain categories, sectoral business activity) and monitors (e.g., traffic, pollution, switch activity). The quo would be, in effect, the System’s access to a nation’s spaces (e.g., a very open skies regime) as well as to extant monitors and databases.”3
Libicki’s calculation, elaborated in a section headlined “The System as Strategy”, apparently was that the “system of systems” would be so expensive and complex, and yield such network effects galore, that no other international contender would be able to trump the US-initiated system:
“The United States can be aggressively generous in giving away its information and access to its structure… (…) [T]he underlying economies of scale in fielding sensors or integrating systems to illuminate the world may yield results similar to what global markets are achieving.”
Giving the first shots away for free to attract users, creating dependencies, increasing value through network effects, and thus raising exit costs over time has been an essential feature of drug lords and the ICT industry ever since. Rephrasing Max Boot on the art of leading an empire (I don’t have the source here right now, hence no cite): to ensure its very survival, an empire needs to dry up dangerous competencies of potential rivals. With the creation of an information umbrella, the US establishes itself as the global security provider. Its services can be enjoyed by other nations as long as they relinquish parts of their national sovereignty in security matters to the imperial security system.
And then came that series of events that “represents a failure of intelligence, law enforcement, information management – and technology.”4 With technology the cause, technology was the cure. The Markle Foundation Task Force, a joint working group of the Markle Foundation, the Brookings Institution, and CSIS, was the most comprehensive attempt to contemplate the use of IT against terrorism in a think-tank environment. The goal the task force set itself: “Exploiting America’s IT Advantage.”5 The task force suggested building an organisational and technical network in which intelligence agencies, law enforcement, local, state, and federal bureaucracies, the military, and private enterprises would share all the data and information that could potentially be valuable for detecting future terrorist attacks. The potential data sources for the envisaged “Systemwide Homeland Analysis and Resource Exchange Network” (SHARE) were countless:
“[I]mportant information or analytical ability resides not just in the 14 intelligence components of the federal government and federal law enforcement and security agencies, but also with the 17,784 state and local law enforcement agencies, 30,020 fire departments, 5,801 hospitals and the millions of first responders who are on the frontlines of the homeland security effort. Add to this the thousands of private owners and operators of critical infrastructures, who are responsible for protecting potential targets of terrorist attacks, and the many more private companies that may have information in their databases that could lead to the prevention of terrorist activity.”6
The range of information objects deemed relevant is no less impressive: it starts with details on “births, deaths, and marriages” printed on marriage, birth, death, and divorce certificates (collected by VitalCheck), continues with the category “Internet” and information objects like “file postings” and “website search history” (collected by ISPs such as AOL, MSN, Yahoo, CompuServe, or EarthLink, or search engine providers such as Google, Altavista, MapQuest, and Ebay), moves on to the category “lifestyle interest” with information objects such as “cable-viewing history”, “product activation”, or “Internet opt-in news sources”, and finally concludes with the category “work force” with information objects such as the names of persons working on bridges, dams, and harbours.
That’s what an illustrious circle from US think tanks, the IT industry, academia, intelligence, and the media, including James Lewis (CSIS), Craig Mundie (Microsoft), Ashton Carter (then Harvard, now Deputy Secretary of Defense), Esther Dyson, Amitai Etzioni (the old-hand thinker who drafted Kennedy’s gradualism and, more recently, the idea of a Global Safety Authority), David J. Farber (Carnegie Mellon), James Dempsey (CDT), Eric Holder (then Covington & Burling, now Attorney General), Gilman Louie (In-Q-Tel), and Winston Wiley (Booz Allen Hamilton, the company with that by now presumably former employee), to name only a few, came up with in 2002/2003. The idea of IT as a panacea for the ill of terrorism was formed in those months a good ten years ago. Given the way European authorities and legislators continue to feed the deep data throat on the other side of the pond, the counter-terrorism strategy has been successfully merged with the foreign policy strategy of the information umbrella.
David Rothkopf, then Visiting Fellow at Carnegie, CEO of Intellibridge, formerly at Kissinger Associates, quoted in: Carnegie Endowment for International Peace (2000). “Cyberpolitik: The Information Revolution and U.S. Foreign Policy.” 22.03.2000. URL: http://www.ceip.org/files/events/cyberpolitik.asp?p=5&EventID=51 (04.05.2004) ↩
Nye, Joseph S., and William A. Owens (1996). “America’s Information Edge.” In: Foreign Affairs 2 (March/April 1996), pp. 20-36. URL: http://search.epnet.com/direct.asp?db=bsh&jn=%22FAF%22&scope=site. ↩
Libicki, Martin (1998). “Information War, Information Peace.” In: Journal of International Affairs 2 (Spring 1998), pp. 411-428. ↩
Ham, Shane (2002). “Winning with Technology.” In: Blueprint Magazine (16.01.2002). URL: http://www.ppionline.org/ppi_ci.cfm?knlgAreaID=140&subsecID=900017&contentID=250038 (19.05.2004). ↩
Markle Foundation – Task Force on National Security in the Information Age (2002). “Protecting America’s Freedom in the Information Age. First Report of the Markle Foundation Task Force.” Zoë Baird and James Barksdale, Chairmen; Michael A. Vatis, Executive Director. October 2002. URL: http://www.markletaskforce.org/documents/Markle_Full_Report.pdf (19.05.2004). ↩
Markle Foundation – Task Force on National Security in the Information Age (2003). “Creating a Trusted Network for Homeland Security. Second Report of the Markle Foundation Task Force.” Zoë Baird and James Barksdale, Chairmen; Michael A. Vatis, Executive Director. December 2003. URL: http://www.markle.org/news/Report2_Full_Report.pdf (19.05.2004). ↩
More than a year ago, we learned that Stuxnet would be a game changer. Indeed, no advisor in all things security failed to mention that the allegedly U.S.-Israeli (Langner) hack and blow-up of Iranian uranium enrichment facilities was a showcase of future attacks on our beloved infrastructures and industrial production sites. While one might argue that the transfer of the world’s production sites to China keeps the scare from going entirely wild, there are still some Industrial Control Systems implemented and running within, say, the EU or the U.S. With Stuxnet discussed ad nauseam both at security conferences and in global mainstream media, with policy awareness raised to the level of the leaders of the universe, with calls for decisive policy responses on all policy levels, for cyber-defense programmes against prospective cyber-warfare attacks (by actors other than the U.S. and Israel), and for national and international critical infrastructure protection programmes: with all that, one would assume that at least some of the most obvious steps have been accomplished.
And then you read an update by the commercial community of technical experts on Industrial Control Systems. According to their assessment, the ICS industry plays deaf, acting like the automotive industry in “Fight Club” (in the scene in which the white-collar insomniac protagonist meets Tyler Durden on the airplane): it’s cheaper to let systems go bust occasionally and pay for some clean-up than to fix the systems preventively. Industrial control systems are still highly buggy, as a group of ICS security researchers around the consultancy Digitalbond tried to showcase at their SCADA Security Scientific Symposium (S4). For experts in the field, this has been common knowledge for more than a decade.
The technical ICS geniuses at the S4 conference put all the blame on the vendors, such as Siemens, General Electric, Schneider Modicon, Rockwell Automation, SEL, or Koyo Automation. But is it that easy? My experience from general IT, admittedly not ICS, tells me that life is more complicated. Consultancies that are bound to specific vendors certainly have no incentive to blame existing or prospective customers. More substantially, while there might be customers with inadequate security procedures out there, I highly doubt that knowledge about the notorious insecurity of a particular set of artefacts doesn’t exist somewhere in customer companies and doesn’t climb up the communication ladder to the CIOs or CSOs. If owners are not interested in getting their 20-year-old ICS fixed, a vendor interested in subsequent orders wouldn’t want to embarrass itself and its clients by being utterly explicit about the risks or the security hiccups of the installed base of legacy systems.
The financial sector and the nuclear industry serve as nice role models for dealing with what we institutional-economics-infected researchers call negative externalities of societal or technical systems. For both system vendors and owners of such infrastructures, inactivity is a viable response to the publication of vulnerabilities. Why would you want to spend millions on hardening your chemical facilities against a rather hostile hack into their control systems? If shit hits the fan, writing off your production site and transferring the external costs to the public is probably the most economic approach. Just make sure that the downfall of one site doesn’t bring down the complete parent group, as with those TEPCO guys who failed to install proper economic firewalls inside their group. There are no columns or rows for the rhetoric of cyber-warfare in the Excel sheets on which the executive boards of infrastructure owners rely in their decision making. The ongoing installation of insecure systems and components is certainly worrying.
The great potential realigner of incentives, a.k.a. public authorities, has remained rather calm on this issue, too. For Europe, Kroes is gunning for “providing the right incentives”, but we don’t know yet what the Commission will come up with eventually. Hohlmaier, rapporteur of the European Parliament on cybersecurity issues, with a constituency in Siemens land, has been likewise silent on this, Google tells us. Inaction by incompetent or unwilling operators of information and industrial infrastructures might pose risks for the public at large. The public might want to live with some risks. Or prefer to have incentives realigned, i.e., to get regulations installed that force vendors, customers, or third parties to invest in security measures. For the last couple of years, policy makers, researchers, and public authorities have been obsessed with “incentivising” third parties such as ISPs to make up for the failures of vendors and customers of ICT systems. For industrial control systems, I don’t see this option. It’s either the vendors and/or the customers (the owners of infrastructures) that need to foot the bill. Or learn to live with the risks. Just like we did with financial and nuclear systems.
‘Old Karl would die of laughter’ 24.10.11
Fouché is enjoying #Occupy for a snack. The Committee of Public Safety announces:
Hippie global meliorism is Marxism without the House of War. It wills an end that can only be realized through means of violence. Yet they refuse to will the means. If laughter could be projected from an ocean and half a continent away, they’d hear Marx’s disembodied laughter drumming from the British Museum and echoing down their spine with Teutonic clarity. A classical Marxist revolutionary would do something revolutionary. They’d mass at the park, loot the city’s financial district, and then storm the state capital. The hippies did everything backwards: they retreated from the center of political power, abstained from seizing the center of economic power, and massed in an out-of-the-way outdoor drug/farmers market.
Old Karl would die of laughter.
History, but anyhow. Jon Baumgartner, “Computers as Weapons of War”, IO Journal, May 2010, pp. 5-8:
Similar IO attacks could be conducted against nation states that have violated international treaties in order to carry out uranium enrichment for nuclear weapons. Most of the unauthorized enrichment facilities in these cases are constructed deep underground. Conventional munitions, including bunker busters, could have difficulty penetrating and damaging these hardened structures. Cyber munitions, however, could be used to destroy key equipment used in the enrichment process. One of the primary IO targets would be the gas centrifuges used to create weapons grade uranium. The rotors within these centrifuges operate at extremely high speeds (e.g. 50,000 RPM). A cyber attack that increased the RPMs beyond normal safety levels could result in a catastrophic failure of a single centrifuge. Implementing this IO attack across thousands of centrifuges has the potential to disrupt enrichment operations for considerable periods of time.
A couple of months before the Stuxnet news broke.
Anonymous cyber terror 23.10.11
Dan Kaplan, SC Magazine:
In my eyes, this seems to be another step by U.S. officials, without exactly coming out and saying it, to label Anonymous as a cyber terrorist organization, bent on indiscriminate destruction of digital property and infrastructure.
The DHS in the “National Cybersecurity and Communications Integrations Center Bulletin”, A-0020-NCCIC / ICS-CERT –120020110916:
“The loosely organized hacking collective known as Anonymous has recently expressed an interest in targeting industrial control systems (ICS). (…) Anonymous’ increased interest may indicate intent to develop an offensive ICS capability in the future.”
Kaplan continues, on Duqu, the alleged Stuxnet offspring:
Which reminds me: I’m waiting for DHS to publish a warning based on a potential real critical infrastructure issue that popped up just yesterday — evidence that the Stuxnet authors are back with new malware. I’m sure the bulletin will arrive any minute now.
Even a year after, Langner sticks to his assessment:
Thinking about it for another minute, if it’s not aliens, it’s got to be the United States.
Open Security Data 22.10.11
The European Commissioner for the Digital Agenda from the Dutch conservative-liberal VVD party, Neelie Kroes, announces an “ambitious EU Open Data Strategy“. It seeks to “encourage more openness and re-use of public sector data” by a Public Sector Information Directive. The Commission is planning to set up an “Open Data portal” for the European Commission, later to be supplemented by a “pan-European Open Data portal”.
This is indeed going to be huge, potentially at least. We have seen plenty of these geeky apps and websites that make use of publicly available data to create clever mashups. The usual meme of Open Data advocacy is that it fosters transparency and openness, enhances citizens’ say in public matters, and thereby strengthens democracy and what not. For all the open-data hipness and siren songs, it remains to be seen whether the advantages will be evenly distributed among citizens, who might receive enhanced or innovative public and non-public services; entrepreneurs entering the markets with fresh and bright ideas bureaucrats haven’t thought of; and ICT behemoths, which most likely will seize the opportunity and kick outtasking into new spheres to sell software, iron, and services.
A litmus test for the openness and transparency rhetoric is, as always, the area of security. Will there be a section in the Commission’s portal labelled “internet security” or “cyber security”? In Brussels, the draft Directive on “judicial cooperation … on combatting attacks against information systems” is still under consideration. Article 15, paragraph 3 states:
Member States shall transmit the data collected according to this Article to the Commission. They shall also ensure that a consolidated review of these statistical reports is published.
Here we have a perfect opportunity for the EC to display its willingness for openness of public sector data. In addition to merely releasing consolidated statistics about internet-based crimes, a more open approach appears to be perfectly feasible. We still lack reliable, deep knowledge about the scale of the internet security problem. Publicly accessible data would be very helpful to overcome this deficiency and thus provide the knowledge base for sound political decisions.
Open Data often tends to focus on low-hanging fruit such as geographic data, administrative documents, and similar kinds of public-service raw data. The one area, however, that truly impacts the transparency of governmental action is security. Security is often grotesquely secretive, its organisations shielded from public scrutiny. With legitimate force entirely concentrated in their hands, these institutions protect citizens and society, but also, by definition, pose a threat once organisational culture, political oversight, and political independence become non-optimal. Hence, democratic governance requires security organisations that are open to public oversight to the maximum degree possible without endangering societal security interests.
While Open Data “merely” requires adding public interfaces to existing data warehouses, Open Security Data admittedly needs a thorough analysis of which data is safe for publication and which isn’t. It shouldn’t be that hard, though, to make statistical cyber-crime databases public. For a start.
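To illustrate how little machinery that first step would take, here is a minimal sketch of turning raw incident records into publishable statistics, with small cells suppressed so that rare, potentially identifying incidents stay out of the release. All field names, values, and the suppression threshold are my own illustrative assumptions, not anything the draft directive prescribes.

```python
from collections import Counter

# Hypothetical raw incident records as they might sit in a national
# "competent authority" database. The fields are illustrative only.
incidents = [
    {"country": "IE", "category": "botnet", "month": "2013-06"},
    {"country": "IE", "category": "botnet", "month": "2013-06"},
    {"country": "IE", "category": "phishing", "month": "2013-06"},
    {"country": "DE", "category": "botnet", "month": "2013-06"},
]

def publishable_stats(records, min_count=2):
    """Aggregate incidents per (country, category, month) and drop any
    cell below min_count, so rare and potentially identifying incidents
    are suppressed from the open-data release."""
    counts = Counter((r["country"], r["category"], r["month"]) for r in records)
    return {cell: n for cell, n in counts.items() if n >= min_count}

stats = publishable_stats(incidents)
# With the threshold of 2, only the (IE, botnet, 2013-06) cell is released.
```

A real release would of course need a proper disclosure-control review of thresholds and categories, but the point stands: the aggregation itself is trivial compared to the political decision to publish.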
“Hiroshima of cyberwar” 22.10.11
How could I miss that line in Michael J. Gross’ Stuxnet article in the April edition of Vanity Fair:
Stuxnet is the Hiroshima of cyber-war. That is its true significance, and all the speculation about its target and its source should not blind us to that larger reality. We have crossed a threshold, and there is no turning back.
A nice variation on the rhetorical corpse of the Digital Pearl Harbour recently excavated by the Washington Post. “Hiroshima of cyber-war” is an allegory conveying ideas and associations probably not intended by the author:
- The dawn of a new age of geopolitics defined by control over certain technological artefacts.
- The assumption by US security circles that unilateral and sole control over these artefacts equals incontestable geopolitical power, a truly “unipolar moment” (Charles Krauthammer) that was supposed to last considerably longer than until 1949, when the Soviets managed to assemble their “Fat Man” equivalent.
- The militarisation and secretisation of a potentially benevolent technology.
- The institution of a nuclear umbrella which served as a foreign policy instrument and “provided a cooperative structure, linking the United States in a mutually beneficial way to a wide range of friends, allies, and neutral nations.” (Nye/Owens 1996, p. 26)
A Hiroshima of cyberwar?
Hacker, concepts thereof 21.10.11
The Foreign Secretary revealed that Britain has developed new weapons to counter the threat from computer hackers and is prepared to strike first to defend the nation’s infrastructure and businesses. … The Government is investing an extra £650 million to develop deterrents to hostile viruses and hackers.
Joe Grand, grandideastudio.com:
My idealistic view of hacker is someone that is always asking questions, learning and has a thirst for knowledge. A hacker tries things that other people think are impossible and it’s someone that solves problems in a clever way.
Wie der Staatstrojaner zerlegt wurde: Die Hacker vom Chaos Computer Club haben die Überwachungssoftware gefunden, analysiert – und gehackt.
(How the state trojan was taken apart: the hackers of the Chaos Computer Club found, analysed, and hacked the surveillance software.)