Cyber is a Sisyphean task. Few days pass without the release of new malware, mention of a data breach, or concerns over fake news. Ever-increasing quantities of time, money, and energy are spent, often to little effect besides accelerated burnout within cybersecurity teams. Progress seems fleeting: setbacks and resets are frequent, whether through system faults, newly discovered vulnerabilities, major exploits, technological advances or regulatory imposition.
That makes the goal of the Minister for Home Affairs and Cybersecurity, Clare O’Neil, of making Australia the most cyber-secure country in the world by 2030 laudable, but a big ask. After all, cyber has grown to encompass not just the stricter definitions of information security and encryption, but also misinformation, surveillance and online fraud, and thence the assurance of transactions, integrity and identity—all the elements of being online.
Because of its extension into the social, the economic and the political, cyber is not going to disappear. It is a feature, not a rewritable bug, of our globalised, digitally enabled, socio-technical world. It cannot be managed through a technical or legislative fix, corrected by policy fiat, or wished away. Nor is it a matter that can be left in the hands of technocrats or national security professionals.
Even predicting what the cyber world will look like in 2030, only seven years away, is chancy. Think about what it looked like seven years ago, in 2016. That was before the 2017 WannaCry and NotPetya attacks; before the phishing attacks during the 2016 US presidential election and the leaks during the 2017 French presidential election; and before the effects of Cambridge Analytica’s involvement in the 2016 Brexit referendum were clear. The 2021 Colonial Pipeline attack advertised the broader surge of ransomware attacks that has defined the cyber environment over the past few years. The 2020 SolarWinds hack presaged supply-chain vulnerabilities. Russia’s 2022 invasion of Ukraine is reshaping the cyber environment—and expectations of its use—yet again.
Is the minister’s goal of ‘safety’ viable? ‘Safety’ is a softer, more comforting word than ‘security’. It also fits neatly into what Sir David Omand referred to in 2011 as the ‘protecting state’ that emerged as the dominant paradigm in national security after the 11 September 2001 terrorist attacks. Governments, he said, are now expected to protect their publics from major disruptive events—indeed, to anticipate their occurrence—and to ensure systems and society are sufficiently resilient.
The problem is that such an approach lends itself to risk aversion: any change may be interpreted as potentially disruptive and a challenge to the state. It may also imply wresting control and responsibility away from citizens and companies at a time when they need greater agency over their wellbeing—a dangerous notion.
And resilience does not come cheap; its cost is too often and too readily transferred to those least able to bear it. Moreover, despite the importance of resilience, policymakers under pressure tend to direct attention and funding to the immediate and tactical—threats and disruptions—rather than the long-term and strategic.
A better way to think about the problem may be to draw on the analogy of public health. It is a useful, albeit rough, metaphor for cyber: both are concerned with health, hygiene, and viruses.
Public health encompasses a breadth of care, activities and expertise. There’s the personal level—including basic hygiene from an early age and family doctors for regular care and updates. There are annual vaccinations—which any cyber practitioner would instantly recognise as ‘patching regimes’. Hospitals are available for emergencies and more complex operations. There’s public infrastructure, engineering and sanitation. Research institutions provide insight and breakthroughs. The whole is enriched by a broad allied-health ecosystem and supported by financial institutions and a range of additional disciplines.
That health ecosystem developed over centuries, most recognisably with the modernisation of the state from the 1700s onwards—the rise of the urban classes in Western Europe, an increasing reliance on economic rather than landed wealth, and the growing use of science and investigation.

A human-centered approach to health, one founded in ethics and accountability, is critical. For example, disease often follows in the wake of social deprivation, poverty, inequality and illiteracy. Writing on typhus in the mid-1800s, Rudolf Virchow argued that democracy and social justice were the best way to control such epidemics: ‘[m]edicine is a social science, and politics is nothing more than medicine on a large scale.’
The history of medicine also shows how technology and expertise can be abused. At one extreme, there is forced experimentation. Closer to home, there are concerns over access to personal health information, including social media posts, following the overturning of Roe v Wade in the United States, and the leaking of personal health data in the Medibank hack in Australia, with their full consequences yet to emerge.
In short, any form of technology, whether medical, digital or cyber, has very human implications. The technology itself has no implicit value—what matters are the choices made about its design, application, access, use and control. Who gets to make those choices, and how, is a measure of power. There is a difference, for example, between cyber surveillance applied to a worker in the workplace, in commerce or in the public commons, and that individual having irrevocable control over their own identity, privacy and user data—even when the tools used are the same.
And that’s the danger with cyber left to its own devices, unmoored from the broader social, economic and political substrate. Precisely because cyber intrudes so deeply into people’s lives, the conduct of commerce, the nature of society and the structure of our political systems, it needs to be brought out from behind the closed doors of the national security community. Otherwise, the risk is that the privacy of individuals, corporate concerns, societal welfare and political norms may be discounted, with insufficient oversight or means of recourse, in favour of institutional interests.
That’s not to say that the national security community does not have valid concerns. It would be foolish to disregard the real threat posed to Australia’s national interests by nation-states such as China and Russia—plus several second-tier states—and by criminal actors. But it is also in the interests of the national security community, as Omand argues, to retain the trust of the public.
Further, it’s clear that cyber constitutes, alongside conventional and nuclear weapons, a third strategic domain, albeit one with a different strategic logic. As in the early nuclear age, policymakers are working out how to exercise power in that domain. There’s a sense of urgency, and a conviction that where certainty is lost, it’s to their opponents’ advantage. And that’s potentially dangerous, as it reinforces tendencies towards inequality and control.
And so we need a compelling narrative, a vision, for the use of the technology such that it supports, rather than undermines, democracy: providing not merely guardrails but a suite of countervailing powers. Simply using the language of safety, security and protection does not of itself ensure an outcome that upholds democracy—or that supports individual or worker rights in the face of increasing automation, growing distrust or intrusive surveillance.
Moreover, it will not be enough simply to discuss skills. That risks perpetuating a pattern of commoditising people, valuing them only for what they contribute to the enterprise or the state: a thoroughly utilitarian perspective.
More is needed. The vision must be about more than automation or surveillance, both of which favour the powerful. It needs to be one that creates new industries, generates new roles and enhances creativity, not one that simply automates the known, undercuts agency and erodes freedoms. As Daron Acemoglu and Simon Johnson argue, the latter is the path of ‘so-so productivity’ and undermined democracy.

A better starting point than focusing on the negative consequences of digital technology is the appreciation that the technology is fundamentally human, and so fundamentally political. A second step could be, like the 2019 US Cyberspace Solarium Commission, taking the time with the broader community to understand and build consensus around the conceptually new and challenging phenomenon of cyber.
There are, of course, resonances with von Clausewitz: ‘Everything in war is very simple, but the simplest thing is difficult.’ And so keeping the public health ecosystem in mind helps, not least because it illustrates the complexity of the ecosystem needed to manage cyber and the challenge of getting there. It offers a touchstone for appropriate technology: technology that supports, not erodes, democracy.
And it better supports a notion of resilience that is arguably more suited to the slings and arrows of an uncertain strategic environment than barring doors and shutting windows—after all, we accept that it’s better to be healthy and active than shut away but ‘safe’. To borrow from Dune’s Muad’Dib, the clear, safe path ‘leads ever down into stagnation.’