Control, Not Comfort
Who Really Holds the Kill Switch in Your Technology Stack?
This piece has been forming in the background for a while.
The ICC sanctions incident made something visible that has been structurally true for years: we do not fully control the infrastructure we depend on.
This is an attempt to articulate what that means for how we think about security, resilience, and dependency.
In February 2025, the United States sanctioned the International Criminal Court and its staff. Within days, Karim Khan, the ICC’s chief prosecutor, found himself locked out of his Microsoft email. No hack. No breach. Just a policy decision in Washington, executed through infrastructure the ICC had trusted implicitly.
This was not an isolated incident; sanctions have been levelled against organisations and individuals before. But it was sobering for another reason: the target could have been anyone in Europe. An ally, suddenly on the receiving end of hostile sanctions from another ally.
For decades, organisations worldwide have built their digital operations on a foundation of US technology: Microsoft for productivity, Google for collaboration, AWS and Azure for cloud infrastructure, Zoom and Teams for communication, and now a growing dependence on AI frontier models. This was treated as a neutral technical choice, driven by functionality, integration, and the path of least resistance.
It was never neutral. The assumption that it would not matter has now been tested in public.
The Revelation That Should Not Have Been Surprising
The sanctions against the ICC and its staff exposed something that security professionals and geopolitical analysts had warned about for years: the extraterritorial reach of US jurisdiction, combined with the dominance of US technology firms, creates a dependency that can be weaponised unilaterally, without judicial process, and against parties the US itself previously treated as legitimate. A technology “kill-switch” that can be applied to anyone when the state wills it.
What made this case particularly instructive was the target. The ICC is not some adversarial entity. It is an institution that Western democracies helped establish, that many US allies actively support, and that was acting within its legal mandate. The sanctions were not levelled against a mutual enemy but against an organisation pursuing accountability that the current US administration found inconvenient.
This broke the implicit assumption that had sustained the status quo: that the US would only exercise these capabilities against “shared” threats.
We Allowed This Risk to Be Lopsided
Here is the uncomfortable truth that the current scramble for digital sovereignty obscures: we knew and ignored it.
The Snowden revelations in 2013 demonstrated unambiguously that the NSA was conducting mass surveillance on allied leaders. That US technology companies were either cooperating or being compelled to cooperate through programmes like PRISM. That the “Five Eyes” arrangement meant this was coordinated behaviour among Anglophone intelligence services. That the legal frameworks ostensibly protecting non-US persons were essentially theatre.
The response was telling. There was public outrage, some diplomatic friction, a few symbolic gestures, and then a remarkably rapid return to business as usual. European governments continued procuring US technology. Corporations continued migrating to US cloud platforms. The fundamental architecture of dependency not only persisted but deepened as cloud adoption accelerated through the 2010s.
Why? Because there was a collective choice, not always conscious or articulated, to treat the Snowden revelations as an intelligence problem rather than an infrastructure problem. The framing became “the Americans spy on us” rather than “we have built systems that structurally enable foreign powers to surveil and potentially disrupt our operations.”
We allowed this risk to be lopsided, working under the flawed assumption that “friendly” governments would not weaponise it against us. Or perhaps more accurately, we chose comfort over confrontation, convenience over sovereignty, the path of least resistance over strategic autonomy.
Until it started to hurt.
Surveillance Capability vs Control Capability
Snowden revealed surveillance capability. The sanctions cases reveal something different: control capability. The distinction matters.
Surveillance is passive in its immediate effect. Your operations continue. You may be compromised, but you are not disabled. You can even maintain the polite fiction that you do not know about it.
Sanctions-driven service termination is active and undeniable. You cannot log into your email. Your video conferences will not connect. Your cloud storage becomes inaccessible. There is no ambiguity, no deniability, and no option to simply continue as before. For individuals it is life changing. For corporations or organisations this becomes mission critical; loss of access to cloud storage or emails means complete operational paralysis.
This is what has finally broken through the wilful blindness. Not the abstract knowledge that the US could exercise control, but the concrete experience of that control being exercised against actors that European and international institutions considered legitimate.
The Dependency Map Most Organisations Have Not Drawn
To understand the exposure, consider where US control points exist in a typical organisation’s digital operations.
Identity and Access is perhaps the most critical layer. Microsoft Entra ID (formerly Azure AD), Okta, and similar US-based identity providers often control who can access what. If your identity provider is sanctioned or directed to cut you off, your staff may find themselves locked out of everything simultaneously.
Communication is the most visible layer. Email (Microsoft 365, Google Workspace), messaging (Slack, Teams), and video conferencing all flow through US-controlled infrastructure. The Khan case demonstrated this directly, and it is one reason France has begun moving official communications onto alternatives of its own.
Productivity and Collaboration encompasses documents created in US-controlled formats, stored on US-controlled cloud infrastructure, with US-controlled sharing mechanisms. Adobe Cloud, Zoom, Salesforce, Monday: the list extends well beyond the obvious names. Even self-hosted instances depend on licensing and update mechanisms that remain US-controlled. Add AI-enabled features and sensitive data begins flowing to processing infrastructure you may never have assessed.
Development and Operations is often overlooked. GitHub (owned by Microsoft), AWS, Azure, Google Cloud, and the npm/PyPI ecosystems all have US nexus. An organisation that builds on these platforms may find its deployment pipeline or code repositories inaccessible. The rise of AI-assisted development tools, most of which are tightly integrated with these same platforms, only deepens the dependency.
Financial Infrastructure extends beyond software. Payment processing, banking relationships, and SWIFT access have all demonstrated susceptibility to US secondary sanctions. Francesca Albanese, a UN Special Rapporteur, fell victim to US sanctions in ways that illustrate how deeply this reaches into daily life: inability to claim health insurance reimbursements, disrupted access to financial services in Europe through a globally integrated banking system, and denial or severe restriction from various US-based digital platforms and services. She has described herself as having been made a “non-person.”
Most organisations have not mapped these dependencies comprehensively. They know they use Microsoft or Google, but they have not traced the chain to understand where a single decision in Washington could sever multiple critical functions simultaneously.
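The mapping exercise described above can start very simply. As a minimal sketch, a dependency register might record, for each critical function, who holds operational control and under which jurisdiction that control sits, so that single points of foreign decision become visible. The entries and field names below are purely illustrative, not a recommendation of any tool or a claim about any specific organisation:

```python
from dataclasses import dataclass

@dataclass
class Dependency:
    function: str        # business function at risk
    provider: str        # vendor with operational control
    jurisdiction: str    # legal authority the provider answers to
    critical: bool       # would loss of access halt operations?

# Illustrative entries only; a real register would be built from an
# actual asset, contract, and data-flow inventory.
register = [
    Dependency("email", "Microsoft 365", "US", True),
    Dependency("identity / SSO", "Entra ID", "US", True),
    Dependency("code hosting", "GitHub", "US", True),
    Dependency("HR system", "local provider", "EU", False),
]

def single_decision_exposure(register, jurisdiction):
    """Critical functions that one decision in one capital could sever at once."""
    return [d.function for d in register
            if d.critical and d.jurisdiction == jurisdiction]

print(single_decision_exposure(register, "US"))
# prints ['email', 'identity / SSO', 'code hosting']
```

Even this toy version makes the essay’s point concrete: three unrelated-looking services collapse into one jurisdictional control point.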
No one expects to be sanctioned, of course; I use sanctions because they are easy to understand, but they are only one tool. Trade restrictions, tariffs, export controls, and taxation have all become instruments of US foreign policy, and the current administration has shown willingness to deploy them broadly. The question of digital sovereignty can no longer be separated from the broader reality of economic coercion. And the implications extend beyond direct action: organisations may find themselves self-censoring, pre-emptively withdrawing from platforms, or constraining their own operations to avoid entanglement. Whether the restriction is externally imposed or self-inflicted, the outcome is the same: diminished autonomy.
The OneDrive Realisation: When “Local” Is Not Local
These dependencies are not abstract policy concerns. They manifest in the routine operations of every connected device.
A recent experience crystallised this. Converting PDF files on a laptop, a purely local operation with temporary files intended for immediate deletion, I noticed unusual network activity. Investigation revealed that OneDrive, configured by default, was syncing every temporary folder to the cloud as fast as the conversion process created them. It prompted a reflection on how opaque cloud storage actually is to the end user.
Files may be stored, deleted and subsequently forgotten by the user, but in the backend, numerous things have happened. Those temporary files will have left their imprint on logs and repositories, and what actually happens on Microsoft’s infrastructure is not transparent to us. The principle that “deletion means deletion” does not hold in cloud environments. What you experience as deletion is better understood as a request to remove something from your view. What persists on the backend is outside your knowledge and control.
To be clear, I am well aware that my OneDrive files are stored in the cloud as well as locally. But the realisation was more uncomfortable: in a cloud-centric operating model, even ephemeral and local activities are not spared. This is not a bug. It is the deliberate design of modern operating systems. Cloud synchronisation is the default. Local-only storage requires active configuration. The boundary between “my device” and “the cloud” is intentionally blurred. The user experience is optimised for seamlessness rather than transparency.
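One way to make that blurred boundary visible again is to check, before staging sensitive temporary files, whether a working directory actually sits under a synced root. A sketch for Windows follows, relying on the `OneDrive`, `OneDriveConsumer`, and `OneDriveCommercial` environment variables that the sync client sets; treat it as a heuristic under that assumption, not a guarantee that nothing else syncs:

```python
import os
from pathlib import Path

def inside_onedrive(path):
    """Heuristic: is `path` under a OneDrive sync root?

    The OneDrive client on Windows sets the OneDrive, OneDriveConsumer,
    and OneDriveCommercial environment variables to its sync root(s).
    """
    roots = [os.environ.get(v) for v in
             ("OneDrive", "OneDriveConsumer", "OneDriveCommercial")]
    target = Path(path).resolve()
    for root in filter(None, roots):
        root = Path(root).resolve()
        if root == target or root in target.parents:
            return True
    return False

# Example: warn before creating temporary files in a synced folder.
# The "pdf-temp" path is hypothetical.
workdir = Path.home() / "pdf-temp"
if inside_onedrive(workdir):
    print(f"warning: {workdir} syncs to the cloud; use a local-only path")
```

A check like this does not undo the architecture, but it restores exactly the thing the default design removes: an explicit answer to “does this folder leave my machine?”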
The implications extend further. All files on OneDrive are accessible to US law enforcement or security agencies presenting Microsoft with appropriate credentials. Photographs and images are scanned using facial recognition technology for grouping purposes, including CSAM detection through hash matching. Although seemingly innocent and justifiable, this necessarily involves processes for managing false positives and human review of flagged content.
For organisations handling sensitive personal, financial, or legal information, this represents uncontrolled data exposure. The review processes, the staff conducting them, and the criteria being applied are entirely outside your risk management framework.
And then there is BitLocker. Windows’ encryption feature, marketed as letting end users protect their own data through client-side encryption, ships with a default configuration in which recovery keys are escrowed to your Microsoft account and held by Microsoft unless users actively opt out. This means your encryption is only as strong as Microsoft’s willingness and legal ability to refuse demands for those keys.
Recent reports (January 2026) confirmed that Microsoft has already complied with US law enforcement warrants, providing the FBI with BitLocker recovery keys that allowed investigators to unlock encrypted laptops. Microsoft confirmed it receives approximately 20 requests for BitLocker keys annually and, with valid warrants, provides agencies with keys to decrypt user data. The company stated it will continue to comply with court orders when it has access to those keys. This is no longer theoretical risk. It has been made very real under a plethora of legal texts including the CLOUD Act, FISA, the USA PATRIOT Act and USA FREEDOM Act, and the Stored Communications Act.
But lawful access is not the only concern. Johns Hopkins cryptography professor Matthew Green raised a more fundamental alarm: what happens when Microsoft’s cloud infrastructure gets breached? [1] Microsoft has suffered multiple significant security incidents in recent years. If attackers compromise those servers and exfiltrate recovery keys, they gain decryption capability for every device whose keys were stored there. They would still need physical access to the drives, but that is cold comfort for organisations with laptops that travel, get lost, or get stolen. The “Microsoft is safe” assumption does double duty: we trust they will resist improper requests, and we trust their infrastructure is secure enough to protect the keys we have handed them. Both assumptions are increasingly difficult to defend.
For users whose threat model is “someone steals my laptop,” this is adequate. For users whose threat model includes “a foreign government with legal authority over my technology vendor,” this is not encryption in any meaningful sense.
The Cybersecurity Profession’s Blind Spot
This creates a threat model that most organisations have not adequately addressed. Traditional cybersecurity focuses on malicious actors attempting unauthorised access. What we are now confronting is the risk of authorised access being abused or revoked by a trusted vendor acting under legal compulsion from a foreign government, with significant consequences for cybersecurity and operational resilience. And as the Karim Khan case shows, “foreign government” necessarily includes your allies. Lord Palmerston, in an 1848 speech to the House of Commons, famously said that in international geopolitics there are “no permanent enemies, and no permanent friends, only permanent interests”. That advice deserves careful attention.
Therefore, what we see here is not a vulnerability in the conventional sense. It is a feature of the architecture working as designed, just not in your favour, nor necessarily that of your organisation or nation.
The cybersecurity profession developed increasingly sophisticated frameworks for thinking about threats, vulnerabilities, and controls. But these frameworks largely excluded the category of risk now materialising.
Vendor risk management became a discipline, but it focused on whether vendors had adequate security practices, not on whether vendors could be compelled by their governments to act against customer interests. Supply chain security emerged as a concern, but primarily regarding malicious code injection rather than jurisdictional control. Threat intelligence developed elaborate taxonomies of nation-state actors but treated them as external attackers rather than potential controllers of your own infrastructure.
The assumption embedded in these frameworks was that your vendors were on your side. That assumption was always questionable after Snowden. It is now demonstrably false.
A genuinely updated cybersecurity posture would need to:
Incorporate “vendor-state threat” as a first-class category alongside nation-state attackers, criminals, and insiders
Develop controls and mitigations specifically for jurisdictional risk
Recognise that compliance with foreign legal demands is a threat vector, not merely a vendor’s legal obligation
Build resilience against service termination as seriously as resilience against service compromise
This is a substantial reorientation.
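To sketch what treating “vendor-state threat” as a first-class category could look like in a risk register, consider a taxonomy in which the usual intrusion-focused control families simply do not intersect it, forcing distinct mitigations onto the books. The category names and control families below are my own illustration, not drawn from any published framework:

```python
from enum import Enum, auto

class Threat(Enum):
    NATION_STATE_ATTACKER = auto()   # external adversary seeking access
    CRIMINAL = auto()
    INSIDER = auto()
    VENDOR_STATE = auto()            # your own vendor, under foreign legal compulsion

# Illustrative mapping of control families to the threats they address.
# Note that the intrusion-focused controls do nothing against VENDOR_STATE.
CONTROLS = {
    Threat.NATION_STATE_ATTACKER: {"network defence", "detection", "patching"},
    Threat.CRIMINAL:              {"network defence", "detection", "patching"},
    Threat.INSIDER:               {"least privilege", "monitoring"},
    Threat.VENDOR_STATE:          {"independent key custody",
                                   "exportable data formats",
                                   "fallback providers",
                                   "termination runbooks"},
}

def uncovered(threats, deployed_controls):
    """Threats for which none of the deployed control families apply."""
    return [t for t in threats if not CONTROLS[t] & set(deployed_controls)]

# A stack hardened only against intrusion still has a structural gap:
print(uncovered(list(Threat), {"network defence", "detection", "monitoring"}))
# VENDOR_STATE is the only uncovered threat
```

The point of the exercise is the empty intersection: no amount of detection or patching mitigates a lawful revocation order, which is exactly why the category needs its own controls.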
The “No Alternative” Thought Terminator
When I raise these issues with clients and colleagues, the most common response is: “There is no real alternative to Microsoft Office.” Substitute any dominant vendor with a more or less locked ecosystem around its products (Adobe, Apple, IBM, and others) and you can easily follow the logic of why security professionals accept certain types of vendor risk.
This reasoning functions as a thought-terminating cliché that short-circuits strategic analysis. What people usually mean is one or more of the following:
I am familiar with Microsoft Office and do not want to learn something else
My organisation has built workflows or has technological lock-in that would require significant effort and costs to change
The people I exchange documents with use Microsoft Office and I worry about compatibility
I have not actually evaluated alternatives recently
The alternatives I tried years ago were inadequate and I assume nothing has changed
None of these are the same as “no alternative exists.” They are statements about switching costs, familiarity, inertia, and assumptions. These are real factors, but they are costs to be weighed against benefits, not immutable constraints.
The French government looked at 2.5 million civil servants and concluded that the switching costs were worth bearing. They have mandated migration from US-based video conferencing tools to a homegrown alternative called Visio by 2027. The Austrian military concluded that LibreOffice was adequate for their needs and completed migration across 16,000 workstations by September 2025.
These are not small or unsophisticated organisations engaging in symbolic gestures. They are serious institutions that evaluated the trade-offs and reached different conclusions than the “no alternative” framing suggests.
Conclusion: Control, Not Comfort
For years, most organisations treated their technology stack as a neutral utility layer. Cloud, identity, collaboration, and infrastructure were selected on functionality, cost, and convenience. The assumption beneath those choices was simple: the core providers we rely on are stable, aligned, and unlikely to act against our interests.
That assumption, as we have seen, is no longer defensible as a security premise.
The core issue is not surveillance alone. It is control. Modern organisations operate on infrastructure where access, identity, licensing, and availability can be altered by entities outside their jurisdiction and outside their governance. This is not a hypothetical vulnerability. It is a structural property of how contemporary digital services are delivered and regulated.
Encryption, vendor risk management, and compliance frameworks address parts of this exposure but do not remove it. Client-side encryption can reduce the risk of disclosure. It cannot ensure continued access. A well-secured tenant can still be locked. A compliant organisation can still lose service. The technical controls that protect confidentiality do not automatically protect continuity.
This does not mean wholesale disengagement from global technology providers is practical or desirable. It does mean that dependency must be understood as a security and resilience concern, not merely a procurement choice. The question is no longer whether organisations rely on external platforms. They do. The question is whether that reliance is mapped, deliberate, and governed with the same seriousness applied to other critical risks.
A robust posture begins with clarity. Know which functions cannot fail. Know which data cannot be exposed. Know which systems you cannot operate without for even a short period. Then examine where control over those functions and systems actually resides. In many cases it will sit with providers operating under legal and political authorities that may diverge from your own. That is not an accusation. It is a condition of the environment. And it will be uncomfortable.
From there, the task is architectural rather than rhetorical. Separate confidentiality from availability in your risk model. Retain independent control of critical keys and data where disclosure matters. Ensure that essential operations can continue, at least temporarily, if a major provider becomes unavailable. Prefer formats and systems that allow migration under pressure rather than only under ideal conditions. Document where dependency is accepted and where it is being reduced.
None of this produces absolute sovereignty. Absolute sovereignty in a globally integrated digital ecosystem is unrealistic. What it produces is awareness and agency. It replaces inherited trust with explicit assessment. It reduces the likelihood that a single external decision can halt internal operations. It turns dependency from an invisible assumption into a managed variable. That is territory you can defend to your board.
Security has long focused on preventing unauthorised access. The next phase requires equal attention to authorised control exercised from outside the organisation’s sphere of influence. Institutions that adapt their architecture and procurement accordingly will be more resilient. Those that continue to treat their infrastructure as politically and legally neutral will remain exposed to decisions made elsewhere.
The Decisions We Have Avoided
Once exposure is understood, the instinct is to reach for a remediation plan. Resist that instinct. The first task is not to fix but to decide.
Modern organisations do not operate on infrastructure they fully control. They operate on infrastructure made available to them under legal, commercial, and political conditions that can change. That reality cannot be engineered away. It can only be confronted.
The question is not whether you depend on external platforms. You do. The question is what happens to your operations if the entities controlling those platforms no longer share your assumptions, priorities, or constraints. Not in theory. In practice.
This forces a set of decisions that most organisations have never made explicitly. Which functions must continue under almost any circumstances. Which data must remain confidential regardless of jurisdictional pressure. Which systems can tolerate interruption, and for how long. Which dependencies are strategic, and which are merely convenient.
This sits outside traditional disaster recovery planning. Recovery point and recovery time objectives (RPO and RTO) assume failure followed by recovery. They assume outages, not exclusion. They assume restoration is possible. They do not account for access being deliberately revoked.
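The gap can be made concrete. A recovery plan whose restore path runs through the same provider that revoked access is not a plan for exclusion, whatever its RTO says. A sketch follows; the scenario model and field names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class RecoveryPlan:
    system: str
    rto_hours: int                # recovery time objective for ordinary outages
    restore_path_provider: str    # who we depend on in order to restore

def survives_exclusion(plan, excluding_provider):
    """An RTO only holds under exclusion if restoration does not run
    through the provider that revoked access in the first place."""
    return plan.restore_path_provider != excluding_provider

plans = [
    RecoveryPlan("email", rto_hours=4, restore_path_provider="Microsoft"),
    RecoveryPlan("email archive", rto_hours=48, restore_path_provider="on-prem backup"),
]

# An outage model says email is back in 4 hours; an exclusion model says
# only the slower, independently restorable archive survives.
survivors = [p.system for p in plans if survives_exclusion(p, "Microsoft")]
print(survivors)  # prints ['email archive']
```

The asymmetry is the lesson: the plan with the better RTO is the one that fails entirely under exclusion, because its recovery path and its failure mode share a control point.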
Today, in most environments and from what I have seen, these distinctions do not exist. Dependencies accumulate through integration and familiarity rather than intent. The result is an undifferentiated estate in which critical and non-critical functions share the same control points and the same vulnerabilities. Everything works, until something does not. At that point, the absence of prior decisions becomes visible.
The purpose of any response is not technological isolation. It is clarity about where control resides and whether that location aligns with your tolerance for disruption or disclosure. Different organisations will draw the lines differently. A government agency, a law firm, a multinational corporation, and a research institution will reach different conclusions. What matters is that the conclusions are deliberate rather than inherited.
What this moment demands is not a checklist but a classification. Not every dependency carries the same consequence. Not every function requires the same degree of independence. But without explicit decisions about which is which, architecture becomes a record of convenience rather than intent.
Final Note
Organisations that understand where control resides and plan for its potential loss will retain operational agency when conditions change. Those that continue to treat their infrastructure as neutral and permanently aligned may eventually discover that the systems they rely on were never entirely theirs to control.
The central question is simple and uncomfortable: if access to key systems were restricted tomorrow by forces outside your control, what would actually stop, and are you prepared for that answer?
Dependency in a globally integrated digital ecosystem is unavoidable. Unexamined dependency is not.
[1] See Matthew Green’s thread on Bluesky