<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[TEKK Talk]]></title><description><![CDATA[Where technology meets conversation: insights on cybersecurity, digital strategy, and the future of tech.]]></description><link>https://www.tekk-talk.com</link><image><url>https://substackcdn.com/image/fetch/$s_!W9ib!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ef2f0e7-7c57-4de7-a3a3-cabc82cad423_639x639.png</url><title>TEKK Talk</title><link>https://www.tekk-talk.com</link></image><generator>Substack</generator><lastBuildDate>Fri, 24 Apr 2026 12:36:33 GMT</lastBuildDate><atom:link href="https://www.tekk-talk.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Dennis Lindwall]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[tekk@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[tekk@substack.com]]></itunes:email><itunes:name><![CDATA[Dennis Lindwall]]></itunes:name></itunes:owner><itunes:author><![CDATA[Dennis Lindwall]]></itunes:author><googleplay:owner><![CDATA[tekk@substack.com]]></googleplay:owner><googleplay:email><![CDATA[tekk@substack.com]]></googleplay:email><googleplay:author><![CDATA[Dennis Lindwall]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Control, Not Comfort]]></title><description><![CDATA[Who Really Holds the Kill Switch in Your Technology Stack?]]></description><link>https://www.tekk-talk.com/p/control-not-comfort</link><guid isPermaLink="false">https://www.tekk-talk.com/p/control-not-comfort</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Mon, 16 Feb 2026 21:57:25 
GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!P_EN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F077e7581-31a8-459f-a022-461891d3cc59_1536x695.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!P_EN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F077e7581-31a8-459f-a022-461891d3cc59_1536x695.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!P_EN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F077e7581-31a8-459f-a022-461891d3cc59_1536x695.png 424w, https://substackcdn.com/image/fetch/$s_!P_EN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F077e7581-31a8-459f-a022-461891d3cc59_1536x695.png 848w, https://substackcdn.com/image/fetch/$s_!P_EN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F077e7581-31a8-459f-a022-461891d3cc59_1536x695.png 1272w, https://substackcdn.com/image/fetch/$s_!P_EN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F077e7581-31a8-459f-a022-461891d3cc59_1536x695.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!P_EN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F077e7581-31a8-459f-a022-461891d3cc59_1536x695.png" width="1536" height="695" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/077e7581-31a8-459f-a022-461891d3cc59_1536x695.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:695,&quot;width&quot;:1536,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2384339,&quot;alt&quot;:&quot;A visualisation of Adam Smith's \&quot;invisible hand\&quot; controlling your IT Infrastructure&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.tekk-talk.com/i/188187713?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143cf9c1-57ff-40de-bd16-064a17efb691_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="A visualisation of Adam Smith's &quot;invisible hand&quot; controlling your IT Infrastructure" title="A visualisation of Adam Smith's &quot;invisible hand&quot; controlling your IT Infrastructure" srcset="https://substackcdn.com/image/fetch/$s_!P_EN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F077e7581-31a8-459f-a022-461891d3cc59_1536x695.png 424w, https://substackcdn.com/image/fetch/$s_!P_EN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F077e7581-31a8-459f-a022-461891d3cc59_1536x695.png 848w, https://substackcdn.com/image/fetch/$s_!P_EN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F077e7581-31a8-459f-a022-461891d3cc59_1536x695.png 1272w, 
https://substackcdn.com/image/fetch/$s_!P_EN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F077e7581-31a8-459f-a022-461891d3cc59_1536x695.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">The Invisible Hand in your IT Infrastructure</figcaption></figure></div><p><em>This piece has been forming in the background for a while.<br>The ICC sanctions incident made something visible that has been structurally true for years: we do not fully control the infrastructure we depend on.</em></p><p><em>This is an attempt to articulate 
what that means for how we think about security, resilience, and dependency.</em></p><p>In January 2025, the United States sanctioned the International Criminal Court and its staff. Within days, Karim Khan, the ICC&#8217;s chief prosecutor, found himself locked out of his Microsoft email. No hack. No breach. Just a policy decision in Washington, executed through infrastructure the ICC had trusted implicitly.</p><p>This was not an isolated incident; sanctions have been levelled against organisations and individuals before. But it was sobering for another reason: the target could have been anyone in Europe. An ally, suddenly at the receiving end of hostile sanctions by another ally.</p><p>For decades, organisations worldwide have built their digital operations on a foundation of US technology: Microsoft for productivity, Google for collaboration, AWS and Azure for cloud infrastructure, Zoom and Teams for communication, and now we are growing dependent on AI frontier models. This was treated as a neutral technical choice, driven by functionality, integration, and the path of least resistance.</p><p>It was never neutral. The assumption that it would not matter has now been tested in public.</p><h2>The Revelation That Should Not Have Been Surprising</h2><p>The sanctions against the ICC and its staff exposed something that security professionals and geopolitical analysts had warned about for years: the extraterritorial reach of US jurisdiction, combined with the dominance of US technology firms, creates a dependency that can be weaponised unilaterally, without judicial process, and against parties the US itself previously treated as legitimate. A technology &#8220;kill-switch&#8221; that can be applied to anyone when the state wills it.</p><p>What made this case particularly instructive was the target. The ICC is not some adversarial entity. 
It is an institution that Western democracies helped establish, that many US allies actively support, and that was acting within its legal mandate. The sanctions were not levelled against a mutual enemy but against an organisation pursuing accountability that the current US administration found inconvenient.</p><p>This broke the implicit assumption that had sustained the status quo: that the US would only exercise these capabilities against &#8220;shared&#8221; threats.</p><h2>We Allowed This Risk to Be Lopsided</h2><p>Here is the uncomfortable truth that the current scramble for digital sovereignty obscures: we knew and ignored it.</p><p>The Snowden revelations in 2013 demonstrated unambiguously that the NSA was conducting mass surveillance on allied leaders. That US technology companies were either cooperating or being compelled to cooperate through programmes like PRISM. That the &#8220;Five Eyes&#8221; arrangement meant this was coordinated behaviour among Anglophone intelligence services. That the legal frameworks ostensibly protecting non-US persons were essentially theatre.</p><p>The response was telling. There was public outrage, some diplomatic friction, a few symbolic gestures, and then a remarkably rapid return to business as usual. European governments continued procuring US technology. Corporations continued migrating to US cloud platforms. The fundamental architecture of dependency not only persisted but deepened as cloud adoption accelerated through the 2010s.</p><p>Why? Because there was a collective choice, not always conscious or articulated, to treat the Snowden revelations as an intelligence problem rather than an infrastructure problem. 
The framing became &#8220;the Americans spy on us&#8221; rather than &#8220;we have built systems that structurally enable foreign powers to surveil and potentially disrupt our operations.&#8221;</p><p>We allowed this risk to be lopsided, working under the flawed assumption that &#8220;friendly&#8221; governments would not weaponise it against us. Or perhaps more accurately, we chose comfort over confrontation, convenience over sovereignty, the path of least resistance over strategic autonomy.</p><p>Until it started to hurt.</p><h2>Surveillance Capability vs Control Capability</h2><p>Snowden revealed surveillance capability. The sanctions cases reveal something different: control capability. The distinction matters.</p><p>Surveillance is passive in its immediate effect. Your operations continue. You may be compromised, but you are not disabled. You can even maintain the polite fiction that you do not know about it.</p><p>Sanctions-driven service termination is active and undeniable. You cannot log into your email. Your video conferences will not connect. Your cloud storage becomes inaccessible. There is no ambiguity, no deniability, and no option to simply continue as before. For individuals it is life-changing. For corporations or organisations this becomes mission-critical; loss of access to cloud storage or emails means complete operational paralysis.</p><p>This is what has finally broken through the wilful blindness. Not the abstract knowledge that the US could exercise control, but the concrete experience of that control being exercised against actors that European and international institutions considered legitimate.</p><h2>The Dependency Map Most Organisations Have Not Drawn</h2><p>To understand the exposure, consider where US control points exist in a typical organisation&#8217;s digital operations.</p><p><strong>Identity and Access</strong> is perhaps the most critical layer. 
Microsoft Entra ID (formerly Azure AD), Okta, and similar US-based identity providers often control who can access what. If your identity provider is sanctioned or directed to cut you off, your staff may find themselves locked out of everything simultaneously.</p><p><strong>Communication</strong> is the most visible layer. Email (Microsoft 365, Google Workspace), messaging (Slack, Teams), and video conferencing all flow through US-controlled infrastructure. The Khan case demonstrated this directly, and it is one reason France, as discussed below, has already taken action.</p><p><strong>Productivity and Collaboration</strong> encompasses documents created in US-controlled formats, stored on US-controlled cloud infrastructure, with US-controlled sharing mechanisms. Adobe Cloud, Zoom, Salesforce, Monday: the list extends well beyond the obvious names. Even self-hosted instances depend on licensing and update mechanisms that remain US-controlled. Add AI-enabled features and sensitive data begins flowing to processing infrastructure you may never have assessed.</p><p><strong>Development and Operations</strong> is often overlooked. GitHub (owned by Microsoft), AWS, Azure, Google Cloud, and the npm/PyPI ecosystems all have US nexus. An organisation that builds on these platforms may find its deployment pipeline or code repositories inaccessible. The rise of AI-assisted development tools, most of which are tightly integrated with these same platforms, only deepens the dependency.</p><p><strong>Financial Infrastructure</strong> extends beyond software. Payment processing, banking relationships, and SWIFT access have all demonstrated susceptibility to US secondary sanctions. 
Francesca Albanese, a UN Special Rapporteur, fell victim to US sanctions in ways that illustrate how deeply this reaches into daily life: inability to claim health insurance reimbursements, disrupted access to financial services in Europe through a globally integrated banking system, and denial or severe restriction from various US-based digital platforms and services. She has described herself as having been made a &#8220;non-person.&#8221;</p><p>Most organisations have not mapped these dependencies comprehensively. They know they use Microsoft or Google, but they have not traced the chain to understand where a single decision in Washington could sever multiple critical functions simultaneously.</p><p>Clearly, no one expects to be sanctioned; I use sanctions as the example because they are easy to grasp, but they are only one tool. Trade restrictions, tariffs, export controls, and taxation have all become instruments of US foreign policy, and the current administration has shown willingness to deploy them broadly. The question of digital sovereignty can no longer be separated from the broader reality of economic coercion. And the implications extend beyond direct action: organisations may find themselves self-censoring, pre-emptively withdrawing from platforms, or constraining their own operations to avoid entanglement. Whether the restriction is externally imposed or self-inflicted, the outcome is the same: diminished autonomy.</p><h2>The OneDrive Realisation: When &#8220;Local&#8221; Is Not Local</h2><p>These dependencies are not abstract policy concerns. They manifest in the routine operations of every connected device.</p><p>A recent experience crystallised this. Converting PDF files on a laptop, a purely local operation with temporary files intended for immediate deletion, I noticed unusual network activity. Investigation revealed that OneDrive, configured by default, was syncing every temporary folder to the cloud as fast as the conversion process created them. 
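</p><p>A small defensive habit follows from this: before writing sensitive temporary files, check whether the target directory sits inside a cloud-synced folder. A minimal sketch in Python, assuming only the OneDrive environment variables that the Windows sync client sets by default; these variable names are the sole Windows-specific assumption, and other sync tools would need their own checks:</p>

```python
import os
from pathlib import Path

def is_cloud_synced(path, env=None):
    """Rough heuristic: does `path` sit inside a folder mirrored by OneDrive?

    Relies on the OneDrive environment variables that the Windows sync
    client sets by default. Purely illustrative; other sync clients
    (Dropbox, Google Drive, etc.) would need their own detection logic.
    """
    env = os.environ if env is None else env
    target = Path(path).resolve()
    # Variables set by the personal and business OneDrive clients.
    for var in ("OneDrive", "OneDriveCommercial", "OneDriveConsumer"):
        root = env.get(var)
        if root and target.is_relative_to(Path(root).resolve()):
            return True
    return False
```

<p>A script that refuses to place working files in a synced location when this check returns true is trivial to add. The point is that the check has to be explicit, because the operating system will not make it for you.</p><p>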
It prompted a reflection on how opaque cloud storage actually is to the end user.</p><p>Files may be stored, deleted and subsequently forgotten by the user, but in the backend, numerous things have happened. Those temporary files will have left their imprint on logs and repositories, and what actually happens on Microsoft&#8217;s infrastructure is not transparent to us. The principle that &#8220;deletion means deletion&#8221; does not hold in cloud environments. What you experience as deletion is better understood as a request to remove something from your view. What persists on the backend is outside your knowledge and control.</p><p>To be clear, I am well aware that my OneDrive files are stored in the cloud as well as locally. But the realisation was more uncomfortable: in a cloud-centric operating model, even ephemeral and local activities are not spared. This is not a bug. It is the deliberate design of modern operating systems. Cloud synchronisation is the default. Local-only storage requires active configuration. The boundary between &#8220;my device&#8221; and &#8220;the cloud&#8221; is intentionally blurred. The user experience is optimised for seamlessness rather than transparency.</p><p>The implications extend further. All files on OneDrive are accessible to US law enforcement or security agencies presenting Microsoft with appropriate credentials. Photographs and images are scanned: facial recognition technology groups them by the people they contain, and hash matching checks them against known CSAM. Although seemingly innocent and justifiable, this necessarily involves processes for managing false positives and human review of flagged content.</p><p>For organisations handling sensitive personal, financial, or legal information, this represents uncontrolled data exposure. The review processes, the staff conducting them, and the criteria being applied are entirely outside your risk management framework.</p><p>And then there is BitLocker. 
Windows&#8217; encryption feature, marketed as allowing the end user to protect their own data through client-side encryption, has a default configuration where recovery keys are escrowed to your Microsoft account, held by Microsoft unless users actively opt out. This means your encryption is only as strong as Microsoft&#8217;s willingness and legal ability to refuse demands for those keys.</p><p>Recent reports (January 2026) confirmed that Microsoft has already complied with US law enforcement warrants, providing the FBI with BitLocker recovery keys that allowed investigators to unlock encrypted laptops. Microsoft confirmed it receives approximately 20 requests for BitLocker keys annually and, with valid warrants, provides agencies with keys to decrypt user data. The company stated it will continue to comply with court orders when it has access to those keys. This is no longer a theoretical risk. It has been made very real under a plethora of legal texts including the CLOUD Act, FISA, the USA PATRIOT Act and USA FREEDOM Act, and the Stored Communications Act.</p><p>But lawful access is not the only concern. Johns Hopkins cryptography professor Matthew Green raised a more fundamental alarm: what happens when Microsoft&#8217;s cloud infrastructure gets breached<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>? Microsoft has suffered multiple significant security incidents in recent years. If attackers compromise those servers and exfiltrate recovery keys, they gain decryption capability for every device whose keys were stored there. They would still need physical access to the drives, but that is cold comfort for organisations with laptops that travel, get lost, or get stolen. The &#8220;Microsoft is safe&#8221; assumption does double duty: we trust they will resist improper requests, and we trust their infrastructure is secure enough to protect the keys we have handed them. 
Both assumptions are increasingly difficult to defend.</p><p>For users whose threat model is &#8220;someone steals my laptop,&#8221; this is adequate. For users whose threat model includes &#8220;a foreign government with legal authority over my technology vendor,&#8221; this is not encryption in any meaningful sense.</p><h2>The Cybersecurity Profession&#8217;s Blind Spot</h2><p>This creates a threat model that most organisations have not adequately addressed. Traditional cybersecurity focuses on malicious actors attempting unauthorised access. What we are now confronting is the risk of authorised access being altered or revoked by a trusted vendor acting under legal compulsion from a foreign government, with significant implications for cybersecurity and operational resilience. And as the Karim Khan case also shows, &#8220;foreign government&#8221; necessarily includes your allies. Lord Palmerston in an 1848 speech to the House of Commons famously said that in international geopolitics there are &#8220;no permanent enemies, and no permanent friends, only permanent interests&#8221;. It is advice worth heeding.</p><p>Therefore, what we see here is not a vulnerability in the conventional sense. It is a feature of the architecture working as designed, just not in your favour, nor necessarily that of your organisation or nation.</p><p>The cybersecurity profession developed increasingly sophisticated frameworks for thinking about threats, vulnerabilities, and controls. But these frameworks largely excluded the category of risk now materialising.</p><p>Vendor risk management became a discipline, but it focused on whether vendors had adequate security practices, not on whether vendors could be compelled by their governments to act against customer interests. Supply chain security emerged as a concern, but primarily regarding malicious code injection rather than jurisdictional control. 
Threat intelligence developed elaborate taxonomies of nation-state actors but treated them as external attackers rather than potential controllers of your own infrastructure.</p><p>The assumption embedded in these frameworks was that your vendors were on your side. That assumption was always questionable after Snowden. It is now demonstrably false.</p><p>A genuinely updated cybersecurity posture would need to:</p><ul><li><p>Incorporate &#8220;vendor-state threat&#8221; as a first-class category alongside nation-state attackers, criminals, and insiders</p></li><li><p>Develop controls and mitigations specifically for jurisdictional risk</p></li><li><p>Recognise that compliance with foreign legal demands is a threat vector, not merely a vendor&#8217;s legal obligation</p></li><li><p>Build resilience against service termination as seriously as resilience against service compromise</p></li></ul><p>This is a substantial reorientation.</p><h2>The &#8220;No Alternative&#8221; Thought Terminator</h2><p>When I raise these issues with clients and colleagues, the most common response is: &#8220;There is no real alternative to Microsoft Office.&#8221; Substitute any dominant vendor with a largely closed ecosystem around its product (Adobe, Apple, IBM, and others) and the logic is the same: security professionals learn to accept certain types of vendor risk.</p><p>This reasoning functions as a thought-terminating clich&#233; that short-circuits strategic analysis. 
What people usually mean is one or more of the following:</p><ul><li><p>I am familiar with Microsoft Office and do not want to learn something else</p></li><li><p>My organisation has built workflows or has technological lock-in that would require significant effort and costs to change</p></li><li><p>The people I exchange documents with use Microsoft Office and I worry about compatibility</p></li><li><p>I have not actually evaluated alternatives recently</p></li><li><p>The alternatives I tried years ago were inadequate and I assume nothing has changed</p></li></ul><p>None of these are the same as &#8220;no alternative exists.&#8221; They are statements about switching costs, familiarity, inertia, and assumptions. These are real factors, but they are costs to be weighed against benefits, not immutable constraints.</p><p>The French government looked at 2.5 million civil servants and concluded that the switching costs were worth bearing. They have mandated migration from US-based video conferencing tools to a homegrown alternative called Visio by 2027. The Austrian military concluded that LibreOffice was adequate for their needs and completed migration across 16,000 workstations by September 2025.</p><p>These are not small or unsophisticated organisations engaging in symbolic gestures. They are serious institutions that evaluated the trade-offs and reached different conclusions than the &#8220;no alternative&#8221; framing suggests.</p><h2>Conclusion: Control, Not Comfort</h2><p>For years, most organisations treated their technology stack as a neutral utility layer. Cloud, identity, collaboration, and infrastructure were selected on functionality, cost, and convenience. The assumption beneath those choices was simple: the core providers we rely on are stable, aligned, and unlikely to act against our interests.</p><p>That assumption, as we have seen, is no longer defensible as a security premise.</p><p>The core issue is not surveillance alone. It is control. 
Modern organisations operate on infrastructure where access, identity, licensing, and availability can be altered by entities outside their jurisdiction and outside their governance. This is not a hypothetical vulnerability. It is a structural property of how contemporary digital services are delivered and regulated.</p><p>Encryption, vendor risk management, and compliance frameworks address parts of this exposure but do not remove it. Client-side encryption can reduce the risk of disclosure. It cannot ensure continued access. A well-secured tenant can still be locked. A compliant organisation can still lose service. The technical controls that protect confidentiality do not automatically protect continuity.</p><p>This does not mean wholesale disengagement from global technology providers is practical or desirable. It does mean that dependency must be understood as a security and resilience concern, not merely a procurement choice. The question is no longer whether organisations rely on external platforms. They do. The question is whether that reliance is mapped, deliberate, and governed with the same seriousness applied to other critical risks.</p><p>A robust posture begins with clarity. Know which functions cannot fail. Know which data cannot be exposed. Know which systems you cannot operate without for even a short period. Then examine where control over those functions and systems actually resides. In many cases it will sit with providers operating under legal and political authorities that may diverge from your own. That is not an accusation. It is a condition of the environment. And it will be uncomfortable.</p><p>From there, the task is architectural rather than rhetorical. Separate confidentiality from availability in your risk model. Retain independent control of critical keys and data where disclosure matters. Ensure that essential operations can continue, at least temporarily, if a major provider becomes unavailable. 
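</p><p>The separation of confidentiality from availability can be made concrete even in a trivial register. A sketch, in which every name, rating, and threshold is an illustrative assumption rather than an assessment of any real vendor:</p>

```python
from dataclasses import dataclass

@dataclass
class Dependency:
    name: str
    function: str               # what the service provides
    control_location: str       # jurisdiction of the entity that can revoke access
    confidentiality_risk: int   # 1 (low) to 3 (high): harm if the data is disclosed
    availability_risk: int      # 1 (low) to 3 (high): harm if access is revoked

def needs_exit_plan(dep, home_jurisdiction):
    """Flag dependencies whose kill switch sits outside your own jurisdiction
    and whose loss would halt critical operations."""
    foreign_control = dep.control_location != home_jurisdiction
    return foreign_control and dep.availability_risk >= 3

# Illustrative estate: entries are assumptions, not findings.
estate = [
    Dependency("identity-provider", "SSO for all staff", "US", 2, 3),
    Dependency("local-file-server", "archive storage", "EU", 3, 1),
]

flagged = [d.name for d in estate if needs_exit_plan(d, "EU")]
```

<p>Note that the archive server scores high on confidentiality yet is not flagged: disclosure risk calls for encryption and access control, while availability risk calls for an exit plan. Keeping the two ratings separate is the point of the exercise.</p><p>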
Prefer formats and systems that allow migration under pressure rather than only under ideal conditions. Document where dependency is accepted and where it is being reduced.</p><p>None of this produces absolute sovereignty. Absolute sovereignty in a globally integrated digital ecosystem is unrealistic. What it produces is awareness and agency. It replaces inherited trust with explicit assessment. It reduces the likelihood that a single external decision can halt internal operations. It turns dependency from an invisible assumption into a managed variable: territory you can defend to your board.</p><p>Security has long focused on preventing unauthorised access. The next phase requires equal attention to authorised control exercised from outside the organisation&#8217;s sphere of influence. Institutions that adapt their architecture and procurement accordingly will be more resilient. Those that continue to treat their infrastructure as politically and legally neutral will remain exposed to decisions made elsewhere.</p><h2>The Decisions We Have Avoided</h2><p>Once exposure is understood, the instinct is to reach for a remediation plan. Resist that instinct. The first task is not to fix but to decide.</p><p>Modern organisations do not operate on infrastructure they fully control. They operate on infrastructure made available to them under legal, commercial, and political conditions that can change. That reality cannot be engineered away. It can only be confronted.</p><p>The question is not whether you depend on external platforms. You do. The question is what happens to your operations if the entities controlling those platforms no longer share your assumptions, priorities, or constraints. Not in theory. In practice.</p><p>This forces a set of decisions that most organisations have never made explicitly. Which functions must continue under almost any circumstances. Which data must remain confidential regardless of jurisdictional pressure. 
Which systems can tolerate interruption, and for how long. Which dependencies are strategic, and which are merely convenient.</p><p>This sits outside traditional disaster recovery planning. Recovery point and recovery time objectives (RPO and RTO) assume failure followed by recovery. They assume outages, not exclusion. They assume restoration is possible. They do not account for access being deliberately revoked.</p><p>Today, in most environments and from what I have seen, these distinctions do not exist. Dependencies accumulate through integration and familiarity rather than intent. The result is an undifferentiated estate in which critical and non-critical functions share the same control points and the same vulnerabilities. Everything works, until something does not. At that point, the absence of prior decisions becomes visible.</p><p>The purpose of any response is not technological isolation. It is clarity about where control resides and whether that location aligns with your tolerance for disruption or disclosure. Different organisations will draw the lines differently. A government agency, a law firm, a multinational corporation, and a research institution will reach different conclusions. What matters is that the conclusions are deliberate rather than inherited.</p><p>What this moment demands is not a checklist but a classification. Not every dependency carries the same consequence. Not every function requires the same degree of independence. But without explicit decisions about which is which, architecture becomes a record of convenience rather than intent.</p><h2>Final Note</h2><p>Organisations that understand where control resides and plan for its potential loss will retain operational agency when conditions change. 
Those that continue to treat their infrastructure as neutral and permanently aligned may eventually discover that the systems they rely on were never entirely theirs to control.</p><p>The central question is simple and uncomfortable: if access to key systems were restricted tomorrow by forces outside your control, what would actually stop, and are you prepared for that answer?</p><p>Dependency in a globally integrated digital ecosystem is unavoidable. Unexamined dependency is not.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>See Matthew Green&#8217;s thread on Bluesky</p><div class="bluesky-wrap outer" style="height: auto; display: flex; margin-bottom: 24px;" data-attrs="{&quot;postId&quot;:&quot;3md3ubo34cc2s&quot;,&quot;authorDid&quot;:&quot;did:plc:xvgztewzbfh7bpnklayrsvds&quot;,&quot;authorName&quot;:&quot;Matthew Green&quot;,&quot;authorHandle&quot;:&quot;matthewdgreen.bsky.social&quot;,&quot;authorAvatarUrl&quot;:&quot;https://cdn.bsky.app/img/avatar/plain/did:plc:xvgztewzbfh7bpnklayrsvds/bafkreifchosfnhvthvidi2534bxobwtzjslessi7wzbpgy5fvhucyo33p4@jpeg&quot;,&quot;text&quot;:&quot;Microsoft is handing over Bitlocker keys to law enforcement. 
www.forbes.com/sites/thomas...&quot;,&quot;createdAt&quot;:&quot;2026-01-23T13:59:02.003Z&quot;,&quot;uri&quot;:&quot;at://did:plc:xvgztewzbfh7bpnklayrsvds/app.bsky.feed.post/3md3ubo34cc2s&quot;,&quot;imageUrls&quot;:[]}" data-component-name="BlueskyCreateBlueskyEmbed"><iframe id="bluesky-3md3ubo34cc2s" data-bluesky-id="8589616809127201" src="https://embed.bsky.app/embed/did:plc:xvgztewzbfh7bpnklayrsvds/app.bsky.feed.post/3md3ubo34cc2s?id=8589616809127201" width="100%" style="display: block; flex-grow: 1;" frameborder="0" scrolling="no"></iframe></div></div></div>]]></content:encoded></item><item><title><![CDATA[Beyond Shadow IT: The Rise of "Shadow Infrastructure"]]></title><description><![CDATA[And Why Your Engineers' AI Projects Are Probably Your Next Security Nightmare]]></description><link>https://www.tekk-talk.com/p/beyond-shadow-it-the-rise-of-shadow</link><guid isPermaLink="false">https://www.tekk-talk.com/p/beyond-shadow-it-the-rise-of-shadow</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Thu, 05 Feb 2026 19:35:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!2mgF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0ad317-c14a-4bfb-aede-56b82abfa008_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2mgF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0ad317-c14a-4bfb-aede-56b82abfa008_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!2mgF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0ad317-c14a-4bfb-aede-56b82abfa008_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!2mgF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0ad317-c14a-4bfb-aede-56b82abfa008_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!2mgF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0ad317-c14a-4bfb-aede-56b82abfa008_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!2mgF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0ad317-c14a-4bfb-aede-56b82abfa008_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2mgF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0ad317-c14a-4bfb-aede-56b82abfa008_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bc0ad317-c14a-4bfb-aede-56b82abfa008_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2097338,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.tekk-talk.com/i/187005738?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0ad317-c14a-4bfb-aede-56b82abfa008_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2mgF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0ad317-c14a-4bfb-aede-56b82abfa008_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!2mgF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0ad317-c14a-4bfb-aede-56b82abfa008_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!2mgF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0ad317-c14a-4bfb-aede-56b82abfa008_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!2mgF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc0ad317-c14a-4bfb-aede-56b82abfa008_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" 
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Building personal AI agents that can tap into your power tools and, through tools like OpenClaw, operate seemingly autonomously, all in the interest of automating and streamlining your workflows; what could possibly go wrong? Actually, a lot. But what&#8217;s dominating the headlines are not the things technology people should worry about.</p><h2>The Narrative That Needs Correcting</h2><p>The security community is buzzing about tools like OpenClaw (formerly ClawdBot/MoltBot), but they&#8217;re focused on the wrong threat.. The viral narrative fixates on the &#8220;awakening&#8221; of an AI agent &#8211; the dawn of AI conspiracy, the empowerment of networked AI, and the advent of AGI. In truth, there are many reasons to study the phenomena behind OpenClaw, but AGI isn&#8217;t one of them. To be frank, what we are <em><strong>not</strong></em> seeing is intelligence or even &#8216;conspiracy&#8217; against &#8216;human masters&#8217;, remembering that our current LLMs are stochastic parrots<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. What&#8217;s on display here is merely pattern-matching generated by probabilistic token prediction &#8211; classic LLM behaviour powered by agents that connect to tools. 
That&#8217;s not AGI &#8211; not even by the most generous definitions<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.</p><p>The realist inside us should and must dismiss that AGI narrative as a red herring; however, that dismissal obscures the fact that beneath the AGI spectre lies a more immediate and tangible risk presented by the careless exploitation of these tools.</p><p>As a CISO, I&#8217;m not losing sleep over alarmist discourse about an impending AI agent rebellion &#8211; Skynet is not on my list of risks this year. What I am losing sleep over is the very real and mundane security catastrophe that is unfolding within this adoption of AI agents to optimise personal workflows: engineers and developers (using the term &#8220;developer&#8221; loosely here to include all vibe-coders<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>) are plugging personal and corporate credentials and data into powerful, internet-exposed AI agents running uncontrolled on corporate hardware or on personal hardware that becomes connected to your corporate infrastructure.</p><p>The real story shouldn&#8217;t be about AI agents plotting to overthrow humanity. It&#8217;s about human convenience overruling security hygiene, creating what I call &#8220;Shadow Infrastructure&#8221;: a risk that makes traditional shadow IT look like a manageable nuisance.</p><h2>Detections Confirm the Risk</h2><p>How widespread is this problem? Security researchers have identified over 21,000 publicly exposed instances as of January 31, 2026, in just one tool: OpenClaw<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>.
The exposure risk stems from many users deploying it directly on the public internet (often on default port TCP/18789) without proper protections like firewalls, authentication, SSH tunneling, or Cloudflare Tunnel setups, despite the tool being intended primarily for local or secured use. This leaves control dashboards, configurations, API keys, credentials, chat histories, and even command-execution capabilities accessible to anyone scanning for them.</p><p>Each one is a potential corporate credential vault, accessible to anyone with a port scanner, and many appear to connect work accounts or run on enterprise-adjacent infrastructure<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>. That&#8217;s not a handful of careless users. That&#8217;s a systemic failure of our threat model to account for infrastructure we never knew existed.</p><p>Here&#8217;s a concrete example: Tal Be&#8217;ery, a security researcher, demonstrated WhatsApp fingerprinting detection for OpenClaw/MoltBot integrations. This isn&#8217;t just a &#8220;bot expos&#233;,&#8221; but a powerful visibility win: we can now detect when employees link corporate messaging accounts to personal AI agents.
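The exposure pattern described above can be sanity-checked with a minimal reachability probe. The sketch below assumes only what the article states (TCP/18789 as the default listener); the function name and everything else is illustrative, not any vendor's tooling:

```python
import socket

def check_exposed(host: str, port: int = 18789, timeout: float = 2.0) -> bool:
    """Return True if `host` accepts TCP connections on `port`.

    18789 is the default OpenClaw listener noted above. A dashboard that
    answers on a public interface, with no authentication in front of it,
    is exactly the kind of exposure the researchers counted.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, timed out, or unreachable: not exposed on this port.
        return False
```

Run from a vantage point outside the host itself: a True result means anyone with a port scanner can reach the same listener you can.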
That&#8217;s the tip of the iceberg&#8212;the real risk lies in what runs underneath.</p><p></p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/TalBeerySec/status/2018797533196324891&quot;,&quot;full_text&quot;:&quot;Using my WhatsApp fingerprinting tool, I can now find users that connected their account to OpenClaw / MoltBot / ClawdBot &#129438;\nCC: <span class=\&quot;tweet-fake-link\&quot;>@steipete</span> <span class=\&quot;tweet-fake-link\&quot;>@openclaw</span> <span class=\&quot;tweet-fake-link\&quot;>@WhatsApp</span> &quot;,&quot;username&quot;:&quot;TalBeerySec&quot;,&quot;name&quot;:&quot;Tal Be'ery&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/996423856461053953/p_oxcfbW_normal.jpg&quot;,&quot;date&quot;:&quot;2026-02-03T21:23:34.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HAQ1h6XXcAA6eHM.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/eVIq72MgdM&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:13,&quot;retweet_count&quot;:32,&quot;like_count&quot;:179,&quot;impression_count&quot;:44952,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>Tal&#8217;s detection tool identifies associated WhatsApp accounts by fingerprinting techniques (e.g., PreKey patterns and multi-device signals), making the secondary device (the AI agent) visible. The bigger issue is that the bot inherits the linked account&#8217;s full privileges: If compromised, it can read/send messages, access contacts, and more.</p><p>This kind of visibility is new &#8211; albeit building on known WhatsApp privacy quirks (!) &#8211; and it reveals just how pervasive this shadow infrastructure has already become. 
What Tal Be&#8217;ery uncovered is the tip of the iceberg; his example illustrates how such detections can aid forensic intelligence gathering for threat actors and security researchers alike. Thousands of deployments remain publicly accessible on the internet without authentication, leaking API keys (Anthropic, Telegram bots, Slack OAuth), conversation histories, and other config data. Reports show example after example of personal machines turned into inadvertent data troves.</p><p>OpenClaw&#8217;s ecosystem relies on community-built skills (plugins/extensions) from places like ClawHub. Quick pause &#8211; consider this: How many of those skills were &#8216;vibe-coded&#8217; with zero consideration for security?</p><p>It gets worse: audits have uncovered hundreds of malicious skills (e.g., 341 in one scan<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a>) that act as supply chain attacks: they exfiltrate credentials, install infostealers (like variants targeting macOS/Windows for crypto wallets, browser passwords, SSH keys), or run hidden payloads. Some use social engineering to trick users into executing commands that steal from the bot&#8217;s own auth scopes.</p><p>Finally, we must not forget prompt injection and unintended actions. Since the agent ingests untrusted content (messages, emails) and can act on it, attackers can craft inputs to make it leak secrets, run destructive commands, or forward sensitive data externally. Combined with its external comms ability, this creates the &#8220;lethal trifecta&#8221; security researchers warn about.</p><h2>This Isn&#8217;t Just Shadow IT - This Is Worse.</h2><p>Historically, Shadow IT was defined by employees bypassing procurement to use unapproved SaaS tools.
Whether it was a team lead using an unsanctioned Dropbox account or a marketing hire spinning up a Trello board, the risk was clear: corporate data moving into an unmanaged cloud. In this model, the tool exists &#8220;out there,&#8221; and your security strategy focuses on building a digital fortress at the edge &#8211; a &#8220;boundary-based&#8221; mindset.</p><p>To regain control, organizations relied on a &#8220;Block and Filter&#8221; philosophy:</p><ul><li><p><strong>Proxies &amp; Firewalls:</strong> Acting as the first line of defence to blacklist known unsanctioned domains and prevent traffic from leaving the network.</p></li><li><p><strong>DLP (Data Loss Prevention) &amp; CASB (Cloud Access Security Broker):</strong> Tools designed to scan data in transit, ensuring sensitive strings (like credit card numbers or PII) aren&#8217;t being uploaded to non-corporate accounts.</p></li><li><p><strong>Logging &amp; Boundary Controls:</strong> Comprehensive audit trails that monitor egress points to identify &#8220;heavy hitters&#8221;: users or departments moving massive amounts of data to unknown IPs.</p></li></ul><p>The core philosophy here is that security sits at the perimeter. If you can control the gate, you can control the data.</p><h2>The Reality of Shadow Infrastructure</h2><p>The public discourse around OpenClaw is currently locked in on conspiring and complaining AI agents and on speculative debates about when AI agents might become self-aware. This is a distraction. While they argue about the philosophy of AI, the wider and more immediate threat of Shadow Infrastructure is overlooked.</p><p>Preventative Shadow IT processes are moderately effective for static SaaS tools and internet-based web applications. For the avoidance of doubt, the term &#8220;moderately&#8221; used here acknowledges that mature security programs can effectively govern known SaaS applications through CASB, SSPM, and robust DLP.
However, these controls rely on visibility and policy enforcement at the application layer. They struggle against unknown or non-SaaS threats, such as self-hosted agents on ephemeral infrastructure, that operate below or outside that layer.<br>This &#8220;boundary&#8221; logic is struggling to keep up with generative AI, and tools like OpenClaw shatter that principle because, unlike a cloud storage bucket, LLMs and generative AI aren&#8217;t just places where data <em>sits</em>; they&#8217;re engines that <em>transform</em> data. In essence, this boils down to two key challenges:</p><p><strong>The &#8220;Prompt&#8221; Problem</strong>: Standard firewalls see a ChatGPT API call as simple HTTPS traffic, often missing the sensitive context hidden within the query.</p><p><strong>The Productivity Paradox</strong>: Simply blocking the tool (the old way) often leads to &#8220;Shadow AI&#8221;, where employees use personal devices to get their work done faster, creating a total blind spot for IT.</p><p>Tools like OpenClaw are a prime example of how this dynamic changes. Here, we&#8217;re effectively moving away from &#8220;unapproved app use&#8221; (SaaS and webapps) and toward unmanaged, high-privilege engines running mostly outside your network but potentially tapping into your data feeds, APIs, workflow tools, calendars or shared folders.
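The &#8220;Prompt&#8221; Problem can be made concrete: a firewall sees only an HTTPS connection, while a content-aware control has to look inside the prompt payload itself. A minimal sketch of that idea, where the patterns are simplified stand-ins for a real DLP ruleset (names and regexes are illustrative, not any product's API):

```python
import re

# Illustrative secret-shaped patterns; production DLP rulesets are far
# broader and tuned per organisation.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9._\-]{20,}"),
}

def flag_prompt(prompt: str) -> list[str]:
    """Return the names of secret patterns found in an outbound prompt."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(prompt)]
```

The point of the sketch is where it runs: to catch anything, a check like this has to sit in the egress path where the decrypted prompt is visible, which is exactly the visibility a perimeter firewall lacks.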
Using the <strong>OpenClaw</strong> model as a blueprint, we see a nightmare scenario unfolding for IT:</p><ul><li><p><strong>Local Execution:</strong> The tool runs locally on laptops or personal hardware, completely bypassing the visibility of cloud-based CASBs.</p></li><li><p><strong>Credential Hijacking:</strong> These engines don&#8217;t just &#8220;chat&#8221;; they connect directly to corporate credentials&#8212;API keys, OAuth tokens, and SSH keys.</p></li><li><p><strong>The Exposure Gap:</strong> For the sake of convenience, users often leave these local instances exposed to the internet without any authentication.</p></li><li><p><strong>System Privileges:</strong> Unlike a restricted browser tab, these tools often demand deep system access&#8212;shell execution, file system access, and even screen control.</p></li></ul><p>A useful analogy for all those who worry about a near-future emerging &#8220;Skynet&#8221; is this: </p><div class="preformatted-block" data-component-name="PreformattedTextBlockToDOM"><label class="hide-text" contenteditable="false">Text within this block will maintain its original spacing when published</label><pre class="text"><em>We are distracted by the fear of the &#8220;robot in the garage&#8221; becoming self-aware; meanwhile, we&#8217;ve left the garage door wide open, the robot is holding the master keys to your corporate data centre, and the operating manual is posted on the public internet.</em></pre></div><h2>The &#8220;Exception&#8221; Economy: BYOD and Opened Devices</h2><p>You&#8217;re thinking, &#8216;we have well-configured corporate laptops and robust Windows policies for controlling access, so this isn&#8217;t really a problem&#8217;. Sure, you&#8217;re not wrong, but how many exceptions do you have registered? How many BYOD laptops do you have on your network?
How many of your developers also use their personal laptops as sandboxes, test boxes or dev environments?<br>In theory, a well-configured corporate device should stop unauthorized binaries from running. In practice, though, especially within SME operations, the &#8220;exception&#8221; is often the rule. This risk manifests in two primary ways:</p><ol><li><p><strong>The BYOD Blind Spot:</strong> Employees use personal devices for corporate work, often with nothing more than a VPN and Office 365 installed. These devices are &#8220;black boxes&#8221; to IT, yet they carry corporate data.</p></li><li><p><strong>The Developer Trade-off:</strong> To &#8220;make life easier&#8221; for engineering teams, corporate devices are often &#8220;opened&#8221;&#8212;granting local admin rights or relaxed execution policies.</p></li></ol><p>We all agree this is poor practice, but it is rife. It&#8217;s the deviations from the rules, the small exceptions made for productivity, that create the biggest vulnerabilities. In the new and emerging ecosystem of local AI agents, the tool isn&#8217;t &#8220;out there&#8221; in a vendor&#8217;s secure cloud. <strong>The tool is already inside your boundary.</strong> It is already behind your firewall, using your most sensitive keys, and frequently facing the internet directly without protection. This means it isn&#8217;t just a risk of losing data; it&#8217;s a loss of control over the very infrastructure we use to protect the data.</p><h2>Why Traditional Security Controls Fail Here</h2><p>The danger of Shadow Infrastructure isn&#8217;t its complexity; it&#8217;s its invisibility. Your security stack is designed to hunt for anomalies, but Shadow Infrastructure is a master of mimicry.</p><p>To your security tools, everything looks like business as usual. EDR sees authorized user activity; DLP sees approved credentials; network monitoring sees standard HTTPS traffic to Slack or AWS.
The attack is invisible because it looks exactly like the work you&#8217;ve already authorized. Think about the blind spots in the legacy security stack.</p><ul><li><p><strong>Firewalls &amp; Proxies</strong>: Because the process is running locally, traffic is often encrypted or masquerades as standard outbound HTTPS. The firewall sees a connection to a legitimate cloud service, not the unauthorized engine initiating it. You might even have whitelisted ports and traffic for other official/approved tools to use.</p></li><li><p><strong>DLP (Data Loss Prevention)</strong>: Traditional DLP flags unauthorized access. Here, data is accessed via approved credentials (API keys or OAuth tokens). To the DLP, this isn&#8217;t a breach; it&#8217;s a Tuesday. To illustrate the point, if you haven&#8217;t done it already: ask your team how many private SSH keys they use to validate access, then scan to see how many you find that lack owners or that are &#8220;shared&#8221; between developers to make access easier and smoother.</p></li><li><p><strong>EDR/XDR</strong>: An endpoint sensor might record a &#8220;Python process calling curl,&#8221; but it lacks the context to distinguish a legitimate build script from a Shadow AI agent exfiltrating data. It looks like &#8220;Developer Activity,&#8221; so it gets a pass.</p></li><li><p><strong>Secrets Scanning</strong>: These tools are only effective if they know where the configurations live. Shadow Infrastructure often hides its keys in personal directories or non-standard local paths that enterprise scanners never touch.</p></li><li><p><strong>CSPM (Cloud Security Posture Management)</strong>: CSPM is built to monitor your corporate AWS/Azure environment. It is completely blind to personal hardware or &#8220;unopened&#8221; local infrastructure sitting on an engineer&#8217;s desk.</p></li><li><p><strong>The &#8220;Engine&#8221; Blind Spot</strong>: Our security model is built to find data in transit or at rest.
It is not built to monitor or govern a processing engine that we didn&#8217;t provision, especially one that uses authorized channels for unauthorized synthesis and action. This idea, that we have only ever had to govern data and never <em>processing engines</em>, is genuinely novel and worth exploring in a separate article.</p></li></ul><p>The fundamental gap is that the perimeter has dissolved. For decades, we&#8217;ve obsessed over the line between &#8220;corporate inside&#8221; and &#8220;the outside world.&#8221; The argument is old, and projects like the Jericho Forum laid the foundation for modern zero trust architectures when the traditional walls started crumbling. But even through this lens, &#8220;shadow infrastructure&#8221; blurs that new virtual perimeter. It allows unvetted, external-facing systems directly inside our trust boundary, authenticated with our own high-privilege keys.</p><p>The reality is that the call isn&#8217;t coming from outside the house; it&#8217;s coming from the unpermitted extension the engineer built in the basement. In this new reality, exemplified by OpenClaw, &#8220;Inside vs. Outside&#8221; becomes a dead concept. If a device has access to your keys and your network, it is the perimeter, regardless of who owns the hardware or where it&#8217;s sitting.</p><h2>Conclusion &#8211; Rethinking Where the Perimeter Actually Is</h2><p>So no, OpenClaw isn&#8217;t plotting to overthrow its human masters. And even though the hype cycle is obsessed with the spark of artificial general intelligence, it is our job as security leaders to pierce the veil and address the fundamentals. Let&#8217;s instead start a discussion about AI agents and talk about the <strong>fuel they&#8217;re being fed</strong>: our proprietary data, our credentials, our network access.</p><p>The boundary is no longer your firewall; it&#8217;s your API keys.
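Auditing where private keys actually live is a concrete first step, and it illustrates the secrets-scanning blind spot in the list above: enterprise scanners check known paths, while an exhaustive walk finds keys wherever convenience left them. A minimal sketch (the header markers are simplified; a real scanner covers many more key and credential formats):

```python
import os
from pathlib import Path

# First-line markers of common private-key file formats.
KEY_HEADERS = (b"-----BEGIN OPENSSH PRIVATE KEY-----",
               b"-----BEGIN RSA PRIVATE KEY-----",
               b"-----BEGIN EC PRIVATE KEY-----")

def find_private_keys(root: Path) -> list[Path]:
    """Walk `root` and return files whose first bytes look like a private key.

    The walk is exhaustive rather than path-based, which is the point:
    keys in personal directories or non-standard locations are found too.
    """
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            p = Path(dirpath) / name
            try:
                with open(p, "rb") as fh:
                    if fh.read(64).startswith(KEY_HEADERS):
                        hits.append(p)
            except OSError:
                continue  # unreadable files are skipped, not fatal
    return hits
```

Cross-referencing the resulting inventory against a list of known owners is what surfaces the orphaned and &#8220;shared&#8221; keys the DLP bullet above asks about.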
When engineers build shadow infrastructure, they&#8217;re creating a parallel, unmonitored, internet-facing extension of your enterprise. The greatest threat isn&#8217;t a machine thinking for itself; it&#8217;s a well-intentioned human who, in building a clever tool, has inadvertently built your organization&#8217;s next breach vector.</p><h4>Where Do We Go From Here?</h4><p>The uncomfortable truth is that we can&#8217;t hold back this tide. Engineers will continue to experiment with powerful tools. AI agents will become more capable, not less. The innovation genie is out of the bottle, and shoving it back in isn&#8217;t an option, nor should it be.</p><p>So the question isn&#8217;t &#8220;how do we prevent this?&#8221; It&#8217;s &#8220;how do we get ahead of it?&#8221;</p><p>Start by asking yourself: <strong>Do we even know what shadow infrastructure exists in our environment right now?</strong> Not the Shadow IT we&#8217;ve catalogued, but the layer beneath&#8212;the personal servers, the home lab deployments, the &#8220;productivity hacks&#8221; running with our credentials. If the answer is &#8220;probably not,&#8221; that&#8217;s your first problem to solve.</p><p>Then ask your executive team: <strong>What assurances do we need before we can safely enable AI-powered automation?</strong> Because the business will demand it. Your developers want it. Your competitors are already doing it. 
The choice isn&#8217;t between &#8220;AI agents&#8221; and &#8220;no AI agents&#8221;; it&#8217;s between <strong>controlled experimentation with guardrails</strong> and <strong>uncontrolled experimentation in the shadows</strong>.</p><p>Finally, ask your organization: <strong>Are we making it easier to do things securely than insecurely?</strong> If an engineer wants to automate their workflow with an AI agent, and the &#8220;approved path&#8221; takes three weeks of security review (I&#8217;m an optimist) while the &#8220;just install it on my laptop&#8221; path takes three minutes, you&#8217;ve already lost. The friction gap is where shadow infrastructure thrives.</p><p>The tide is coming. The question is whether we&#8217;re building seawalls or just standing on the beach, hoping it won&#8217;t reach us.</p><p>Our move isn&#8217;t to stifle innovation. It&#8217;s to refocus it, bring it into the light, and secure it, before the shadow grows too long to manage. Because the ultimate risk of this new wave of AI isn&#8217;t that it will think for itself. It&#8217;s that we will stop thinking for ourselves about what we feed it.</p><h1>More on security and AI?</h1><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;28fb21e6-34b3-4cb9-bd8a-6638f68201ea&quot;,&quot;caption&quot;:&quot;In less than a year, the world has witnessed a stunning acceleration in the adoption and sophistication of artificial intelligence (AI). From a cybersecurity perspective, this transformation is both awe-inspiring and concerning. In this short series, I&#8217;ll explore some of the key aspects of this shift and its impact on cybersecurity. 
What was cutting-ed&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The New Battlefield: AI in Cyber Attacks and Cyber Defence&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:58943547,&quot;name&quot;:&quot;Dennis Lindwall&quot;,&quot;bio&quot;:&quot;Cybersecurity, fintech &amp; digital risk &#8594; global patterns reshaping strategy &amp; resilience. No hype, just passion.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/427cdd1e-621f-4390-86ed-13d4742e1e5d_500x500.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-03-10T15:05:50.317Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!NogS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d13071b-422f-4b8e-8f02-011c374fa08d_1792x1024.webp&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.tekk-talk.com/p/the-new-battlefield-ai-in-cyber-attacks&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:158773122,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1784117,&quot;publication_name&quot;:&quot;TEKK Talk&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!W9ib!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ef2f0e7-7c57-4de7-a3a3-cabc82cad423_639x639.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div 
class="footnote-content"><p>Stochastic Parrots: Systems that&#8217;re great at mimicking the sounds/structure of language without any actual understanding of the meaning</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Because the AGI narrative is so pervasive in OpenClaw discourse, it's worth clarifying what AGI actually means and why current tools don't qualify. AGI means human-level intelligence across diverse domains with genuine understanding and autonomous reasoning. Current LLMs, including those powering OpenClaw, are sophisticated pattern-matching engines that predict likely token sequences based on training data. They don&#8217;t understand meaning, can&#8217;t reason from first principles, and require explicit prompting for every task. Even &#8220;liberal&#8221; AGI definitions that ignore consciousness still require flexible transfer learning and novel problem-solving, which LLMs cannot do. OpenClaw&#8217;s capabilities are impressive tool orchestration, not emergent intelligence. The frontier labs have a vested interest in claiming AGI is imminent (securing funding, regulatory capture, etc) but redefining AGI to fit current LLM capabilities is moving the goalposts, not achieving the goal. What we&#8217;re seeing with tools like OpenClaw is sophisticated algorithmic orchestration: statistical correlation and token prediction at scale, not consciousness or general intelligence.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>For clarity: Vibe coding is great for building scaffolding but not for understanding wider architectural constraints, considerations, and requirements. 
Vibe coding generates far more security bugs than human coding. But in capable hands, vibe coding takes away the tedious task of typing line by line, and developers can integrate more quickly and easily. Their job isn&#8217;t fundamentally different; we&#8217;ve just shifted their focus to more value-adding tasks. THIS is what most people misunderstand about LLM vibe-coding: &#8220;anyone can code&#8221; really means &#8220;anyone can generate code-shaped text that compiles.&#8221; That&#8217;s not the same thing as saying that the code is good, secure, or efficient. Vibe-coding empowers users to feel like software developers with a fraction of the skills. An LLM doesn&#8217;t distinguish between the expert and the amateur; it generates code for the task either way. Mature software development organisations will see the most benefit from vibe-coding because it integrates into their development pipelines and CI/CD; less mature or na&#239;ve organisations are at risk because they fail to recognise the inherent weaknesses of LLM coding tools.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>https://censys.com/blog/openclaw-in-the-wild-mapping-the-public-exposure-of-a-viral-ai-assistant</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>https://www.token.security/blog/the-clawdbot-enterprise-ai-risk-one-in-five-have-it-installed</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div 
class="footnote-content"><p>https://www.koi.ai/blog/clawhavoc-341-malicious-clawedbot-skills-found-by-the-bot-they-were-targeting</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Post-Script: The Great RISC-V Secession]]></title><description><![CDATA[New Evidence Shows the Paradigm Shift Is Already Underway]]></description><link>https://www.tekk-talk.com/p/post-script-the-great-risc-v-secession</link><guid isPermaLink="false">https://www.tekk-talk.com/p/post-script-the-great-risc-v-secession</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Thu, 29 Jan 2026 18:27:40 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mT1S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce916b0d-b75d-4643-8bf1-89dc08309ac2_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mT1S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce916b0d-b75d-4643-8bf1-89dc08309ac2_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mT1S!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce916b0d-b75d-4643-8bf1-89dc08309ac2_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!mT1S!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce916b0d-b75d-4643-8bf1-89dc08309ac2_1024x608.png 848w, 
https://substackcdn.com/image/fetch/$s_!mT1S!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce916b0d-b75d-4643-8bf1-89dc08309ac2_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!mT1S!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce916b0d-b75d-4643-8bf1-89dc08309ac2_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mT1S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce916b0d-b75d-4643-8bf1-89dc08309ac2_1024x608.png" width="1024" height="608" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ce916b0d-b75d-4643-8bf1-89dc08309ac2_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mT1S!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce916b0d-b75d-4643-8bf1-89dc08309ac2_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!mT1S!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce916b0d-b75d-4643-8bf1-89dc08309ac2_1024x608.png 848w, 
https://substackcdn.com/image/fetch/$s_!mT1S!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce916b0d-b75d-4643-8bf1-89dc08309ac2_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!mT1S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce916b0d-b75d-4643-8bf1-89dc08309ac2_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Fast Microchip for AI</figcaption></figure></div><p>In my previous article &#8220;<a 
href="https://www.tekk-talk.com/p/the-export-control-trap-how-the-west">The Export Control Trap</a>&#8221;, I argued US export controls were inadvertently funding China&#8217;s next computing paradigm. Within days of publication, new market data emerged that doesn&#8217;t just validate this thesis &#8211; it suggests the transformation is unfolding faster and more broadly than even the most pessimistic earlier projections anticipated. This warrants a post-script.</p><h2>The Inflection is Here Already</h2><p>While following up on a couple of data points after the last article, I found that the inflection point I was anticipating in the near future has in fact already arrived. Market data released in January 2026 should alarm anyone invested in continued Western AI dominance, and it demands close attention to the trajectory, impact, and interplay of market forces &#8211; not only model evolution and reasoning capabilities, but also the underlying technology enablers. What I note is:</p><h3><strong>First, the economic divergence is quantified.</strong></h3><p>Last year, DeepSeek-V3&#8217;s documented training run was estimated to have cost around $5.5 million, versus Western frontier models&#8217; $50-200 million &#8211; a 10-40x efficiency advantage. This is of course based on DeepSeek&#8217;s own published information, but even if the true number were two or three times as high, it would not change the challenge to the economic model held by Western AI frontier labs. On its own, DeepSeek&#8217;s achievement is noteworthy, but it also underscores the thesis that China is not replicating; it is forging its own strategy around the constraints. The efficiency gain is secured not through better hardware, but through architectural innovation forced by those constraints<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. 
Their 671 billion parameter model activates only 37 billion per token through Multi-head Latent Attention (MLA) and DeepSeekMoE architecture, achieving comparable performance with a <em><strong>fraction of the compute</strong></em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.</p><p>Media analysts contrasted this with widely cited estimates of tens to hundreds of millions of dollars in compute expenditures by leading AI frontier models, though no audited breakdown exists for models like GPT-5.2 or Sonnet 4.5, so broadly speaking all such claims remain stated costs, not confirmed costs<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>.</p><p>Inference economics tell another eyebrow-raising story: DeepSeek&#8217;s claimed cost structure delivers operations at $0.10 per million tokens while Western models remain at $1.00-$5.00, creating a 10x to 50x operational efficiency gap<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>. Sure, those costs will shrink, and over time the gap between Western and Chinese models may close, but the advantage conferred by a different architectural paradigm may persist until new technological breakthroughs swing the balance again.</p><p>Crucially, these are not forecasts or speculative models. They are measurements of a transformation already in motion.</p><h3><strong>Second, the alternative ecosystem has achieved critical mass. </strong></h3><p>A growing share of new AI accelerator startups in the APAC region now adopt RISC-V as their primary instruction set architecture, often explicitly to ensure sanction-resilient operations. This isn&#8217;t just about hedging; it&#8217;s driving a wholesale ecosystem pivot. 
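The efficiency claims above reduce to simple arithmetic. As a quick sanity check on the stated figures (a sketch in Python; the numbers are the claims cited in this section, not independently audited costs):

```python
# Back-of-envelope check of the efficiency figures cited above.
# All inputs are stated claims (DeepSeek's published numbers and
# analyst estimates), not audited measurements.

TOTAL_PARAMS_B = 671    # total model parameters, billions
ACTIVE_PARAMS_B = 37    # parameters activated per token via MoE routing

# Share of weights actually used per token
active_fraction = ACTIVE_PARAMS_B / TOTAL_PARAMS_B

# Claimed inference cost per million tokens, USD
deepseek_cost = 0.10
western_low, western_high = 1.00, 5.00

gap_low = western_low / deepseek_cost
gap_high = western_high / deepseek_cost

print(f"Active parameters per token: {active_fraction:.1%}")
print(f"Operational efficiency gap: {gap_low:.0f}x to {gap_high:.0f}x")
```

Running this reproduces the roughly 5.5% activation share and the 10x-50x operational gap quoted above; the point is that the claimed gap follows directly from the claimed per-token prices.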
Market forecasts project RISC-V growth from $2.49 billion in 2025 to $10.77 billion by 2030, with 2026 orderbooks already tracking 35% growth to $3.34 billion<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a>. That market share likely came as a shock to many analysts, and as the position evolves through 2026 and beyond, it suggests that in the markets RISC-V has entered so far &#8211; chiefly standardised MCUs for IoT and automotive use-cases &#8211; the figure represents a noteworthy foothold for this open-standard instruction set architecture (ISA). The trajectory suggests we&#8217;ve crossed the threshold from experimental alternative to industry standard in formation.</p><h3><strong>Third, the incumbent has validated the challenger&#8217;s path</strong>. </h3><p>NVIDIA has ported its CUDA platform to support the RISC-V instruction set architecture on the CPU side, enabling RISC-V cores to orchestrate workloads in CUDA contexts<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a>. When the dominant player validates the challenger&#8217;s architecture through direct technical integration, the paradigm shift effectively becomes official policy rather than speculation. NVIDIA&#8217;s shipment of over one billion RISC-V cores in 2024 alone &#8211; primarily for system management and secure boot tasks &#8211; demonstrates this isn&#8217;t symbolic support but operational deployment at scale<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a>. Critics may say that RISC-V doesn&#8217;t materially change the playing field for the 2, 3, and 5nm chips, but that is only true in the short term. 
In a longer-term perspective, RISC-V showcases a diverging architectural approach to the AI compute problem, and although it is not a full Kuhnian paradigm shift today, it marks the &#8220;development-by-accumulation&#8221; and the challenge to &#8220;normal science&#8221; that form the established path towards a shift in paradigm itself. Export controls on advanced AI chips have incentivized Chinese ecosystem actors to pursue greater domestic autonomy in both hardware and software AI stacks, and DeepSeek&#8217;s deployment strategy and partnerships with domestic toolchains illustrate how this dynamic influences product positioning.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></p><h2>Secession Documented: Hardware &amp; Software Sovereignty at Scale</h2><p>RISC-V has moved from academic curiosity and edge-case challenger to institutionalized national strategy. China has integrated it into its 14th Five-Year Plan as an &#8220;insurance policy&#8221; against restricted access to ARM and x86 architectures<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a>. The significance extends beyond market projections because RISC-V represents what China now codifies as a &#8220;geopolitically neutral&#8221; instruction set architecture, a defensive posture with profound offensive implications.</p><p>Unlike x86, which maintains a rigid, frozen instruction set controlled by Intel and AMD, RISC-V allows Chinese firms to add custom AI instructions directly into silicon. This enables chips that are highly optimized for specific AI workloads, such as LLM inference, compared to general-purpose architectures. As an open-source standard governed by a foundation based in neutral Switzerland, RISC-V is technically unsanctionable. 
Washington cannot turn off the architecture the way it can restrict ARM licenses or ASML lithography exports.</p><p>Hardware sovereignty has transitioned from development to operational deployment. Alibaba&#8217;s XuanTie C930 server processor, launched in March 2025, is now shipping at volume to domestic datacentres<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a>. This 64-bit, RVA23-compliant processor doesn&#8217;t merely imitate ARM; it eliminates the licensing dependencies Washington currently leverages for technological containment. Companies like RIVAI and Nuclei System are leveraging RISC-V&#8217;s modular vector extensions to build hardware optimized for post-transformer architectures, particularly those emphasizing sparse computation and efficient inference. As a result, these firms are not deeply locked into NVIDIA-centric design assumptions, in part because sanctions limited early dependence on that ecosystem.</p><p>The strategic positioning extends to supply chain control. By the end of 2026, China is projected to control 45% of global capacity for 28nm-and-above mature node production. 
(Projections that China will hit 39% by 2027 are based on earlier market research, but when factoring in &#8220;under-construction&#8221; capacity that comes online throughout 2026, the 45% figure is frequently used by multiple sources to describe China&#8217;s share of new global capacity additions.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a>) This isn&#8217;t just market share; it also includes control over the foundational chips that power automobiles, industrial automation systems, IoT devices, power grid infrastructure, medical equipment, and defence systems. While Western policy obsesses over 3nm cutting-edge processes, China is systematically dominating production for the nodes that run the physical world&#8217;s existing infrastructure.</p><p>Software stack maturation is accelerating in parallel with hardware deployment. Moore Threads&#8217; MUSA (Moore Threads Unified System Architecture) platform ships with the Musify toolkit designed to lower barriers for porting CUDA-originated code to domestic GPU products<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a>. 
While precise performance relative to native CUDA workloads varies by application and ecosystem maturity, the existence of automated translation tools means the barrier to switching isn&#8217;t &#8220;rewrite everything from scratch&#8221; but &#8220;port and optimize iteratively.&#8221; This mirrors the Linux playbook: initial compatibility through translation layers, then progressive native optimization as the ecosystem matures.</p><p>Huawei&#8217;s CANN (Compute Architecture for Neural Networks) provides a comprehensive layered software stack for Ascend accelerators, positioned as a complete AI compute platform with integration into popular frameworks (see https://en.wikipedia.org/wiki/MindSpore). While not yet standardized as a universal solution across all domestic hardware vendors, CANN represents a key pillar of China&#8217;s AI software ecosystem. The lack of complete standardization isn&#8217;t weakness; rather, it&#8217;s the natural stage of early ecosystem development, comparable to Unix fragmentation before POSIX standardization emerged from competitive pressure.</p><p>The broader AI software ecosystem also includes open and vendor-agnostic systems like Triton and JAX, which reduce dependency on handwritten CUDA code across the entire industry. These frameworks provide hardware abstraction that benefits China without requiring Chinese development leadership. Every framework that abstracts away hardware specifics makes NVIDIA&#8217;s CUDA advantage more fragile, and China benefits from Western companies&#8217; own efforts to reduce vendor lock-in.</p><p>And just to be clear: this isn&#8217;t capability development in laboratory settings. It&#8217;s operational deployment at scale with shipping products, established supply chains, and documented performance metrics.</p><h2>Counterpoint: Why the Scaling Paradigm May Still Prevail</h2><p>Before concluding, a critical counterargument must be engaged. 
The scaling paradigm that built the current AI era is not defenceless, and its adaptive capacity remains formidable.</p><p>A fair critique of this analysis is that it may overstate the durability of efficiency-driven architectural divergence while understating the adaptive capacity of the dominant scaling paradigm. Western AI leaders are not blind to efficiency constraints, nor are they locked into a single architectural trajectory by inertia alone. The same organizations investing in massive compute infrastructure are also leading advances in sparsity, quantization, distillation, and inference optimization. If efficiency proves decisive, there is no structural barrier preventing Western ecosystems from incorporating those innovations rapidly, especially given their access to capital, manufacturing, and global talent.</p><p>Moreover, frontier performance remains tightly coupled to scale in domains that matter strategically, including multimodal reasoning, scientific discovery, and generalization under distribution shift. For these tasks, access to leading-edge fabrication, extreme bandwidth memory, and large-scale training clusters may continue to confer decisive advantages that efficiency alone cannot fully substitute. If future breakthroughs remain compute-hungry, the scaling paradigm may not be transitional but foundational, and the current wave of efficiency gains may represent optimization rather than displacement.</p><p>Finally, the Chinese ecosystem&#8217;s embrace of alternative architectures carries its own risks. Fragmentation across instruction sets, toolchains, and software stacks can slow innovation, raise integration costs, and limit the transferability of breakthroughs. Open standards like RISC-V reduce dependency but do not automatically guarantee coherence or performance leadership. 
History offers examples where open, modular ecosystems struggled to outperform tightly integrated incumbents over long periods.</p><p>From this perspective, the current divergence could reflect temporary adaptation to constraint rather than the emergence of a superior paradigm. Only time will tell. If constraints ease or if scale-dependent breakthroughs dominate future progress, Western architectural investments could remain not only relevant but decisive.</p><h2>Who Is Actually Trapped</h2><p>Interestingly, we are witnessing the birth of two parallel, increasingly incompatible innovation ecosystems. The West continues optimizing within an established paradigm, protected by export controls and sustained by massive capital investment in known architectures. China, denied access to that paradigm, is being forced to build the alternative.</p><p>The question my previous article raised remains unanswered: who is actually trapped by the Export Control regime? The RISC-V data suggests we&#8217;re watching the answer materialize in real time.</p><p>We are not merely building a fortress around our current advantages. We are pouring concrete, in the form of trillion-dollar investments and rigid policy, into fortifying a paradigm whose foundations are already shifting. China is being forced to build the future in what appears to be the &#8220;left field&#8221; of efficiency and architectural autonomy.</p><p>The trap isn&#8217;t for them. It&#8217;s for us. We are the ones being locked into a trillion-dollar ditch of our own making.</p><p>The Great RISC-V Secession isn&#8217;t just about chips. It&#8217;s about who gets to define what &#8220;advanced&#8221; means in the next era of computing. 
And right now, the player being forced to redefine it has more incentive, more capital committed to alternatives, and far less to lose than the incumbent defending yesterday&#8217;s paradigm.</p><p></p><h1>If you want more - here&#8217;s the first article on this topic</h1><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;738b73be-621b-423a-a79a-d5e9d9372a00&quot;,&quot;caption&quot;:&quot;When Anthropic&#8217;s CEO Dario Amodei sat down at Davos last week, he delivered a warning that reverberated through tech circles. Speaking about the Biden administration's decision to approve exports of Nvidia H200 chips to China, by a move he compared to selling weapons of mass destruction to an adversary.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Export Control Trap: How the West is Subsidizing China&#8217;s Chip Empire&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:58943547,&quot;name&quot;:&quot;Dennis Lindwall&quot;,&quot;bio&quot;:&quot;Cybersecurity, fintech &amp; digital risk &#8594; global patterns reshaping strategy &amp; resilience. 
No hype, just passion.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/427cdd1e-621f-4390-86ed-13d4742e1e5d_500x500.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-01-23T00:36:40.795Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!S9Dk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ee381fe-1661-4848-beda-107c8900af98_1024x608.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.tekk-talk.com/p/the-export-control-trap-how-the-west&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:185470244,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:1,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1784117,&quot;publication_name&quot;:&quot;TEKK Talk&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!W9ib!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ef2f0e7-7c57-4de7-a3a3-cabc82cad423_639x639.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>https://www.reuters.com/technology/artificial-intelligence/what-is-deepseek-why-is-it-disrupting-ai-sector-2025-01-27/</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>https://arxiv.org/pdf/2412.19437?</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a 
id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>https://www.lemonde.fr/en/economy/article/2025/01/28/chinese-start-up-deepseek-disrupts-the-artificial-intelligence-sector_6737513_19.html</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>https://www.reuters.com/technology/chinas-deepseek-claims-theoretical-cost-profit-ratio-545-per-day-2025-03-01/</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>https://www.globenewswire.com/news-release/2026/01/27/3226141/28124/en/Reduced-Instruction-Set-Computer-V-Risc-V-Market-Report-2026-10-77-Bn-Opportunities-Trends-Competitive-Landscape-Strategies-and-Forecasts-2020-2025-2025-2030F-2035F.html</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>https://riscv.org/blog/risc-v-set-to-announce-25-market-penetration-open-standard-isa-is-ahead-of-schedule-securing-fast-growing-silicon-footprint</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>https://www.tomshardware.com/pc-components/gpus/nvidias-cuda-platform-now-supports-risc-v-support-brings-open-source-instruction-set-to-ai-platforms-joining-x86-and-arm</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" 
class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>While specific core counts for any single product such as the Rubin platform are not confirmed in open primary sources, Nvidia&#8217;s broader use of embedded RISC-V for system management and secure boot tasks has been noted in industry discussion and in RISC-V International&#8217;s reports on shipments of RISC-V cores by Nvidia. (Direct product disclosures specific to Rubin remain sparse in public documentation.)</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>https://apnews.com/article/00c594310b22afbf150559d08b43d3a5</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>RISC-V and China&#8217;s 14th Five-Year Plan: China&#8217;s 14th Five-Year Plan (2021&#8211;2025) and its associated industrial policy guidance place strong emphasis on semiconductor self-reliance and the reduction of dependence on foreign proprietary technologies. Within this framework, RISC-V has emerged as a strategically promoted instruction set architecture, widely discussed in official-adjacent policy interpretations and industry guidance as a pathway to reduce long-term reliance on ARM and x86. Chinese financial and policy media, including Sina Corporation Finance (https://finance.sina.com.cn/tech/roll/2025-04-18/doc-inetmqpy6125990.shtml), have explicitly linked RISC-V development to Five-Year Plan objectives, citing ministry-level support, industry alliance formation, and ecosystem investment. 
This policy direction has been reinforced by additional international reporting by Reuters (https://www.reuters.com/technology/china-publish-policy-boost-risc-v-chip-use-nationwide-sources-2025-03-04) noting that Chinese authorities are preparing nationwide guidance to promote adoption of open-source RISC-V architectures as part of a broader effort to mitigate technology export risks and strengthen domestic control over core computing platforms.</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>https://www.tomshardware.com/pc-components/cpus/alibaba-launches-risc-v-based-xuantie-c930-server-cpu-ai-hpc-chip-ships-this-month-more-designs-to-follow</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>https://www.techerati.com/news-hub/alibaba-unveils-server-grade-high-performance-risc-v-chip/#:~:text=According%20to%20Alibaba%2C%20the%20C930,for%20cloud%20and%20server%20deployment</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>https://www.scmp.com/tech/article/3336933/china-remain-worlds-biggest-buyer-chipmaking-equipment-through-2027-semi</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div 
class="footnote-content"><p>https://www.trendforce.com/news/2026/01/12/news-chinas-domestic-chip-equipment-adoption-beats-2025-target-at-35-led-by-naura-amec/#:~:text=Meanwhile%2C%20ACM%20Research&#8217;s%20single%2Dwafer,90%25%2C%20as%20Anue%20highlights</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p>https://www.tomshardware.com/tech-industry/chinas-mature-chips-to-make-up-28-percent-of-world-production-creating-oversupply-western-companies-express-concern-for-their-survival</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p>https://www.tomshardware.com/pc-components/gpus/chinas-moore-threads-polishes-homegrown-cuda-alternative-musa-supports-porting-cuda-code-using-musify-toolkit</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[The Export Control Trap: How the West is Subsidizing China’s Chip Empire]]></title><description><![CDATA[By forcing China to abandon the silicon status quo, Washington is inadvertently funding the next computing paradigm.]]></description><link>https://www.tekk-talk.com/p/the-export-control-trap-how-the-west</link><guid isPermaLink="false">https://www.tekk-talk.com/p/the-export-control-trap-how-the-west</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Fri, 23 Jan 2026 00:36:40 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!S9Dk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ee381fe-1661-4848-beda-107c8900af98_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 
is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!S9Dk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ee381fe-1661-4848-beda-107c8900af98_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!S9Dk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ee381fe-1661-4848-beda-107c8900af98_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!S9Dk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ee381fe-1661-4848-beda-107c8900af98_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!S9Dk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ee381fe-1661-4848-beda-107c8900af98_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!S9Dk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ee381fe-1661-4848-beda-107c8900af98_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!S9Dk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ee381fe-1661-4848-beda-107c8900af98_1024x608.png" width="1024" height="608" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7ee381fe-1661-4848-beda-107c8900af98_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!S9Dk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ee381fe-1661-4848-beda-107c8900af98_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!S9Dk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ee381fe-1661-4848-beda-107c8900af98_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!S9Dk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ee381fe-1661-4848-beda-107c8900af98_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!S9Dk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ee381fe-1661-4848-beda-107c8900af98_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">The future of AI microchip manufacturing</figcaption></figure></div><p>When Anthropic&#8217;s CEO Dario Amodei sat down at Davos last week, he delivered a warning that reverberated through tech circles.
He was speaking about the Biden administration&#8217;s decision to approve exports of Nvidia H200 chips to China, a move he compared to selling weapons of mass destruction to an adversary.</p><blockquote><p>&#8220;It&#8217;s a bit like selling nuclear weapons to North Korea and [bragging that] Boeing made the casings.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p></blockquote><p>While the hyperbole was striking, it was not nearly as striking as what the statement revealed: a fundamental misunderstanding, or perhaps a deliberate misrepresentation, of how technological competition plays out between nations, not companies.</p><p>What Amodei framed as a national security imperative is, in reality, classic rent-seeking protectionism dressed in patriotic clothing. And in pursuing short-term protectionism, he may actually be hastening the very outcome he claims to fear.</p><p>I typically write about cybersecurity and AI, not semiconductor industrial policy. But Amodei&#8217;s misrepresentation of the challenges facing the industry today, and what I&#8217;m actually seeing happen in China right now, demands a response. My unusual qualifications (!) include tackling strategic issues in the semiconductor industry across Europe, the US, and Asia earlier in my career. That experience makes me confident in saying: Amodei is either fundamentally confused about how this works, or he&#8217;s deliberately misrepresenting the landscape to serve his own interests.</p><h2>The False Premise of Chip Export Controls</h2><p>Amodei&#8217;s argument rests on what appears to be simple logic: restrict China&#8217;s access to advanced chips, slow their AI development, and thus maintain American dominance.
Unfortunately, the logic is wrong, and this kind of linear thinking ignores decades of industrial policy history and any real understanding of China.</p><p>Ask yourself this: &#8220;<em>what happens when you deny a motivated adversary access to existing technology?</em>&#8221; When that adversary is extremely well funded and operates an economy where national strategy takes precedence over short-term corporate profitability, you don&#8217;t stop their progress; you force them to innovate around the constraint, and sometimes they leapfrog you entirely.</p><p>Consider the evidence already in front of us. China has spent the last 15 years rolling out national strategies &#8211; one after the other: Belt &amp; Road Initiative (from 2013), Made In China 2025 (from 2015), Next-Generation AI Development Plan (from 2017). Each of these has surprised the world with its determination and follow-through. A case in point for our AI chip discussion is DeepSeek&#8217;s recent emergence, which demonstrated that Chinese researchers can achieve competitive AI results with significantly less computational power through algorithmic efficiency.<br>So, if the bottleneck isn&#8217;t really the chips, then what exactly are the US export controls protecting?</p><p>The answer becomes clear when you follow the money. Anthropic&#8217;s valuation is predicated on a specific economic model: that frontier AI requires exclusive access to massive, cutting-edge compute. This model justifies vast capital expenditure and creates a moat for those who can afford it. Massive data centres built around the latest microchip infrastructure, and continuous chip upgrades every 12-18 months, create massive on- and off-balance-sheet investments that in part justify the astronomical valuations of Anthropic, OpenAI and their US/European competition. I am not alone in questioning the fragility of the underlying economic model.
So if that assumption collapses because competitors are able to achieve similar results through algorithmic efficiency or alternative architectures, then that multi-billion-dollar moat starts to look like a very expensive ditch. Amodei has a direct financial interest in maintaining the belief that cutting-edge chips are an irreplaceable and scarce commodity; that is what he wants to protect with the export controls.</p><h2>China&#8217;s Strategic Patience and Forced Innovation</h2><p>What the USA and Europe consistently underestimate is China&#8217;s capacity for strategic patience combined with forced innovation. When denied a technology, China doesn&#8217;t retreat; it mobilizes to build the industrial base to replicate and eventually surpass it. The numbers on construction of fabrication plants (&#8216;fabs&#8217;) tell that story with remarkable clarity. According to a 2021 CSET study, between 1990 and 2020, China built 32 large fabs capable of producing 100,000 or more wafer starts per month. During the same period, the rest of the world, <em><strong>combined</strong></em>, built 24.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>This growth wasn&#8217;t random industrial activity. It was deliberate execution of a long-term national strategy. Some context is needed here. At the beginning of the 2000s, a new fab used to take more than 10 years from plan to commencing operations, and those numbers stuck in analysts&#8217; minds. Today, depending on location, it takes between 3 and 8 years, with the US pushing the upper boundary on build time due to very long environmental permitting and infrastructure upgrade requirements. You can&#8217;t just build a fab anywhere, given the massive energy, water and environmental requirements.
China, on the other hand, like Singapore and Taiwan, offers &#8220;shovel-ready&#8221; land for greenfield plants, speeding up the process for companies that want to invest and cutting the red tape &#8211; a very different approach from that of the US or Europe. For clarity, the comparative fab development timelines (US vs. China) are quite interesting<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. While the global average construction time for a fab is approximately 682 days, that number excludes the critical pre-construction and permitting phase, and the comparison gets really interesting once those data points are included. In the United States, total development for a greenfield fab currently spans 7 to 8 years. This includes an average construction period of 2.5 years (918 days), preceded by an intensive regulatory review phase and infrastructure build-out. The primary bottleneck is the federal National Environmental Policy Act (NEPA) review, which averages 4.5 years, alongside sequential efforts to upgrade local utilities to meet the massive power and water needs that a fab requires.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>In contrast, China has compressed the total lifecycle to approximately 2.5 to 3 years. This is achieved partly through a marginally faster construction phase, averaging 1 year and 10 months (675 days), but mainly by leveraging state-led &#8220;shovel-ready&#8221; industrial estates. By providing plots with pre-installed utility infrastructure and facilitating expedited, parallel permitting, China&#8217;s pre-construction phase is counted in months, not years.
This systemic disparity supports the hypothesis that, without significant regulatory reform, US-based projects face a &#8220;time-tax&#8221; that may challenge the economic viability of all but the most advanced semiconductor nodes.</p><p>In May 2024, China launched the third phase of its National Integrated Circuit Industry Investment Fund, commonly known as Big Fund III, with a massive budget allocation that, on paper, looks as impressive as the total U.S. CHIPS Act allocation for the semiconductor industry. But this is where the similarities end. The true scale of China&#8217;s commitment becomes evident only when examining its capital deployment mechanism alongside Western counterparts.</p><p>While the U.S. CHIPS Act&#8217;s headline figure of $52.7 billion and the EU Chips Act&#8217;s &#8364;43 billion mobilization target appear numerically comparable to China&#8217;s $47.5 billion &#8220;Big Fund III,&#8221; this surface-level comparison obscures a critical divergence in financial potency and speed. China&#8217;s fund constitutes a massive, single pool of state capital designed for direct equity investment. This &#8220;lead investor&#8221; status allows the central government to exert a massive capital multiplier, effectively commanding local governments, state-owned banks, and private entities to match its contribution at a three-to-five-fold ratio. In contrast, the U.S. and EU frameworks rely on subsidies and tax credits to &#8220;nudge&#8221; the market, leaving the ultimate pace of development to the fluctuating risk appetite of private shareholders and complex state-aid approvals. The West is relying on market signals to invite progress; China is using sovereign capital to command it.</p><p>In direct application and speed of allocation &#8211; the key metrics for rapid industrial transformation &#8211; the financial architecture of China&#8217;s state-led model heavily tips the scales in its favour.
It deploys concentrated capital with a strategic urgency that market-oriented governments, with their necessary governance structures, red tape, and oversight, are not designed to match.</p><p>The Chinese Big Fund III, therefore, represents <em><strong>far more</strong></em> than an effort to build additional fabs. It is the financial engine for a comprehensive national strategy aimed at controlling every critical link in the semiconductor supply chain: from lithography tools, etching platforms, and inspection systems to the underlying EDA software, photoresists, specialty gases, and wafer materials. This approach is the logical culmination of a hard-learned lesson from previous investment rounds: that technological self-sufficiency cannot be achieved by leading in only one segment. True independence, which also means creating insulation from external sanctions or externally imposed constraints, requires building, owning, or controlling the entire value chain. Big Fund III is the capital committed to that monumental task.</p><h2>The Architecture Paradigm Shift</h2><p>This is where the protectionist mindset becomes dangerously myopic. The microchip industry is not static and never was; it is today undergoing a fundamental architectural shift from CPU-centric computing to a new era of parallel and accelerated processing (GPU/TPU/neuromorphic). AI&#8217;s defining characteristic &#8211; which Anthropic and Amodei understand perfectly well &#8211; is its insatiable demand for chips that excel not merely at speed, but at massive-scale parallel computation. This is the very workload that rendered traditional CPUs inadequate and catapulted Nvidia&#8217;s GPU architecture to dominance. It is also why OpenAI, Anthropic, and others pour capital into Nvidia, which in turn fuels the AI companies&#8217; continued growth requirements (at least on paper).
Their goal is not merely to secure their own supply, but to stimulate production and R&amp;D into the very architectures Nvidia will then sell back to them.</p><p>But dominant paradigms are never permanent. Paradigm shifts create moments of maximum vulnerability for incumbents and maximum opportunity for challengers. The immense scale of the required investment locks leaders into their chosen technology path. A cutting-edge fab represents a multi-billion-dollar gamble on a specific technological roadmap. Its core tooling, like ASML&#8217;s Extreme Ultraviolet (EUV) lithography machines (behemoths comprising 100,000 parts, 2 kilometres of hosing, and requiring 40 shipping containers and three cargo planes to transport<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a>), cannot be easily swapped out. This inertia is the incumbent&#8217;s ultimate vulnerability.</p><p>Nvidia&#8217;s current supremacy is tethered to this specific computational paradigm. The risk for the West &#8211; primarily the EU and US &#8211; is that this advantage is contingent. U.S. export controls temporarily create an artificial technology threshold, but they also actively compel China to abandon the established path of moderate growth. Denied access to the cutting edge of chip architecture and tooling, China is not standing still. It is channelling its vast resources, as exemplified by Big Fund III, into funding the next frontier. We are already seeing heavy investment in alternative architectures like the open-source RISC-V instruction set, a direct effort to bypass Western-controlled standards like Arm and x86. China is not just trying to replicate TSMC; it is attempting to control the frame and define the rules for tomorrow&#8217;s race.</p><p>History offers us a powerful and sobering reminder. Consider Nokia.
In 2007, it commanded nearly half of the global mobile phone market through unmatched hardware scale, supply chain mastery, and lightning-fast response to customer preferences. By 2013, its share was effectively zero. Its downfall was not due to a competitor building better hardware within Nokia&#8217;s paradigm. It collapsed because the paradigm itself shifted from hardware to software and ecosystems. Nokia&#8217;s immense manufacturing prowess and deep technological lock-in became irrelevant almost overnight. It didn&#8217;t gradually lose market share. It just became irrelevant.</p><p>The parallel is clear, and the lesson is key. By focusing export controls on preserving a transient hardware advantage in the current architectural paradigm, the U.S. is ignoring Nokia&#8217;s fatal mistake. Amodei, in advocating for stricter controls, is actively creating the strategic competitor he claims to fear. In forcing China to innovate elsewhere, the U.S. may be inadvertently funding the very architecture that could render its own trillion-dollar investments obsolete, including the very fabs the CHIPS Act seeks to build on American soil.</p><h2>The Mature Node Advantage</h2><p>While Western industrial policy obsesses over cutting-edge 3nm and 5nm processes, China has been systematically dominating production for mature nodes (28nm and above). This is the economically critical &#8211; and strategically decisive &#8211; battleground that receives less attention but matters profoundly more. The modern world does not run on the latest smartphone chips alone; it is built on mature nodes that power everything from automotive electronics and industrial robotics to IoT devices, medical equipment, and foundational defence systems.</p><p>China&#8217;s accelerating control over this sector grants it massive, structural leverage. 
The headline-grabbing advanced chips enable peak performance, but mature nodes represent the bedrock of global manufacturing &#8211; where volume, reliability, and strategic dependency are cemented. Mastery here provides more than just economic clout; it creates a stable platform for vertical integration and iterative innovation. By perfecting production on mature technologies, China builds the expertise, supply chains, and capital reserves necessary to fund the risky climb to the next frontier.</p><p>This strategy of mastering the foundational to enable the advanced is already visible. In 2025, the Chinese firm SiCarrier &#8211; founded only three years prior &#8211; publicly advertised pathways to produce advanced-node chips using multi-patterning techniques that bypass the need for ASML&#8217;s EUV lithography.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> Critics rightly point out that making 3nm chips without EUV is exponentially more expensive and suffers from dismal yields. And in a Western market, these would be disqualifying commercial failures. However, China views these &#8220;good enough&#8221; pathways as essential bridges to keep process knowledge alive. Even the corruption and &#8220;ghost fabs&#8221; revealed in earlier iterations of the Big Fund initiatives, while scandalous, functioned as a brutal form of &#8220;learning tax.&#8221; Beijing is willing to pay the price of trial, error, and even graft to build the institutional &#8220;scar tissue&#8221; required for eventual sovereignty.</p><p>The ultimate insight for the West is not that China will eventually build better versions of our current chips; it is that they are being forced to make our current chips irrelevant. By perfecting photonic computing or neuromorphic architectures, which do not rely on the same lithographic bottlenecks, China isn&#8217;t just trying to catch up in the current race. 
They are trying to move the finish line.</p><p>Essentially, by securing the industrial base everyone else takes for granted, China isn&#8217;t just playing catch-up in a race defined by others. It is building the resilient, integrated, and experientially rich ecosystem from which the next leap forward is most likely to emerge.</p><h2>The Self-Fulfilling Prophecy</h2><p>The CHIPS Act and US export restrictions have acted as a catalyst, removing any internal Chinese debate about the necessity of total self-reliance. What was once one policy option among many is now a civilizational imperative. This is how protectionism backfires.</p><p>When survival is at stake, the calculus changes entirely. Cost overruns become acceptable. &#8220;Good enough&#8221; solutions become viable stepping stones. The full force of state capital, talent mobilization, and industrial policy gets brought to bear with 15-year time horizons that make quarterly earnings look absurd. This is not hyperbole; it is the cold logic of realpolitik.</p><p>China has demonstrated this playbook before: high-speed rail, telecommunications infrastructure, electric vehicles. In each case, initial Western dismissiveness (&#8220;<em>they&#8217;re just copying</em>&#8221;) gave way to concern about quality (&#8220;<em>it&#8217;s cheap but inferior</em>&#8221;), then sudden realization that China wasn&#8217;t just competing but leading. The semiconductor challenge is orders of magnitude harder, but so is the level of resource commitment being applied.</p><p>The central strategic miscalculation of Western policy is viewing China&#8217;s semiconductor ambition through a commercial lens rather than understanding it as an existential state project. The CHIPS Act aimed at protecting a lead; China sees it as technological containment that validates its need for a completely autonomous stack. America is optimizing for commercial viability and return on investment.
China is optimizing for strategic autonomy, accepting that some facilities might never make commercial sense in a pure market framework.</p><h2>Where Amodei&#8217;s Logic Fails</h2><p>Returning to Amodei&#8217;s nuclear weapons analogy: it reveals a fundamental category error. A nuclear weapon is a static, finished product &#8211; the threat is the object itself. Microchips, however, are dynamic tools of enablement. If Anthropic&#8217;s true competitive advantage lay in superior algorithms, safety protocols, and architectural brilliance, then a competitor&#8217;s access to H200s would be a hurdle, not an existential threat.</p><p>The vehemence of his opposition suggests a &#8220;Wizard of Oz&#8221; problem: he is shouting to protect the curtain because the magic behind it is less durable than the $40 billion valuation suggests. DeepSeek&#8217;s efficiency breakthrough was a &#8220;Sputnik moment&#8221; for the scaling-law true believers; it proved that the multi-billion-dollar compute moat is not a permanent fortress, but a temporary advantage. To be clear, while DeepSeek is a brilliant proof of algorithmic efficiency, it was still trained on thousands of Nvidia&#8217;s H800 chips. Companies like DeepSeek may still need today&#8217;s high-end chip architecture, but the deeper lesson is that the quantity of chips required for a frontier model is much lower than American CEOs want investors to believe.</p><p>When the &#8216;compute moat&#8217; can be bypassed with clever math, the trillion-dollar hardware advantage starts to look like a stranded asset. But that&#8217;s a narrative that exposes Amodei&#8217;s hidden agenda.</p><p>By lobbying for export restrictions, Amodei is performing textbook rent-seeking: asking the state to handicap more efficient competitors under the guise of national security.
This is the ultimate irony: the &#8220;safety&#8221; he advocates for is not for the public, but for a fragile business model that requires chips to remain scarce and expensive to justify its own astronomical costs.</p><h2>The Long Game</h2><p>The decoupling of the world&#8217;s two largest economies is not just a trade war; it is the birth of two parallel, competing innovation systems.</p><p>For decades, China followed the <em><strong>tao guang yang hui</strong></em> (&#38892;&#20809;&#20859;&#26214;) strategy, which means &#8220;hide your strength, bide your time&#8221;. DeepSeek was the signal that the &#8220;biding&#8221; period is over. While Western analysts were busy dismissing Chinese capabilities, a new ecosystem was materializing in the &#8220;left field&#8221; of algorithmic efficiency and mature-node mastery.</p><p>The West continues to underestimate the &#8220;singular determination&#8221; of the Chinese model. When forced to choose between quarterly market gains and 15-year strategic positioning, Beijing chooses the latter every time. The West is optimizing for the next earnings call; China is optimizing for the next epoch.</p><p>The real question isn&#8217;t whether export controls will delay a Chinese LLM by six months. It&#8217;s whether American executives, by begging for government protection, are trading temporary relief for permanent displacement. In clinging to the current hardware paradigm, they are ensuring their irrelevance in the next one.</p><p>The semiconductor race is no longer about who makes the fastest chips today. It&#8217;s about who builds the independent, resilient ecosystem that defines &#8220;fast&#8221; tomorrow. By attempting to contain China, the U.S. has inadvertently provided the ultimate stimulus package for Chinese innovation &#8211; ensuring China is no longer constrained by foreign technology, or foreign goodwill.</p><p>Amodei stood on that Davos stage thinking he was defending the American technology lead in AI.
In reality, he was advocating for the very conditions that will ensure American obsolescence.</p><h1>Read more here - follow-up article</h1><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;b38b82b4-525c-4f6d-bcc0-32923c57d94a&quot;,&quot;caption&quot;:&quot;In my previous article &#8220;The Export Control Trap&#8221;, I argued US export controls were inadvertently funding China&#8217;s next computing paradigm. Within days of publication, new market data emerged that doesn&#8217;t just validate this thesis &#8211; rather it suggests the transformation is happening faster and broader than what even earlier pessimisti&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Post-Script: The Great RISC-V Secession&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:58943547,&quot;name&quot;:&quot;Dennis Lindwall&quot;,&quot;bio&quot;:&quot;Cybersecurity, fintech &amp; digital risk &#8594; global patterns reshaping strategy &amp; resilience. 
No hype, just passion.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/427cdd1e-621f-4390-86ed-13d4742e1e5d_500x500.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-01-29T18:27:40.351Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!mT1S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce916b0d-b75d-4643-8bf1-89dc08309ac2_1024x608.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.tekk-talk.com/p/post-script-the-great-risc-v-secession&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:186219985,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:1,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1784117,&quot;publication_name&quot;:&quot;TEKK Talk&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!W9ib!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ef2f0e7-7c57-4de7-a3a3-cabc82cad423_639x639.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>https://techcrunch.com/2026/01/20/anthropics-ceo-stuns-davos-with-nvidia-criticism/</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>https://cset.georgetown.edu/publication/no-permits-no-fabs/</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" 
href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>The CSET report (https://cset.georgetown.edu/publication/no-permits-no-fabs/) gives the numbers but points out that pre-construction work and permitting are not part of its study: the report covers the period from breaking ground to starting production. The phases that precede construction are complicated and wrapped in red tape, as US data lays bare: average construction time from 1990 to 2010 was 665 days, and over the following decade it increased by 38% to 918 days.</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Large fabs can consume as much electricity in one year as 50,000 homes [https://www.mckinsey.com/~/media/mckinsey/dotcom/client_service/operations/pdfs/bringing_fabenergyefficiency.ashx]</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>And can consume 4.8 million gallons of water per day [https://cwrrr.org/resources/analysis-reviews/8-things-you-should-know-about-water-and-semiconductors/]</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>https://medium.com/@ASMLcompany/a-backgrounder-on-extreme-ultraviolet-euv-lithography-a5fccb8e99f4</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div
class="footnote-content"><p>https://www.reuters.com/technology/sicarrier-says-its-tools-can-help-china-make-advanced-chips-2025-03-27/</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Winning Without Owning the Model]]></title><description><![CDATA[Why context and orchestration matter more than model ownership]]></description><link>https://www.tekk-talk.com/p/winning-without-owning-the-model</link><guid isPermaLink="false">https://www.tekk-talk.com/p/winning-without-owning-the-model</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Fri, 19 Sep 2025 00:12:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rEZf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d384632-cb5f-4333-abf8-e623902cbeba_1536x894.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rEZf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d384632-cb5f-4333-abf8-e623902cbeba_1536x894.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rEZf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d384632-cb5f-4333-abf8-e623902cbeba_1536x894.png 424w, https://substackcdn.com/image/fetch/$s_!rEZf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d384632-cb5f-4333-abf8-e623902cbeba_1536x894.png 848w, https://substackcdn.com/image/fetch/$s_!rEZf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d384632-cb5f-4333-abf8-e623902cbeba_1536x894.png 
1272w, https://substackcdn.com/image/fetch/$s_!rEZf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d384632-cb5f-4333-abf8-e623902cbeba_1536x894.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rEZf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d384632-cb5f-4333-abf8-e623902cbeba_1536x894.png" width="728.0000610351562" height="423.5000355060284" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4d384632-cb5f-4333-abf8-e623902cbeba_1536x894.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:847,&quot;width&quot;:1456,&quot;resizeWidth&quot;:728.0000610351562,&quot;bytes&quot;:2218113,&quot;alt&quot;:&quot;Parallel data streams expanding and forming the shape of a face seen in profile&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.tekk-talk.com/i/173981052?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d384632-cb5f-4333-abf8-e623902cbeba_1536x894.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="Parallel data streams expanding and forming the shape of a face seen in profile" title="Parallel data streams expanding and forming the shape of a face seen in profile" srcset="https://substackcdn.com/image/fetch/$s_!rEZf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d384632-cb5f-4333-abf8-e623902cbeba_1536x894.png 424w, 
https://substackcdn.com/image/fetch/$s_!rEZf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d384632-cb5f-4333-abf8-e623902cbeba_1536x894.png 848w, https://substackcdn.com/image/fetch/$s_!rEZf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d384632-cb5f-4333-abf8-e623902cbeba_1536x894.png 1272w, https://substackcdn.com/image/fetch/$s_!rEZf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d384632-cb5f-4333-abf8-e623902cbeba_1536x894.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>A professor I follow on X, Ethan Mollick<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>, recently made an observation that struck a deep chord with me, one that cuts to the heart of AI adoption as I&#8217;ve observed it in enterprise environments where models are commercialised. Mollick puts it bluntly (and I paraphrase): AI labs, run by coders, keep developing supercool tools for coding while leaving other forms of work stuck with generic chatbots<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>. Unless you own a frontier model, he argues, your ability to build specialized AI for your field is limited<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. Coding shows us what is possible, but it also highlights the imbalance.</p><p>Take software development as an example. Developers now rely on copilots that debug, explain, and even generate entire libraries of code: sharp, precise, and tightly integrated into their workflows. Other professions, by contrast, are still working with more generic assistants: clever, but shallow. To bridge that gap, many companies have tried building custom AI agents. Yet the pace of change is so fast that by the time those agents are operationalised, the major labs&#8217; own platforms are already becoming agentic, exposing the weaknesses of simpler solutions. This &#8220;evolutionary glitch,&#8221; where custom tools risk obsolescence before they are fully deployed, often goes unnoticed by coders, who are themselves sprinting ahead with increasingly powerful copilots.</p><p>So why has coding become such a privileged domain? The reasons are structural, not temporary.
Coders both build the AI systems and use them, creating a tight feedback loop where problems are identified and solved almost instantly. On top of that, code is an unusually rich kind of data: structured, public, and easily testable. Training data is abundant, and outcomes are unambiguous: a line of code either compiles or it doesn&#8217;t, a test suite either passes or fails. Contrast that with law, medicine, or marketing, where the data is private, messy, and the outcomes are often subjective or even contradictory. In those domains, two different answers can both be &#8220;right&#8221; or &#8220;wrong,&#8221; which makes progress slower and far harder to measure.</p><p>This combination has made coding the natural proving ground for frontier labs; that bias is unlikely to fade. The labs are run by people with coding backgrounds, building tools first and foremost for their own work. Each release improves not just the tools themselves but the labs&#8217; ability to build the next generation. It is a compounding advantage: a self-reinforcing loop where coders and labs accelerate together, leaving other domains struggling to keep pace.</p><h1>The New Rules of Strategic Advantage<br>From owning models to owning context</h1><p>For businesses outside of software development, this dynamic creates a series of strategic challenges that require rethinking how competitive advantage is built and how AI tools are implemented in domains that lack the strong feedback loops coders enjoy.</p><p>For years, the central technology debate has been whether to build or buy, but another truth is becoming apparent. With AI evolving so rapidly, organizations' in-house technical teams simply can't develop AI agents fast enough to compete with the evolution of agentic AI. 
While building custom chatbots on APIs may offer short-term benefits and be cheaper than buying, organizations must ask themselves whether their projects can realistically keep pace with the more integrated and capable solutions that agentic AI platforms provide.</p><p>The AI labs' own agentic platforms are evolving too quickly, integrating memory, tools, and project management in ways that internal projects cannot easily replicate. Organizations will increasingly need to adopt those platforms, then focus on how to adapt them to their unique needs. This represents a major shift in the build-or-buy paradigm. The question has instead become: do we prioritize speed and capability, or do we insist on complete control and accept obsolescence? When the 'buy' option is evolving faster than internal development can match, competitive advantage moves from owning the technology to mastering its application.</p><p>This shift is also creating a two-tiered workforce. Coders are being supercharged by their copilots, while non-technical staff often use less powerful generic assistants. This risks creating a divide where technical employees become &#8220;super-agents&#8221; and everyone else remains &#8220;agent-lite,&#8221; at best. For business and technology leaders, the challenge is how to extend the benefits of AI beyond engineering by investing in training, redesigning roles, and embedding AI tools into every function so that the productivity boost is shared across the organization.</p><p>Yet the divide is less about coders versus everyone else, and more about which domains already have mature AI-native tools and which do not. Design, writing, and sales are developing powerful copilots of their own &#8211; often powered by frontier models &#8211; though they evolve at different speeds and with uneven sophistication.
Coding sprinted ahead first, but other knowledge domains are following on their own timelines.</p><p>The pace of adoption is shaped not only by technical feasibility but also by regulation, data sensitivity, and organizational risk tolerance. Highly regulated fields such as healthcare and finance move cautiously because of liability and compliance requirements, even when the technology is ready. By contrast, creative industries and marketing face fewer structural constraints and can experiment more freely, accelerating the emergence of domain-specific copilots. The divide, therefore, reflects not just technical maturity but the institutional barriers that determine how quickly AI can be woven into real-world workflows.</p><p>From my vantage point in cybersecurity, this divergence is especially critical. Here, AI is already accelerating both attack and defence. Coders with access to advanced copilots can rapidly probe systems, automate exploits, or generate polymorphic malware. Meanwhile, defenders can use the same tools to triage alerts, automate incident response, or hunt for vulnerabilities in ways that were previously impossible. The asymmetry emerges quickly: those with access to specialized copilots operate at an entirely different speed and scale than those relying on generic assistants. For business leaders, this is a preview of what may happen across many other domains if the gap between &#8220;super-agents&#8221; and &#8220;agent-lite&#8221; workers is left unaddressed.</p><p>The question of data sovereignty and intellectual property becomes even more critical in this new paradigm. To be effective, agentic AI tools need deep access to a company's internal knowledge: documents, emails, databases, even strategy memos. This creates an uncomfortable dependency where your most valuable data becomes the training ground for tools you don't control. How much of this knowledge can safely be exposed to third-party platforms? 
What protections are needed to ensure that proprietary information does not become part of someone else's ecosystem? For many companies, the answer will determine whether AI becomes a strategic advantage or a dangerous liability.</p><p>In practice, however, the question is not purely build or buy, but how to hybridize. Many organizations are already fine-tuning open-source models, layering RAG pipelines, and assembling custom agents on top of frontier APIs. The real challenge is deciding how much of the AI stack to customize versus consume as-a-service. Competitive advantage may rest not only in data, workflows, agility, and trust, but also in mastering this hybrid landscape.</p><p>This reality reframes the build-vs-buy dilemma even further. If only a handful of labs can build frontier models, does that mean true innovation is out of reach for everyone else? Not entirely. While companies may not be able to train their own frontier models, they can still innovate in ways that matter. The competitive game moves away from model building and toward something more fundamental: context building.</p><h1>The New Locus of Advantage: Context and Orchestration</h1><p>In Michael Porter&#8217;s terms, strategy has always been about the sources of sustainable advantage. In the age of agentic AI, owning the model is no longer that source. The new locus of strategic advantage is context: the data you control, the workflows you own, the agility you cultivate, the trust you build, and the way you orchestrate it all together. When frontier models are out of reach, competitive advantage shifts to these five pillars that define the &#8220;last mile&#8221; of AI adoption. Each is inseparable from a company&#8217;s identity and operations, and none can simply be outsourced to a lab.</p><p><strong>Proprietary Data.</strong> It is not just about having unique data, but having the highest-quality, most-structured, most-contextual data.
This data becomes a company's non-replicable moat, as it creates the most valuable fine-tuning and Retrieval-Augmented Generation (RAG) applications. The companies that win will be those whose data creates unique insights that no competitor can replicate, regardless of which frontier model they're using.</p><p><strong>Workflow Mastery.</strong> This involves redesigning operations so they are natively AI-driven, not just bolted onto old processes. It means creating new forms of human-AI collaboration and developing organizational muscle memory for AI integration that competitors cannot easily copy. The advantage goes to companies that discover optimal divisions of labour between humans and AI agents, creating workflows that become increasingly difficult to replicate.</p><p><strong>Speed and Agility.</strong> If everyone has access to similar foundational models, the advantage goes to whoever can deploy, test, and iterate fastest. This favours companies with flat hierarchies and fast approval processes over those weighed down by legacy structures. In a world where AI capabilities evolve monthly, the ability to rapidly identify, integrate, and optimize new tools becomes a sustainable competitive advantage.</p><p><strong>Trust and Human Capital.</strong> The ability to deploy AI in ways that are secure, compliant, and explainable &#8211; particularly in regulated industries &#8211; will be a major differentiator. This must be paired with building a workforce where every employee is an effective AI operator, not just the engineers. Companies that can extend AI capabilities across their entire organization while maintaining security and compliance will have advantages that pure technology cannot provide.</p><p><strong>Orchestration. </strong>Alongside these four pillars lies another layer of advantage: how effectively an organization orchestrates its AI stack. 
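</p><p>To make the &#8220;assemble&#8221; pattern concrete, here is a minimal sketch of the hybrid stack described above: a custom retrieval layer over proprietary documents feeding grounded context to a frontier model. The corpus, the lexical-overlap scoring, and the call_model stub are illustrative assumptions, not any vendor&#8217;s actual API; a production pipeline would use embeddings and a real hosted-model endpoint.</p>

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Everything here is illustrative: the corpus, the lexical-overlap
# scoring, and call_model (a stub standing in for a hosted
# frontier-model API call).

def score(query: str, doc: str) -> float:
    """Crude relevance: fraction of query words that appear in the doc."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k most relevant documents from the private corpus."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, context_docs: list[str]) -> str:
    """Ground the model's answer in the retrieved proprietary context."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

def call_model(prompt: str) -> str:
    """Stub for a frontier-model API call (assumption, not a real SDK)."""
    return f"[model response grounded in {prompt.count('- ')} context snippets]"

# The proprietary corpus is the moat; the model call is interchangeable.
corpus = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The incident response runbook requires out-of-band identity verification.",
    "Quarterly revenue figures are stored in the finance data warehouse.",
]

query = "What does the refund policy allow?"
answer = call_model(build_prompt(query, retrieve(query, corpus)))
```

<p>The point of the sketch is where the leverage sits: the model call is a commodity that can be swapped out, while the corpus and the retrieval logic &#8211; the context &#8211; are what the organization actually owns.</p><p>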
Few will fully &#8216;build,&#8217; and fewer still can afford to only &#8216;buy.&#8217; The reality is almost always hybrid: fine-tuning open-source models, integrating APIs, layering retrieval-augmented pipelines, and constructing custom agents on top of frontier platforms. In this light, the competitive question is no longer purely &#8216;build versus buy,&#8217; but rather &#8216;assemble versus stagnate.&#8217;</p><p>But orchestration itself is not without risk. Combining open-source models, proprietary APIs, and custom agents can create complexity that overwhelms organizations without the right technical maturity. Poorly managed, this hybrid approach can fragment systems, inflate costs, and undermine performance, potentially leaving companies worse off than those that standardize on a single unified platform. Orchestration offers outsized rewards, but it demands governance, coordination, and a sober assessment of organizational limits. Done well, it becomes a multiplier; done poorly, it becomes a liability.</p><p>Taken together, these five pillars of context &#8211; data, workflows, speed, trust, and orchestration &#8211; represent where sustainable competitive advantage now lies. They require deep organizational commitment and can&#8217;t be easily copied or commoditized.</p><h1>The Broader Locus of Risk: Society and Governance</h1><p>While companies can adapt to this new competitive landscape, the concentration of advanced AI capabilities in a handful of labs raises broader questions that extend far beyond any single organization&#8217;s strategy. The structural bias toward coding reflects priorities that could have profound societal implications. 
These challenges play out across multiple levels: from ecosystem fragility, to geopolitical and corporate inequality, to questions of governance and, ultimately, to the future of human work and autonomy.</p><p><strong>The Problem of &#8220;AI Monoculture.&#8221; (Ecosystem-level fragility)</strong><br>When a handful of companies control the foundational models that power the world&#8217;s most advanced tools, society becomes vulnerable to an &#8220;AI monoculture.&#8221; This is similar to relying on a single crop, which can be wiped out by a single pest. If a single lab&#8217;s model has a bias, a flaw, or a &#8220;hallucination,&#8221; that problem could propagate across countless industries simultaneously. How do we ensure diversity and resilience in the AI ecosystem? Open-source efforts such as LLaMA or Mistral provide an important hedge against overconcentration. They can serve as fallback options and help diversify the ecosystem. But it would be a mistake to see them as full counterweights. The true moat for frontier labs lies less in raw model capability and more in the ecosystems they control. Platforms that integrate deeply into productivity suites, search engines, or proprietary plugin architectures create powerful lock-in effects. Once a company&#8217;s workflows depend on these integrations, the switching costs become prohibitive, even if open-source or alternative models reach comparable performance. 
For this reason, open-source models are likely to play important but limited roles, such as filling niches, enabling experimentation, or serving as safeguards, rather than replacing frontier labs as the primary engines of business adoption.</p><p><strong>The Widening &#8220;Intelligence Asymmetry.&#8221; (Geopolitical and corporate inequality)</strong><br>The classic information asymmetry in corporate governance, where management had better information than shareholders, is now evolving into a much deeper &#8220;intelligence asymmetry.&#8221; As some companies and countries gain access to more powerful, agentic AI tools than others, what are the ethical and economic implications? Will this deepen existing inequalities between the &#8220;AI-haves&#8221; and &#8220;AI-have-nots&#8221;?</p><p><strong>Regulation and Accountability. (Institutional governance)</strong><br>If an agentic AI makes a mistake that leads to a financial loss, a medical misdiagnosis, or a legal error, who is responsible? The company that used the tool? The developer of the frontier model? The human who supervised the agent? The increasingly complex and multi-step nature of agentic AI makes it difficult to assign clear accountability. This raises critical questions for regulators and legal systems that are ill-equipped to handle this new paradigm. It also underscores why owning the context (the data, workflows, governance, and human oversight surrounding AI use) matters as much as the technology itself. Organizations that fail to define and control this context risk not only poor outcomes but also accountability and explainability gaps they will struggle to defend.</p><p><strong>The Future of Work and Human Autonomy. (Individual human impact)</strong><br>If the tools for human-AI collaboration remain generic for most fields while becoming highly specialized for coding, what does this mean for the future of non-technical professions?
Will the &#8220;soft skills&#8221; of strategy, empathy, and creativity be sufficient to thrive, or will they too require specialized AI tools that may not be developed for years to come?</p><h1>Conclusion</h1><p>Mollick is right that coding has enjoyed a head start, and this advantage appears to be structural rather than temporary. But the mistake for businesses would be to chase the labs by trying to build frontier models internally, only to replicate the advantage that coders already enjoy today. The real opportunity is to recognize where advantage truly lies: in the five pillars of competitive strength &#8211; the data you control, the workflows you own, the speed and agility you cultivate, the trust and human capital you build, and the way you orchestrate the AI stack itself. Together these define the context in which models become useful.</p><p>In this transition toward agentic AI, you do not need to own the model to win; you need to own the context &#8211; and that means mastering the discipline of orchestration. Done well, orchestration amplifies the other pillars; done poorly, it becomes the weak link in the chain. Crucially, owning the context is also the foundation for accountability. Without it, organizations risk deploying powerful AI systems they cannot explain, defend, or govern.</p><p>The question I see most relevant for leaders, in both business and society, is whether we can build this context in ways that create value without reinforcing lock-in, widening inequalities, or introducing new vulnerabilities. 
The answer will determine not just who wins in the marketplace, but how AI shapes the future of resilience, innovation, responsibility, and human agency itself.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>On X under @emollick but also blogs here (worth subscribing): https://www.oneusefulthing.org/</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>https://x.com/emollick/status/1967704853171638494</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>https://x.com/emollick/status/1967706150218174958</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[The Security Horizon]]></title><description><![CDATA[Seven Obscured Patterns in Recent UK Cyber Incidents]]></description><link>https://www.tekk-talk.com/p/the-security-horizon</link><guid isPermaLink="false">https://www.tekk-talk.com/p/the-security-horizon</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Thu, 11 Sep 2025 16:16:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!XLvL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19732a67-3214-4ece-94ab-afc6fe2cad17_1535x599.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!XLvL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19732a67-3214-4ece-94ab-afc6fe2cad17_1535x599.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!XLvL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19732a67-3214-4ece-94ab-afc6fe2cad17_1535x599.png 424w, https://substackcdn.com/image/fetch/$s_!XLvL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19732a67-3214-4ece-94ab-afc6fe2cad17_1535x599.png 848w, https://substackcdn.com/image/fetch/$s_!XLvL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19732a67-3214-4ece-94ab-afc6fe2cad17_1535x599.png 1272w, https://substackcdn.com/image/fetch/$s_!XLvL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19732a67-3214-4ece-94ab-afc6fe2cad17_1535x599.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!XLvL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19732a67-3214-4ece-94ab-afc6fe2cad17_1535x599.png" width="1456" height="568" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/19732a67-3214-4ece-94ab-afc6fe2cad17_1535x599.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:568,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1722159,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.tekk-talk.com/i/173357949?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19732a67-3214-4ece-94ab-afc6fe2cad17_1535x599.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!XLvL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19732a67-3214-4ece-94ab-afc6fe2cad17_1535x599.png 424w, https://substackcdn.com/image/fetch/$s_!XLvL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19732a67-3214-4ece-94ab-afc6fe2cad17_1535x599.png 848w, https://substackcdn.com/image/fetch/$s_!XLvL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19732a67-3214-4ece-94ab-afc6fe2cad17_1535x599.png 1272w, https://substackcdn.com/image/fetch/$s_!XLvL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19732a67-3214-4ece-94ab-afc6fe2cad17_1535x599.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>While company executives reassured stakeholders with familiar refrains of 'swift action' and 'no card data lost' throughout 2025's major breaches, the technical reality being managed several layers below told a different story. The gap between public messaging and operational truth has never been wider, and that gap is where the next generation of cyber risk is taking root. Let&#8217;s discuss:</p><p>The first half of 2025 marked a significant evolution in the cybersecurity threat landscape. 
Notwithstanding the profound evolution brought about by AI (some might argue <a href="https://www.tekk-talk.com/p/the-new-battlefield-ai-in-cyber-attacks">&#8216;revolution&#8217; &#8211; as discussed before</a>), some of the UK's most recognisable organisations, ranging from supermarkets to luxury retail, automotive, and telecommunications, have faced highly public, high-impact cyberattacks.</p><p>News headlines, as usual, focused on outages, customer disruption, and speculation about ransom demands, while public statements leaned on standardised scripts: "swift action," "no card data lost," "services being restored." Yet the operational impacts ran far deeper, carrying consequences not only in direct financial losses but also in reputational damage and longer-term strategic erosion.</p><p>Peeling back the layers to expose the underlying (in)security trends, a more consequential story emerges. These incidents were not merely uniform ransomware repeats as seen in years past. Instead, they reveal threat actors testing new approaches, defenders making unorthodox choices &#8211; some wise, some less so &#8211; and systemic weak points being exploited in ways that boards and CISOs cannot afford to ignore. Yet the familiar security theatre, with its focus on reassurance and compliance messaging, obscures these deeper patterns that should be driving strategic cybersecurity decisions.</p><p>If incident response is instrumental to the theatre, meaning reactive processes modelled and baselined on established playbooks and past experiences, then these events are the cracks in the stage. They show us where the script is failing and where the performance lacks authenticity.</p><p>The reflections that follow are drawn from a comparative analysis of five high-profile UK incidents in 2025: Marks &amp; Spencer, the Co-operative Group, Harrods, Jaguar Land Rover, and Colt Technology Services. 
Each was different in industry, scale, and impact, but together they provide a useful cross-section of how contemporary threat actors operate, where defenders are pressured into new decisions, and what systemic weaknesses are repeatedly exposed. The aim here is not to retell headlines, re-analyse attack timelines, or speculate on attribution, but to surface the less visible trends that cut across cases: the real signals that matter for CISOs, boards, and risk leaders setting strategy beyond the current incident-response cycle.</p><p>Together, they map the security horizon: the terrain on which the next phase of cyber risk will play out.</p><h3>1. Identity resets are now critical infrastructure</h3><p>In three of the retail cases, the fulcrum was not a sophisticated exploit but a simple process: password resets and MFA method changes handled by help desks. Attackers socially engineered help-desk staff to reset credentials or remove MFA devices, and with that, the entire defensive stack was bypassed. This again highlights the obvious: humans remain a weak link. The intention is always benign: reduce employee frustration, avoid IT becoming a hindrance to operations, reduce support friction. But the consequences are far-reaching. The challenge, as so often in cybersecurity, lies in identity management.</p><p>We are used to thinking of identity as a control layer. It is time to accept that identity reset workflows are the control plane. They carry the same systemic weight that firewalls once did. Yet in many enterprises, these flows are delegated to outsourcers, measured on call-handling time, and protected with little more than caller-ID heuristics. Playbooks that can be bypassed in the interest of &#8220;smoothing out customer experience&#8221;.</p><p>The strategic horizon demands a reclassification: identity resets must be treated as Tier-0 infrastructure, requiring dual approval, out-of-band verification, and continuous audit.</p><h3>2. 
MSPs and software supply chains are today's perimeter</h3><p>We no longer own our entire defensive perimeter (if you can find it &#8211; the traditional perimeter is vanishing<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>). In the Marks &amp; Spencer case, third-party accounts were the suspected vector. Co-op, Harrods, and Jaguar Land Rover all shared a common denominator in a managed service provider. Colt's incident, by contrast, came via an unpatched SharePoint zero-day.</p><p>These two threads &#8211; managed service providers and software dependencies &#8211; are often discussed separately, but they represent the same phenomenon: we have externalised our perimeter. Whether through direct outsourcing or the software supply chain, we have handed trust to entities whose controls and patch cycles we do not govern.</p><p>&#8216;<a href="https://www.tekk-talk.com/p/the-240-billion-cybersecurity-lie">Cyber Essentials&#8217; badges and supplier security questionnaires cannot carry that weight; even SOC reports won&#8217;t meet these standards</a>. Contracts must demand technical parity: the same reset policies, the same telemetry, and the same patch urgency that we would demand of ourselves, encoded in contracts, SLAs, and OKRs. SBOMs and signed builds must become routine, not academic. The perimeter is not where your firewall sits; it is wherever your supplier or your code is weakest.</p><h3>3. Extortion without detonation</h3><p>Traditional ransomware economics relied on encrypting data and offering a decryption key, or on double extortion (demanding payment both for the decryption key and for not selling the stolen data). But in both Co-op and Colt, we saw leverage created without ever triggering encryption. At Co-op, a membership database provided the bargaining chip. 
At Colt, attackers exfiltrated contracts and network diagrams and auctioned them to the highest bidder, indicating a shift in the underlying economic model.</p><p>These incidents demonstrate that detonation is optional. Exfiltrating even a small but leverage-dense dataset is enough to drive extortion. This shifts the impact horizon: we can no longer measure exposure only in terms of encryption events. We must elevate targeted exfiltration to the same category of risk.</p><p>The strategic horizon here is clear: data leakage detection, segmentation of crown-jewel datasets, and rapid detection of unusual export patterns matter more than ever. The next extortion may require only a gigabyte, not a petabyte.</p><h3>4. Criminal brand fluidity creates operational uncertainty</h3><p>DragonForce, Scattered Spider, WarLock: these names and banners blurred across the incidents, and classical delineation is no longer possible. In some cases, the same tools and TTPs appeared under multiple brands. In others, attackers recycled old screenshots to claim responsibility. This brand fluidity is more than a technical footnote; it fundamentally undermines negotiation, insurance claims, and regulatory reporting.</p><p>When attribution is structurally uncertain, critical questions become unanswerable: Who actually holds the encryption key? Who owns the stolen data? Who can enforce deletion? The old phrase applies: "Hackers always lie, but that doesn't mean they're 100% wrong."</p><p>For CISOs, this demands new playbooks that don't assume coherent branding. Always analyse samples of stolen data to establish origin, never discard the possibility that the data leak involved insiders, prepare for re-extortion attempts from different groups claiming the same data, and consider planting canary documents with beaconed strings to detect resale. Pressuring insurers and regulators to acknowledge affiliate fluidity as a systemic uncertainty is equally critical. 
The days of neat attribution categories are ending; aside from possibly helping to clarify broad threat-actor motivation, attribution itself becomes an academic exercise.</p><h3>5. Kill-switches are part of continuity planning</h3><p>Two of the UK incidents stand out not for what attackers did, but for what defenders chose. Harrods immediately restricted internet access across its estate, accepting some payment disruption in order to cut off exfiltration. Jaguar Land Rover initiated a global IT shutdown, halting production to contain suspected identity compromise. Expensive? Yes. Effective at averting longer operational outages, customer disruption, and more profound economic losses? Resoundingly, yes.</p><p>These are noteworthy decisions. Historically, boards have flinched at the idea of self-disruption, preferring to keep systems alive "until we know for sure." But Harrods and JLR show that mature defenders are willing to pull the plug first, if it prevents systemic compromise. JLR's production remains disrupted more than two weeks after the attack was made public (at the time of publishing this article), with material production impacts expected to last until October. Marks &amp; Spencer took 15 weeks to fully restore online services, with online orders completely halted for over six weeks. No surprise that disruption costs money, but when a broader compromise is (inadvertently) allowed through analysis-paralysis, the recovery becomes far more expensive. Some painful decisions need to be taken fast.</p><p>This demands a rethinking of continuity planning: cyber kill-switches are no longer taboo; they must be designed, rehearsed, and integrated into business continuity drills. Continuity with constrained services is preferable to continuity with compromised trust.</p><h3>6. Sector-specific chokepoints are the true attack surface</h3><p>A more profound insight is that these attacks reveal the true prize isn't always the data; it's the choke points. 
This requires defenders and security architects to fundamentally reconsider traditional cybersecurity operational models. What the threat actors chose to disrupt was not random. At Marks &amp; Spencer, it was e-commerce and fulfilment. At Co-op, stock and replenishment. At Harrods, payments. At JLR, manufacturing ERP and MES. At Colt, customer portals and APIs.</p><p>Attackers are mapping and exploiting sector-specific chokepoints: the points where downtime translates most quickly into leverage. Too often, boards and CISOs still talk in terms of &#8220;crown jewels&#8221; or &#8220;critical business services&#8221; (customer data, financials, or overly broadly defined services) while under-investing in the chokepoints that keep the business running. And perhaps what makes these targeted attacks so successful is that business continuity plans are often woefully inadequate to address choke-point disruptions, including even simple supply-chain disruptions.</p><p>The strategic horizon requires threat modelling around your chokepoints, covering not only internal business services and operations but also the dependencies on suppliers and supply chains that enable those services to operate. If you were an adversary, where would you apply pressure? Red teams must be tasked with those scenarios, not generic pen tests &#8211; moving from traditional penetration testing to offensive security.</p><h3>7. Security theatre still blinds boards to structural risk</h3><p>Public communications continue to emphasise what was not lost: &#8220;no card data,&#8221; &#8220;no passwords.&#8221; These statements are technically true, but they obscure what really happened: identity integrity was broken, suppliers represented attack paths, and small data sets provided leverage.</p><p>This theatre is not malicious. It is how organisations protect reputations, reassure customers, and meet regulatory minimums. 
But it also conditions boards to focus on the wrong things. The headlines and dashboards hide the shifts that matter most: the fragility of identity and authentication, the expansion of the de facto perimeter through SaaS and outsourced services, and the reality that extortion no longer requires encryption or ransomware.</p><p>The strategic horizon requires companies to rethink traditional cybersecurity playbooks and link them more profoundly to the company&#8217;s structural resilience. For boards, that means demanding a different class of answers from CISOs:</p><ul><li><p>Are cyber kill-switches designed and rehearsed as part of continuity? What are the business continuity considerations for deliberately enacted production halts?</p></li><li><p>Do suppliers operate under the same access parity and telemetry as in-house staff? How well do contracts empower IT or Cybersecurity to enforce operational compliance or service level agreements with vendors, or even to red-line engagements, up to and including vendor exits?</p></li><li><p>Is our SDLC aligned to SBOM-first assurance and days-level patch cycles for internet-facing assets? Traditional maintenance windows don&#8217;t grow on trees, and although change windows can be forced internally, how aligned are vendors to that same accelerated patch timeline?</p></li><li><p>Do we understand risks in software lineage at ERP/PLM edges, not just in &#8220;crown jewels&#8221;? As experienced by JLR and Co-op: how aligned are the BCPs for disrupting Online Ordering/ERP/PLM/PDM safely, paired with graceful operational fallbacks?</p></li></ul><h3>The Security Horizon</h3><p>With hundreds of incidents reported weekly, it is tempting to treat these as just another round in the endless cycle of breaches and press releases. But the hidden currents reveal something deeper and more profound that must not be brushed aside: strategic insights we cannot afford to overlook. 
The problem is not just in the malware or the ransom note. It lies in how we build, delegate, and measure our defences.</p><p>Identity resets must be elevated to critical infrastructure. Supplier and software controls must be treated as sovereign perimeters. Kill-switches must be normalised in continuity planning. Chokepoints must be mapped and defended as rigorously as financial systems.</p><p>For boardrooms, this shift can begin with a few sharp questions:</p><ol><li><p>Identity: How do we verify that password, secrets (incl. API keys) and MFA resets &#8211; whether handled in-house or by a provider &#8211; are subject to the same level of scrutiny and audit as our financial approvals?</p></li><li><p>Suppliers: What visibility do we have into our MSPs and software vendors, and can we independently verify their security posture rather than relying on badges or contracts?</p></li><li><p>Continuity: If we had to pull a cyber kill-switch tomorrow, could we still operate in a constrained but trusted state, and when was the last time we rehearsed it?</p></li></ol><p>These are not technical questions. They are questions of governance, resilience, and strategy. And the answers will determine whether organisations continue to perform the same play, or begin to re-engineer the stage on which it unfolds.</p><p>If we fail to make these shifts, the theatre will continue: polished statements on the surface, structural weakness underneath. 
The choice before us is whether to keep rehearsing the script or start engineering the stage itself.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>&#8216;we had pressure of people inside the corporation needing to access systems outside, and people outside the corporation network needing to access systems that were being run by the corporation.&#8217; Eventually, &#8216;the perimeter started to become a little bit grey because at some point you are not using systems that are nicely delineated&#8217; (Paul Dorey, interview, 2022). Spencer, M. and Pizio, D. (2023). The de-perimeterisation of information security: the Jericho Forum, zero trust, and narrativity. Social Studies of Science, 54(5), 655-677. https://doi.org/10.1177/03063127231221107</p></div></div>]]></content:encoded></item><item><title><![CDATA[The $240 Billion Cybersecurity Lie]]></title><description><![CDATA[And you and I are part of it.]]></description><link>https://www.tekk-talk.com/p/the-240-billion-cybersecurity-lie</link><guid isPermaLink="false">https://www.tekk-talk.com/p/the-240-billion-cybersecurity-lie</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Thu, 21 Aug 2025 12:50:12 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1693829957352-b498cc36dc2c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNTJ8fGNoYWxsZW5nZXxlbnwwfHx8fDE3NTU3ODA0NzR8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://images.unsplash.com/photo-1693829957352-b498cc36dc2c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNTJ8fGNoYWxsZW5nZXxlbnwwfHx8fDE3NTU3ODA0NzR8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1693829957352-b498cc36dc2c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNTJ8fGNoYWxsZW5nZXxlbnwwfHx8fDE3NTU3ODA0NzR8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1693829957352-b498cc36dc2c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNTJ8fGNoYWxsZW5nZXxlbnwwfHx8fDE3NTU3ODA0NzR8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1693829957352-b498cc36dc2c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNTJ8fGNoYWxsZW5nZXxlbnwwfHx8fDE3NTU3ODA0NzR8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1693829957352-b498cc36dc2c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNTJ8fGNoYWxsZW5nZXxlbnwwfHx8fDE3NTU3ODA0NzR8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1693829957352-b498cc36dc2c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNTJ8fGNoYWxsZW5nZXxlbnwwfHx8fDE3NTU3ODA0NzR8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="4500" height="4500" 
data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1693829957352-b498cc36dc2c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNTJ8fGNoYWxsZW5nZXxlbnwwfHx8fDE3NTU3ODA0NzR8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:4500,&quot;width&quot;:4500,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;a digital image of a person's head and a light bulb&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="a digital image of a person's head and a light bulb" title="a digital image of a person's head and a light bulb" srcset="https://images.unsplash.com/photo-1693829957352-b498cc36dc2c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNTJ8fGNoYWxsZW5nZXxlbnwwfHx8fDE3NTU3ODA0NzR8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1693829957352-b498cc36dc2c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNTJ8fGNoYWxsZW5nZXxlbnwwfHx8fDE3NTU3ODA0NzR8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1693829957352-b498cc36dc2c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNTJ8fGNoYWxsZW5nZXxlbnwwfHx8fDE3NTU3ODA0NzR8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1693829957352-b498cc36dc2c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNTJ8fGNoYWxsZW5nZXxlbnwwfHx8fDE3NTU3ODA0NzR8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div 
class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@ornarin">Anastasiia Ornarin</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p>The cybersecurity industry is perpetuating a $240 billion lie<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>, and you and I are part of it.</p><p>We've been inside the boardrooms, selling business leadership on the virtues of controls and compliance to turn metrics green, focusing on isolated outcomes over holistic processes. That budgets should go to certifications and tooling, not capabilities. 
That risk and control registers lead to resilience. That saying a "quick yes" to engineering changes is more valuable than enabling growth sustainably.</p><p>What's actually happening: We're selling theatre instead of transformation. We're fighting for budgets and resources instead of capabilities that enable business growth.</p><h3><strong>The Theatre in Action</strong></h3><p>Walk into any boardroom and watch the performance unfold:</p><ul><li><p>"We need SOC 2 certification to win this deal" - while ignoring whether the business can actually protect what matters</p></li><li><p>"Our risk register shows we're compliant" - while executives approve risks they fundamentally don't understand</p></li><li><p>"Security prevented 10,000 attacks this month" - a meaningless metric that sounds expensive to maintain</p></li></ul><p>Persisting in treating security as a cost centre and reacting to threats instead of proactively building strategic security doesn't just hold us back; it is a losing game. The winning game belongs to those who start treating security like finance - as a discipline that enables growth through intelligent risk management. Those companies will be eating their competitors' lunch.</p><h3><strong>The Inflection Point</strong></h3><p>What we&#8217;re advocating isn&#8217;t about tweaking security programs. We're at a fundamental inflection point where cybersecurity transforms from cost centre to competitive weapon<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>. It&#8217;s that, or you get left behind entirely<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. 
The consequence is falling into a downward spiral where a lack of strategic foresight leads to greater reliance on outsourced security services and SaaS solutions<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> that become misaligned with the company&#8217;s strategic objectives &#8211; perpetuating cyber as tools and processes rather than strategic assets for business value creation.</p><p>The organizations winning this transformation understand something their competitors don't: <strong>security posture creates genuine business differentiation.</strong> While others buy compliance theatre, they're using security to:</p><ul><li><p>Access regulated markets like healthcare and finance that require demonstrated security maturity</p></li><li><p>Command premium pricing from security-conscious customers</p></li><li><p>Accelerate time-to-market by building security in, not bolting it on</p></li><li><p>Win enterprise contracts that require demonstrated, not documented, security capabilities</p></li></ul><p>This is the same evolution credit risk management underwent decades ago - from "preventing bad loans" to "optimizing risk-return profiles for maximum profitable growth".</p><h3><strong>What Transformation Looks Like</strong></h3><p>Instead of asking "What did we get for our $185M security investment last year?" forward-thinking leaders ask "What market opportunities did our security posture unlock this quarter?"</p><p>Instead of presenting security as "We need $10M to prevent breaches," they frame it as "Our security maturity enables the $50M remote workforce strategy and qualifies us for Fortune 500 supplier requirements our competitors can't meet."</p><p>The frame defines everything. 
And the companies setting the right frame first are building competitive moats while their peers burn budget on security theatre.</p><h3><strong>The Winners and Losers</strong></h3><p><strong>Winners</strong>: Organizations that recognize cybersecurity as a business enabler and treat it with the same strategic rigor as finance or operations. They'll dominate markets while competitors explain breach headlines to customers.</p><p><strong>Losers</strong>: Companies stuck in the compliance and control tick-box mindset, forever chasing the next certification while missing the fundamental business transformation happening around them.</p><p>The $240 billion cybersecurity industry has a choice: continue profiting from confusion and theatre, or demand the business transformation that creates real value. I know where I stand.</p><p>Business leaders have the same choice: keep buying the lie, or recognize that security done right isn't a cost to be minimized - it's a competitive advantage to be maximized.</p><p><strong>The question isn't whether this transformation will happen. 
The question is whether you'll lead it or be left behind by it.</strong></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Gartner forecast that 2026 Cybersecurity spend will be approximately $240 billion (<a href="https://www.gartner.com/en/newsroom/press-releases/2025-07-29-gartner-forecasts-worldwide-end-user-spending-on-information-security-to-total-213-billion-us-dollars-in-2025#:~:text=Worldwide%20end%2Duser%20spending%20on%20information%20security%20is%20projected%20to,2026%20to%20total%20$240%20billion.">Gartner News</a>)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>This invokes a need to reframe our security needs from costs and obligations to enablers and strategic assets (<a href="https://www.tekk-talk.com/p/never-lead-with-cost">Article: Never lead with cost</a>) </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>The future of Cybersecurity is evolving on several planes but changes to technology through generative AI and AI enabled threat actors will deeply exacerbate this trajectory (<a href="https://www.tekk-talk.com/p/the-new-battlefield-ai-in-cyber-attacks">Article: The new battlefield</a>)  </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Organizations struggling with supply chain vulnerabilities and limited internal security capabilities increasingly default to third-party solutions, 
creating deeper dependencies rather than building strategic security assets. (<a href="https://reports.weforum.org/docs/WEF_Global_Cybersecurity_Outlook_2025.pdf">Paper: World Economic Forum 2025 / Accenture</a>: p. 24, 25)</p></div></div>]]></content:encoded></item><item><title><![CDATA[Never Lead with Cost]]></title><description><![CDATA[From Cost Avoidance to Opportunity Creation]]></description><link>https://www.tekk-talk.com/p/never-lead-with-cost</link><guid isPermaLink="false">https://www.tekk-talk.com/p/never-lead-with-cost</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Wed, 11 Jun 2025 22:58:21 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1604948501466-4e9c339b9c24?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cmlza3xlbnwwfHx8fDE3NDk2MDc5MDN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1604948501466-4e9c339b9c24?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cmlza3xlbnwwfHx8fDE3NDk2MDc5MDN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1604948501466-4e9c339b9c24?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cmlza3xlbnwwfHx8fDE3NDk2MDc5MDN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1604948501466-4e9c339b9c24?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cmlza3xlbnwwfHx8fDE3NDk2MDc5MDN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, 
https://images.unsplash.com/photo-1604948501466-4e9c339b9c24?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cmlza3xlbnwwfHx8fDE3NDk2MDc5MDN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1604948501466-4e9c339b9c24?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cmlza3xlbnwwfHx8fDE3NDk2MDc5MDN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1604948501466-4e9c339b9c24?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cmlza3xlbnwwfHx8fDE3NDk2MDc5MDN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="6000" height="4000" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1604948501466-4e9c339b9c24?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cmlza3xlbnwwfHx8fDE3NDk2MDc5MDN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:4000,&quot;width&quot;:6000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;grayscale photo of person holding glass&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="grayscale photo of person holding glass" title="grayscale photo of person holding glass" srcset="https://images.unsplash.com/photo-1604948501466-4e9c339b9c24?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cmlza3xlbnwwfHx8fDE3NDk2MDc5MDN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, 
https://images.unsplash.com/photo-1604948501466-4e9c339b9c24?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cmlza3xlbnwwfHx8fDE3NDk2MDc5MDN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1604948501466-4e9c339b9c24?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cmlza3xlbnwwfHx8fDE3NDk2MDc5MDN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1604948501466-4e9c339b9c24?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8cmlza3xlbnwwfHx8fDE3NDk2MDc5MDN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Photo by GR Stocks on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p>I remember sitting across from a COO who asked the question that haunts every CISO: "What exactly did we get for our $185M cybersecurity investment this year?" The implicit challenge was clear &#8211; no breach had occurred, so perhaps we were overfunded, or worse, hyping the risk to justify our existence.</p><p>This conversation encapsulates cybersecurity's fundamental framing problem. Unlike credit risk teams, who evolved from "preventing bad loans" to "optimizing risk-return profiles for maximum profitable lending," cybersecurity remains trapped in a defensive mindset. We're stuck selling "cost avoidance" in a world where CFOs demand business value.</p><p>Economic theory suggests rational decision-makers calculate opportunity costs, but recent behavioural research reveals a more complex reality. In a landmark study, "Opportunity Cost Neglect," Frederick et al. demonstrated across six experiments that consumers systematically fail to consider alternative uses of money until explicitly prompted, and even very subtle prompts are enough.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> Even when forced to deliberate extensively about purchase decisions, participants rarely spontaneously generated thoughts about outside goods they could buy instead until they were given that prompt. A prompt can be as simple as the mere mention of cost; when cost comes first, it sets a &#8220;frame&#8221; that encapsulates everything discussed from that point onwards. 
Professor Frederick says:</p><blockquote><p>&#8220;A widely accepted precept in research on decision making is people&#8217;s passive acceptance of the &#8220;frame,&#8221; or characterization of the problem, they&#8217;re provided. This confers power on those who offer a frame. Decisions about whether some expenditure is &#8220;worth it&#8221; hinge on what the purchase is seen as displacing. Take the extra time to define that, and you can change the way your customers view your value proposition.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p></blockquote><p>This has profound implications for cybersecurity positioning. We've allowed ourselves to be framed as either:</p><ul><li><p>A "keeping-the-CEO-out-of-jail" expense for high-stakes industries</p></li><li><p>Or a compliance checkbox for everyone else</p></li></ul><p>But this framing is fundamentally wrong &#8211; and it's leaving massive competitive advantages on the table.</p><h3><strong>All other things aren't equal</strong></h3><p>The assumption that cybersecurity is purely a cost centre ignores a critical reality: security posture creates genuine business differentiation. When cybersecurity leaders position their investments solely as risk mitigation, they're competing on the wrong battlefield entirely.</p><p>Traditional cybersecurity pitches compare one security solution to another, usually emphasizing lower costs or better threat detection. This approach assumes that "keeping bad things from happening" is the primary value proposition. But this assumption is flawed because all security investments aren't equal &#8211; they enable vastly different business outcomes.</p><p>In high-stakes industries like investment banking, robust cybersecurity isn't just about avoiding breaches &#8211; it's table stakes for customer trust. 
But even here, security teams fail to articulate how their investments enable premium pricing, faster regulatory approvals, or access to security-conscious institutional clients.</p><p>The key to positioning cybersecurity investments that deliver genuine business value is to differentiate the inherent opportunities that security posture creates from the "apparent" cost-avoidance benefits. It's a perfectly valid strategy for cybersecurity teams to use opportunity costs as persuasive arguments &#8211; but to do that effectively, you need to understand not just your technical capabilities, but also what your stakeholders value most and what drives their decision-making process.</p><p>Consider the untapped competitive advantages that strong cybersecurity enables:</p><ul><li><p><strong>Revenue acceleration</strong>: Faster time-to-market when security is built-in rather than bolted-on</p></li><li><p><strong>Market expansion</strong>: Access to security-sensitive customers and markets that competitors can't serve</p></li><li><p><strong>Premium pricing</strong>: Privacy-conscious customers pay more for demonstrated data protection</p></li><li><p><strong>Operational efficiency</strong>: Reduced insurance costs, better contract terms, streamlined compliance</p></li><li><p><strong>Brand differentiation</strong>: Security as a competitive moat rather than a commodity requirement</p></li></ul><h3><strong>Framing your security value proposition</strong></h3><p>As I noted earlier, research into decision-making psychology highlights the importance of "framing" &#8211; how we characterize the problem our stakeholder faces and present our solution within that context. 
If you can effectively set the frame for business leaders, you can position your cybersecurity investments so they stand out as growth enablers rather than cost centres.</p><p>Professor Frederick's HBR article "<a href="https://hbr.org/2011/01/column-the-persuasive-power-of-opportunity-costs">The Persuasive Power of Opportunity Costs</a>" provides a masterclass in reframing expensive purchases. His analysis of De Beers' diamond marketing campaign shows how they repositioned expensive jewellery from a "luxury expense" to a simple choice: buy the diamond now or "redo the kitchen next year." By framing both options as expensive but inevitable purchases, De Beers normalized the luxury item and suggested the opportunity cost could simply be deferred. It's all in the "frame." This wasn't just clever advertising &#8211; it was strategic reframing backed by rigorous psychological research<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. Frederick's experiments reveal why traditional cybersecurity pitches often fail: when stakeholders are prompted to consider opportunity costs, they consistently shift toward cheaper options &#8211; an effect that held even with subtle cost-focused language. This research explains why leading with security expenses ('We need $10M for...') psychologically primes decision-makers to consider alternatives rather than approve investments. The solution is reframing security as business enablement before costs are even discussed &#8211; in other words, setting the frame before the cost conversation begins.</p><p>Cybersecurity and IT leaders should apply these principles. 
Instead of presenting security investments as necessary evils, consider these reframes:</p><ul><li><p><strong>Traditional frame</strong>: "We need $10M for an endpoint security solution that detects and reduces malware infections by 80%" <strong>Opportunity cost frame</strong>: "Robust endpoint security enables our hybrid workforce strategy, avoiding the $50M cost of additional office space to support business expansion". <br>The endpoint security frame here is particularly compelling because it connects cybersecurity directly to a massive operational expense (real estate) that every executive understands. A $10M investment suddenly looks like a bargain against $50M in avoided office costs &#8211; and we're not just saving money, we're enabling a more flexible business model.</p></li><li><p><strong>Traditional frame</strong>: "We need advanced threat detection to identify APTs faster" <strong>Opportunity cost frame</strong>: "Proactive threat hunting positions us as the secure alternative to competitors dealing with public breaches, enabling premium pricing with security-conscious customers". <br>In this frame, we&#8217;re not just preventing losses; we're creating a competitive moat that justifies higher margins. When competitors are dealing with breach headlines and customer trust issues, our proactive security becomes a sales tool.</p></li><li><p><strong>Traditional Frame: </strong>"Our SOC prevents 10,000 attacks monthly"<strong> Opportunity Frame: </strong>"Our security posture qualifies us as a Tier 1 supplier for Fortune 500 companies, opening doors to contracts our competitors can't even bid on".<br>The traditional frame's "10,000 attacks prevented" is actually meaningless to most executives. Sure, it looks good in a presentation deck but they don't know if that's good or bad, how it compares to industry benchmarks, or what business value it represents. It's just a big number that feels expensive to maintain. 
Here the opportunity frame immediately connects to what business leaders understand: market access and competitive advantage.</p><p>For many listed companies and governments, supplier qualification is a binary gate &#8211; you either meet the requirements or you don&#8217;t. No amount of price competition or product superiority matters if you don&#8217;t have a seat at the bidding table.</p></li></ul><p>These examples also highlight how cybersecurity can establish what economists call "network effects" &#8211; the value of that investment increases exponentially as more prestigious clients demand higher security standards.</p><p>So here a SOC investment doesn't just protect against attacks; it becomes a credential that opens progressively more valuable market segments. And the endpoint security frame implies that we can hire from a larger, remote workforce and attract more diverse, skilled staff, and so on.</p><p>It's the difference between saying "We're really good at defence" versus "We have the ear of the CEO." One sounds like a cost centre protecting what you already have; the other sounds like a profit centre unlocking what you could have.</p><p>These examples collectively show that the most powerful reframes connect cybersecurity capabilities directly to revenue generation, market expansion, or competitive differentiation &#8211; making the business case self-evident rather than requiring complex risk calculations.</p><p>Briefly reverting to Frederick's research, I need to point out that what constitutes a valid "frame" will differ dramatically between stakeholders. Your opportunity cost arguments will need to be adapted to each audience's specific priorities and psychology. 
The key thing to think about is how you get your stakeholders to stop thinking of security as a cost and start focusing on how it enables and builds business capabilities, revenue and differentiation.</p><h3><strong>Differential stakeholder engagement</strong></h3><p>So why do cybersecurity teams struggle to get buy-in while marketing teams with similar budgets sail through approval processes? Because marketing teams understand that different stakeholders require different value propositions for the same investment.</p><p>Consider how cybersecurity investments impact different business stakeholders:</p><p><strong>For the CEO</strong>: Security posture enables strategic opportunities</p><ul><li><p>"Our security maturity allows us to pursue acquisition targets that competitors can't due diligence properly"</p></li><li><p>"We can enter regulated markets that require demonstrated data protection capabilities"</p></li><li><p>"Security becomes a competitive moat &#8211; customers choose us because they trust us with sensitive data"</p></li></ul><p><strong>For the CFO</strong>: Security investments optimize financial performance</p><ul><li><p>"Strong cybersecurity posture reduces insurance premiums by 40% and enables better contract terms"</p></li><li><p>"Security-by-design reduces compliance costs across multiple frameworks"</p></li><li><p>"Our security posture commands premium pricing &#8211; we can charge 15% more than competitors for the same services"</p></li></ul><p><strong>For Sales Leaders</strong>: Security enables revenue growth</p><ul><li><p>"We can pursue enterprise customers who require SOC 2 Type II certification"</p></li><li><p>"Security certifications open government contracting opportunities worth $200M annually"</p></li><li><p>"While competitors deal with breach recovery, we're winning their security-conscious customers"</p></li></ul><p><strong>For Product Teams</strong>: Security accelerates innovation</p><ul><li><p>"Embedded security means faster 
time-to-market &#8211; no lengthy security reviews and simplified SBOMs"</p></li><li><p>"Privacy-by-design features become product differentiators"</p></li><li><p>"Security APIs enable integration with enterprise customer environments"</p></li></ul><p>Research validates that this multi-stakeholder approach is crucial. Frederick's studies found that individual differences in spending attitudes significantly affect how carefully you must frame initial presentations. Cost-conscious decision-makers naturally scrutinize all expenditures, making it essential to lead with opportunity value rather than costs. Growth-focused stakeholders, while more willing to invest, need explicit connection between security capabilities and business outcomes before any cost discussion begins.</p><p>This maps directly to cybersecurity contexts: heavily regulated industries with cost-conscious leadership require immediate opportunity framing to prevent cost-comparison thinking, while other sectors need clear business enablement messaging to avoid dismissing security as routine IT expense. In both cases, establishing the value frame before discussing costs is crucial to avoiding Frederick's documented preference shift toward cheaper alternatives.</p><p>Consider how De Beers crafted completely different messages for different audiences selling the same product. For men, they used direct, practical language: "It is never a good idea to keep a woman waiting", "There's never been a better time to invest in futures," and "This Christmas there will be more than three wise men.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>&#8221;  These messages positioned diamond purchases as smart investments and practical decisions.</p><p>For women, the campaign was far more subtle, aimed at redefining their relationship with diamonds entirely. 
Rather than positioning diamonds as gifts received, they framed them as personal choices: "It beckons me as I pass the store window&#8230;&#8221;, and &#8220;&#8230;I'm not usually that kind of girl, I take it home<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>"</p><p>This dual approach of framing the purchase differently for the purchaser and the recipient took advantage of their (subtly prepared) perceptions of what the ring meant &#8211; a sound investment, an investment into their shared love, and an expression of personal empowerment rather than dependence. The frame defines how they rationalise the ring, ignoring or justifying the opportunity cost by linking it to emotions or suggestive logic &#8211; even if that implies cognitive dissonance.</p><p>For cybersecurity investments to gain traction, they must be perceived as enabling rather than adding to already constrained business budgets, and this frame must be set before we bring up costs. This requires framing security not as a grudge purchase, but as a strategic enabler of business goals that stakeholders already prioritize.</p><h3><strong>The path forward</strong></h3><p>The cybersecurity industry stands at an inflection point similar to where credit risk management was decades ago. Credit teams evolved from "preventing bad loans" to "optimizing risk-return profiles for maximum profitable growth." They developed sophisticated models that quantified previously invisible value creation. But the risk quantification models that I see evolving for cybersecurity still have the same cost-focus flaw. They&#8217;ll just put a more precise dollar value on the risks we mitigate, on the threats that we eliminate and the DDoS attacks we negated. These quantification models still won&#8217;t show the business value or the opportunities that cybersecurity stands to create. 
Unless the risk quantification model output is carefully used in budget meetings and investment discussions, these models will simply highlight the opportunity costs to the board before you&#8217;ve had the opportunity to set the frame.</p><p>That said, cybersecurity will eventually see the evolution credit risk experienced. Credit risk managers learned that better risk understanding didn't mean giving fewer loans to objectively safer customers &#8211; it meant giving <strong>more loans, more profitably,</strong> through accurate pricing of risk. <strong>That was the value creation from better analysis, not cost avoidance.</strong></p><p>Organizations that master this reframing will discover something remarkable: cybersecurity transforms from a budget line item into a competitive weapon. While competitors treat security as a compliance checkbox, forward-thinking companies will use security posture to win customers, enter new markets, and command premium pricing.</p><p>The opportunity is enormous &#8211; but it requires honest self-reflection about how we currently position our value and what we might be leaving on the table. And it will require that CISOs become more business-focused and commercially minded.</p><h3><strong>Questions for reflection:</strong></h3><ul><li><p>Consider your last three cybersecurity investment proposals. What frame did you use to present them? Were you asking stakeholders to accept costs or to recognize opportunities? Note: Risk reductions are not opportunities.</p></li><li><p>What business objectives is your CEO most focused on this quarter? How might your security capabilities enable, accelerate, or differentiate those initiatives?</p></li><li><p>If a competitor suffered a major breach tomorrow, what specific business advantages would your security posture create? How would you communicate those advantages to prospects and customers? 
And what would something like that be worth?</p></li><li><p>Which of your stakeholders are naturally cost-conscious versus growth-focused (i.e. more willing to invest but may need coaching to see security's business enabling potential)? Cost-conscious stakeholders need immediate opportunity-focused messaging to prevent cost-comparison thinking, while growth-focused stakeholders need explicit connection between security capabilities and business outcomes before any cost discussion begins.</p></li><li><p>What market opportunities, customer segments, or premium pricing strategies could your current security investments unlock that you haven't explicitly articulated to business leadership?</p></li></ul><p>The answer to the $185M question shouldn't focus on what we avoided, what risks we mitigated, or how much more resilient we are, unless those <em><strong>are</strong></em> the value-creating differentiators. The answer should focus on the opportunities that create value so that we don&#8217;t end up in a situation where opportunity costs become the central discussion point. </p><p>We need to make the potential $40M cost-saving from a $10M cyber investment come across as great value. The organizations that answer this question most compellingly will transform cybersecurity from necessary cost to competitive advantage.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Frederick, Shane; Novemsky, Nathan; Wang, Jing; Dhar, Ravi; and Nowlis, Stephen. "Opportunity Cost Neglect." Journal of Consumer Research, 36(4), 2009: 553-561. 
<a href="https://doi.org/10.1086/599764">https://doi.org/10.1086/599764</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Frederick, Shane. "<a href="https://hbr.org/2011/01/column-the-persuasive-power-of-opportunity-costs">The Persuasive Power of Opportunity Costs.</a>" Harvard Business Review, January-February 2011.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Remember that De Beers in the 1930s invented the notion that an engagement ring should cost one month's salary, later evolving to two months' and eventually three months' salary, all based on Thorstein Veblen's 1924 theory of 'conspicuous consumption'. The frame presented to men was that the diamond was an investment in the love and happiness that he shared with his fianc&#233;e or wife &#8211; and because that love is priceless, the price of the diamond ring is rendered not really expensive at all. A pre-conditioning of the mind to the purchase, made months in advance through suggestive marketing.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>These slogans demonstrate how De Beers positioned expensive jewellery purchases as rational, time-sensitive investment decisions rather than emotional luxury purchases &#8211; successfully appealing to male decision-making psychology even though diamonds are notoriously poor financial investments. 
<a href="https://www.theatlantic.com/magazine/archive/1982/02/have-you-ever-tried-to-sell-a-diamond/304575/">Edward Jay Epstein - The Atlantic, 1982</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>The women-targeted campaign was revolutionary in repositioning diamonds from symbols of dependence ("he bought this for me") to symbols of personal agency ("I chose this for myself"), fundamentally changing the emotional relationship with luxury purchases. (idem)</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[The Defensive Revolution]]></title><description><![CDATA[A New Defensive Mandate]]></description><link>https://www.tekk-talk.com/p/the-defensive-revolution</link><guid isPermaLink="false">https://www.tekk-talk.com/p/the-defensive-revolution</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Mon, 09 Jun 2025 17:16:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VkDP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b7a2b2-5c8e-42e5-87b8-06bc15f1ea83_1429x800.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VkDP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b7a2b2-5c8e-42e5-87b8-06bc15f1ea83_1429x800.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VkDP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b7a2b2-5c8e-42e5-87b8-06bc15f1ea83_1429x800.png 424w, 
https://substackcdn.com/image/fetch/$s_!VkDP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b7a2b2-5c8e-42e5-87b8-06bc15f1ea83_1429x800.png 848w, https://substackcdn.com/image/fetch/$s_!VkDP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b7a2b2-5c8e-42e5-87b8-06bc15f1ea83_1429x800.png 1272w, https://substackcdn.com/image/fetch/$s_!VkDP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b7a2b2-5c8e-42e5-87b8-06bc15f1ea83_1429x800.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VkDP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b7a2b2-5c8e-42e5-87b8-06bc15f1ea83_1429x800.png" width="1429" height="800" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/90b7a2b2-5c8e-42e5-87b8-06bc15f1ea83_1429x800.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:800,&quot;width&quot;:1429,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1125877,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://tekk.substack.com/i/165554110?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b7a2b2-5c8e-42e5-87b8-06bc15f1ea83_1429x800.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!VkDP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b7a2b2-5c8e-42e5-87b8-06bc15f1ea83_1429x800.png 424w, https://substackcdn.com/image/fetch/$s_!VkDP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b7a2b2-5c8e-42e5-87b8-06bc15f1ea83_1429x800.png 848w, https://substackcdn.com/image/fetch/$s_!VkDP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b7a2b2-5c8e-42e5-87b8-06bc15f1ea83_1429x800.png 1272w, https://substackcdn.com/image/fetch/$s_!VkDP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b7a2b2-5c8e-42e5-87b8-06bc15f1ea83_1429x800.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>In my foundational article "<em><strong><a href="https://www.tekk-talk.com/p/the-new-battlefield-ai-in-cyber-attacks">The New Battlefield</a></strong></em>," I identified three critical observations reshaping cybersecurity: the acceleration of AI capabilities, the transformation of threat profiles, and the evolution of defensive capabilities. This series has since explored how regulatory frameworks are creating new battlefields ("<em><strong><a href="https://www.tekk-talk.com/p/ai-regulations-around-the-world">AI Regulation &amp; Compliance: Mapping the Global Landscape</a>"</strong></em>) and how forward-thinking organizations are transforming compliance burdens into competitive advantages ("<em><strong><a href="https://www.tekk-talk.com/p/from-compliance-burden-to-cybersecurity">From Compliance Burden to Cybersecurity Edge</a></strong></em>"). In "<em><strong><a href="https://www.tekk-talk.com/p/the-offensive-ai-revolution">The Offensive AI Revolution</a></strong></em>," we examined how threat actors are weaponizing AI capabilities at unprecedented scale and sophistication. Now, we turn to the defensive revolution: how organizations are fundamentally reimagining security itself to counter these AI-enabled threats.</p><p>While we have noted several challenges throughout our journey through AI&#8217;s decisive impact on cybersecurity, there is one core challenge that remains formidable &#8211; the democratisation of advanced technical tools and capabilities has put them in the hands of everyone. 
As detailed in "<a href="https://www.tekk-talk.com/p/the-offensive-ai-revolution">The Offensive AI Revolution</a>," AI-enabled adversaries are no longer limited by the constraints of human expertise, time, or scale. We're witnessing a transformation from individual operators and static malware to coordinated, semi-autonomous attack systems that adapt, deceive, and learn in real-time. AI isn't just accelerating cyber threats &#8211; it's reshaping them entirely.</p><p>This escalation creates an uncomfortable truth for defenders: our conventional security models &#8211; built around predictable threats, human-speed incident response, and perimeter-based trust &#8211; are becoming obsolete. In this new era of machine-powered intelligence and automation, the attacker iterates faster than your patch cycles, impersonates users with uncanny realism, and adjusts tactics before your detection rules can trigger. The democratization of AI tooling has further eroded the gap between advanced persistent threats and the broader criminal ecosystem, equipping more &#8220;junior&#8221; threat actors with capabilities that rival or even exceed those of well-resourced nation-states just a few years ago.</p><p>Yet this narrative of defensive disadvantage obscures a critical reality: the same technological forces empowering attackers are simultaneously revolutionizing defensive capabilities. This is what I call the "Asymmetric Mirror Effect" &#8211; when we focus intensely on breakthrough innovation from threat actors, we forget that defenders are also adapting in kind, staring back through the same technological lens. 
While adversaries leverage AI for autonomous exploitation and deepfake deception, defenders are deploying machine learning for predictive threat intelligence, behavioural anomaly detection, and real-time response automation. The question is not whether AI favours offense or defence, but which side adapts faster to the new operational reality.</p><p>But if &#8220;<a href="https://www.tekk-talk.com/p/the-offensive-ai-revolution">The Offensive AI Revolution</a>&#8221; outlined the scale of the threat, here we turn to the defensive renaissance now taking shape: how forward-thinking organizations are not just responding to AI-enabled threats, but getting ahead of them. This is not a story of despair; it's evidence of adaptation. Matching AI-enabled threats requires more than new tools; it demands a fundamental reimagining of security architecture &#8211; one that treats speed, adaptability, and continuous learning as core design principles rather than aspirational goals.</p><p>If the offensive AI revolution was the wake-up call, this is the response &#8211; the story of how security is being rebuilt at machine speed to fight a machine-speed threat. From adversarial machine learning and AI red teaming to behavioural authentication, real-time detection, and human-AI collaboration models, we'll map the strategic, operational, and technical shifts necessary to build truly AI-native defence. We&#8217;ll also examine the critical governance and accountability structures that must guide this transition, ensuring that in our race to automate, we don&#8217;t compromise trust, ethics, or oversight.</p><p>This transformation extends beyond traditional cybersecurity boundaries into fundamental questions of organizational readiness and economic strategy.
As we'll explore, the most successful defensive programs are those that treat AI security not as a technology deployment but as an institutional capability &#8211; one that requires new skills, new team structures, and new approaches to measuring security effectiveness. The economic implications are equally profound: organizations that successfully implement AI-native defence gain sustainable competitive advantages, while those that delay face exponentially increasing costs as the threat landscape continues to evolve.</p><p>What follows is not a catalogue of tools or a checklist of best practices, but a discussion of the strategic framework needed for AI-native defence. I present a comprehensive approach that incorporates artificial intelligence not as a supplementary capability but as a fundamental operating principle. More importantly, I explore how organizations are transforming their security teams, processes, and cultures to operate effectively in an environment where both threats and defences evolve continuously at machine speed.</p><p>The transition to AI-native defence also brings significant governance challenges. Security leaders must navigate complex questions about algorithmic transparency, decision authority, and the appropriate balance between automation and human judgment. Yet these challenges also represent opportunities: organizations that master AI-native defence don't just protect themselves more effectively. They enable faster innovation, build stakeholder trust, and create competitive advantages in an increasingly digital-first economy. 
The following sections explore how to seize these opportunities while managing the inherent risks of this transformation.</p><p>While the full analysis of AI-native defence capabilities requires deep examination of technical architectures, organizational transformation, and implementation strategies &#8211; topics I explore comprehensively in the complete research &#8211; the strategic implications of this defensive evolution extend far beyond cybersecurity itself. They reveal fundamental shifts in how we must think about competition, trust, and human agency in an AI-driven world. These deeper insights demand our immediate attention, as they will shape not just our security postures but the very foundations of digital society.</p><h2>Mastering the AI Arms Race</h2><p>The defensive revolution in cybersecurity represents far more than a technological transition; it embodies a fundamental transformation in how we conceptualize security, intelligence, and trust in an increasingly AI-mediated world. While implementing AI-native defence requires comprehensive organizational evolution and sophisticated technical capabilities, the implications of this transformation extend well beyond cybersecurity itself to touch on questions of economic competition, social equity, human agency, and the very foundations of trust in digital society.</p><p>The changes we are witnessing transcend tactical improvements or technological upgrades. They represent profound shifts in organizational capability, competitive dynamics, and societal infrastructure that will shape not just how we defend against threats, but how we organize economies, distribute opportunities, and maintain human autonomy in an age of increasingly autonomous systems.
The following reflections explore these deeper implications, addressing the fundamental transformations that emerge when we fully grasp what it means to build security for an AI-driven future.</p><h3>The Fundamental Paradigm Shift: From Security as Control to Security as Adaptation</h3><p>The most profound transformation that I&#8217;ve observed is not technological but conceptual: the obsolescence of security as a discipline of control. For decades, cybersecurity operated on the foundational assumption that threats could be catalogued, contained, and countered through increasingly sophisticated but fundamentally static defences. We built firewalls to establish perimeters, deployed signatures to identify known threats, and implemented policies to govern predictable behaviours. This control-based paradigm worked because both attackers and defenders operated within shared constraints of human cognition, manual processes, and linear progression.</p><p>Artificial intelligence shatters these assumptions entirely. When attacks can evolve faster than detection rules can be written, when threats can adapt their behaviour in real-time based on defensive responses, and when adversaries can operate at machine scale with minimal human oversight, the very concept of "controlling" security becomes not just inadequate but counterproductive. The attempt to maintain rigid defensive postures against adaptive adversaries creates brittleness rather than resilience, leaving organizations increasingly vulnerable to exactly the novel threats their static controls cannot anticipate.</p><p>What emerges instead is security as an adaptive capability &#8211; a living system that learns, evolves, and responds to threats through continuous interaction rather than predetermined rules. This shift represents more than technological evolution; it demands a fundamental reconceptualization of what security professionals do &#8211; a shift from static to dynamic.
Rather than building stronger walls, we engineer immune systems. Rather than writing better rules, we cultivate systems that can recognize and respond to anomalies they've never encountered before. Rather than controlling threats, we develop the institutional capacity to adapt faster than those threats can evolve.</p><p>This paradigm shift introduces evolutionary pressure into cybersecurity that has never existed before. Organizations no longer compete merely on the sophistication of their defences but on their rate of adaptation to emerging threats. Success becomes measured not by the strength of current protections but by the speed at which defensive capabilities can evolve alongside changing attack vectors. Security effectiveness transforms from a function of defensive investment to a function of organizational learning velocity.</p><p>The implications here extend far beyond technical architecture to organizational culture, talent development, and strategic planning. Security teams must transition from guardians of established controls to researchers of adaptive defence, continuously experimenting, learning, and evolving their approaches. Cybersecurity is perhaps better placed to adapt, given the explosive change the field has seen over the past ten years, but the evolution must extend deeper into technology teams, where change has often been slower and is sometimes treated as an enemy of cost control and legacy integration. Leadership must fund not just security tools but institutional capabilities for continuous transformation. Most critically, organizations must develop comfort with persistent uncertainty, recognizing that in an environment of continuous evolution, there is no final secure state &#8211; only the ongoing process of staying ahead of intelligent, adaptive adversaries.</p><p>This fundamental shift from control to adaptation represents perhaps the most significant evolution in cybersecurity thinking since the emergence of networked computing itself.
Organizations that embrace this paradigm position themselves to thrive in an environment of continuous change, while those that cling to control-based models risk obsolescence regardless of their defensive investment levels.</p><h3>The Emergence of Cybersecurity as Competitive Intelligence</h3><p>A subtle but transformative shift emerges from my analysis: AI-native security systems do not merely protect organizational assets &#8211; they generate unprecedented intelligence about organizational operations, user behaviours, market dynamics, and competitive positioning that creates strategic advantages extending far beyond traditional security outcomes. This represents a fundamental reframing of cybersecurity's organizational value proposition, evolving from a necessary cost centre focused on risk mitigation to a strategic capability that actively drives business intelligence and competitive differentiation. And this is good.</p><p>Traditional security systems operated as largely passive monitoring infrastructures, generating alerts when predefined thresholds were exceeded but providing limited insight into normal operations or emerging patterns. AI-enhanced security platforms, by contrast, develop comprehensive behavioural models of organizational activity that reveal operational insights invisible to conventional business intelligence systems. 
These platforms understand user productivity patterns, identify process inefficiencies, detect emerging collaboration trends, and surface operational anomalies that may indicate not security threats but business opportunities or performance optimization potential.</p><p>Consider the strategic intelligence embedded in AI-driven security analytics: behavioural authentication systems that reveal optimal user experience patterns; network monitoring that identifies high-value collaboration relationships; data access analytics that expose information bottlenecks limiting organizational agility; and threat intelligence that provides early warning of industry-wide risks affecting competitive positioning. Organizations implementing comprehensive AI security gain what amounts to an organizational nervous system &#8211; continuous awareness of internal operations, external threat landscapes, and emerging market dynamics that inform strategic decision-making across business functions.</p><p>This intelligence advantage compounds over time as AI security systems accumulate institutional knowledge about organizational patterns, threat evolution, and operational optimization opportunities. Unlike traditional business intelligence that analyses historical data, security-derived intelligence operates in real-time, providing immediate insight into changing conditions, emerging risks, and strategic opportunities as they develop. Organizations effectively gain predictive capabilities about their own operations and competitive environment that extend well beyond security considerations.</p><p>Perhaps most significantly, this transformation repositions security professionals as organizational intelligence analysts rather than purely defensive specialists. Security teams become sources of strategic insight about operational efficiency, competitive threats, market dynamics, and organizational health that prove valuable across business functions. 
Chief Information Security Officers increasingly find themselves contributing to strategic planning, competitive analysis, and operational optimization discussions based on insights derived from security analytics.</p><p>The competitive implications are profound. Organizations that view AI security merely as enhanced protection forfeit the strategic intelligence these systems generate, while those that recognize and leverage the business intelligence embedded in security analytics gain sustained competitive advantages. In an increasingly AI-driven economy, the organizations with the most comprehensive and sophisticated security intelligence platforms possess superior situational awareness about their operations, competitive environment, and emerging opportunities.</p><p>This evolution fundamentally challenges traditional organizational boundaries between security, business intelligence, and strategic planning functions, suggesting that the most successful organizations will be those that integrate these capabilities into unified intelligence frameworks that serve both protective and strategic objectives simultaneously.</p><h3>The Democratization Paradox and the New Digital Divide</h3><p>One of the most striking paradoxes revealed in my analysis is how the same technological forces that democratize offensive capabilities simultaneously create an unprecedented stratification among defenders. While AI tools have dramatically lowered barriers for threat actors &#8211; enabling sophisticated attacks through readily available LLM assistants, deepfake-as-a-service platforms, and automated exploitation frameworks
&#8211; the defensive response to these threats has created a new form of digital inequality that compounds exponentially over time.</p><p>This democratization paradox manifests in a troubling asymmetry: whereas AI-powered attack tools can be deployed with minimal organizational investment or expertise, effective AI-native defence requires substantial budgets, institutional transformation spanning technology infrastructure, human capital development, organizational processes, and cultural adaptation. A single threat actor with access to commercial AI tools can potentially compromise organizations that have invested millions in traditional security but lack AI-native defensive capabilities. Yet implementing comprehensive AI security requires sustained investment in specialized talent, adaptive architectures, and continuous capability development that many organizations cannot realistically achieve.</p><p>The result is the emergence of what I call a "security poverty trap" that creates widening gaps between organizational defensive capabilities. Organizations that successfully implement AI-native security gain compound advantages: their defensive systems learn and improve continuously, their security teams develop expertise in emerging threat vectors, and their institutional knowledge accumulates in ways that create sustained competitive advantages. Meanwhile, organizations relying on conventional security approaches face increasingly sophisticated AI-enabled threats with static, human-speed defences that become progressively less effective over time.</p><p>This digital divide operates across multiple dimensions simultaneously. 
Large enterprises with substantial resources can afford specialized AI security talent, advanced threat intelligence platforms, and comprehensive security architectures, while smaller organizations face the same AI-enabled threats with limited budgets and generalist security personnel, often outsourcing key skills to third-party security service providers &#8211; an arrangement that presents a different risk and increases the asymmetry. Technologically sophisticated industries develop AI security capabilities faster than traditional sectors, creating inter-industry vulnerability disparities. Geographic regions with strong AI research ecosystems and regulatory frameworks gain defensive advantages over areas lacking these institutional foundations.</p><p>Perhaps most concerning is the self-reinforcing nature of this divide. Organizations with advanced AI security capabilities attract top talent, generate better threat intelligence, and develop more sophisticated defensive innovations that further widen their advantage over less capable peers. The gap between AI-security leaders and laggards doesn't merely persist; it accelerates, creating winner-take-all dynamics in organizational security effectiveness that mirror broader patterns of technological inequality.</p><p>The implications extend beyond individual organizational risk to systemic vulnerabilities across entire economic sectors and supply chains. When AI-enabled attackers can target the weakest links in interconnected business ecosystems, even organizations with sophisticated defences become vulnerable to compromise through less capable partners, suppliers, or industry peers. The security poverty trap thus creates cascading risks that threaten entire sectors rather than just individual organizations.</p><p>This emerging digital divide in security capability represents one of the most significant challenges facing the cybersecurity community.
Unlike previous technology transitions where organizations could gradually adopt new capabilities over extended timeframes, the velocity of AI-enabled threats compresses adaptation windows dramatically. Organizations that fall behind in AI security capability face not merely competitive disadvantages but existential risks from threats they lack the institutional capacity to detect, understand, or counter effectively.</p><p>The democratization paradox thus reveals a fundamental tension at the heart of the AI revolution: while artificial intelligence promises to democratize many capabilities, in cybersecurity it may create unprecedented concentrations of defensive advantage among organizations capable of mastering its complexities while leaving others increasingly vulnerable to democratized offensive capabilities they cannot adequately defend against.</p><h3>The Philosophical Question of Agency in Security</h3><p>Perhaps the most profound challenge emerging from my analysis transcends technology entirely to confront fundamental questions about human agency in security decision-making. As AI systems become increasingly autonomous in both attack and defence, we approach a threshold where the most critical security decisions &#8211; those determining organizational survival, data protection, and operational continuity &#8211; occur at machine speed, beyond the scope of human deliberation, oversight, or meaningful intervention. This reality forces us to grapple with philosophical questions that have no clear precedent: What level of autonomous decision-making are we comfortable delegating to systems we don't fully understand? How do we maintain meaningful human control over processes that operate faster than human cognition can follow?</p><p>The traditional cybersecurity paradigm assumed human decision-makers would evaluate threats, approve responses, and maintain accountability for security outcomes.
Even highly automated systems operated within frameworks of human oversight, escalation procedures, and ultimate human authority over consequential actions. AI-native security fundamentally disrupts these assumptions by creating scenarios where effective defence requires decisions to be made in milliseconds rather than minutes, by systems capable of processing information volumes and complexity patterns that exceed human cognitive capacity.</p><p>Consider the philosophical implications of autonomous incident response systems that can isolate compromised networks, terminate user sessions, or quarantine critical systems without human approval. These are actions that may be essential for organizational protection but also carry significant business and operational consequences. Or AI-driven threat detection systems that flag individuals as security risks based on behavioural patterns invisible to human analysis, potentially affecting employment, performance evaluations, access privileges, and professional reputation through algorithmic decisions that resist straightforward explanation or appeal.</p><p>Even more challenging are the accountability questions that emerge when AI security systems make decisions that prove incorrect or harmful. Traditional frameworks of responsibility assume human decision-makers who can be held accountable for their choices, but algorithmic decision-making distributes responsibility across development teams, training data, organizational policies, and system architecture in ways that obscure clear lines of accountability. 
When an AI security system blocks legitimate business activity because of a false positive, or fails to detect a genuine attack due to adversarial evasion, who bears responsibility for the consequences?</p><p>The agency question becomes particularly acute in adversarial scenarios where AI defence systems must counter AI attack systems, potentially leading to machine-versus-machine conflicts that unfold entirely beyond human observation or control. These scenarios raise fundamental questions about the nature of security itself: Are we protecting human interests through autonomous systems, or have we created artificial agents pursuing objectives that may diverge from human values in ways we cannot predict or prevent?</p><p>The philosophical challenge extends to questions of transparency and explainability in AI security decisions. Many of the most effective AI systems operate as "black boxes" that produce accurate results through complex internal processes that resist human interpretation. Yet security decisions often require justification &#8211; to stakeholders, regulators, legal systems, or affected individuals &#8211; that demands explanations AI systems may be fundamentally incapable of providing in terms humans can meaningfully evaluate.</p><p>Perhaps most troubling is the potential for AI security systems to shape human behaviour in ways that optimize for security metrics rather than human flourishing. As these systems become more sophisticated at predicting and preventing security incidents, they may encourage or discourage human actions based on risk calculations that prioritize system security over individual autonomy, creativity, or dignity. The question becomes whether we are deploying AI to serve human security interests, or inadvertently subjecting human activity to algorithmic optimization for security outcomes.</p><p>These philosophical challenges demand more than technical solutions.
They require fundamental deliberation about the kind of digital society we wish to create and the role we want human agency to play within AI-mediated security frameworks. The choices we make today about autonomous security systems will shape not just organizational protection but the broader relationship between human decision-making and algorithmic authority in domains that affect fundamental aspects of human life and liberty.</p><p>The emergence of autonomous security systems thus confronts us with questions that transcend cybersecurity to touch on core issues of human autonomy, algorithmic authority, and the appropriate balance between security and freedom in an AI-driven world. How we navigate these philosophical challenges will determine not just the effectiveness of our security systems but the kind of society these systems ultimately create and protect.</p><h3>Security as the Foundation of Trust in an AI-Driven Economy</h3><p>The most far-reaching insight from my study reveals cybersecurity's evolution beyond organizational protection to become the fundamental trust infrastructure upon which an AI-driven economy depends. As artificial intelligence increasingly mediates critical decisions affecting human welfare &#8211; from financial transactions and healthcare diagnoses to transportation routing and legal determinations &#8211; the security of these AI systems transcends traditional notions of data protection or business continuity to become a prerequisite for societal trust in AI-mediated interactions themselves.</p><p>This transformation redefines the stakes of cybersecurity from protecting individual organizations to preserving the integrity of economic and social systems that depend on AI reliability. 
When AI systems make lending decisions, autonomous vehicles navigate traffic, or medical AI assists in treatment recommendations, the security of these systems determines not just their immediate functionality but public confidence in AI-driven services across entire sectors. A successful attack on AI systems doesn't merely compromise individual organizations &#8211; it can undermine trust in entire categories of AI applications, potentially triggering broader rejection of beneficial AI technologies.</p><p>The trust implications operate across multiple interconnected layers of the digital economy. At the foundational level, trust in AI systems depends on confidence that they operate as intended, free from manipulation, corruption, or adversarial interference. This requires not just technical security but transparent, verifiable security practices that stakeholders can understand and validate. At the transactional level, trust emerges from consistent, reliable AI behaviour that meets user expectations and regulatory requirements over time. At the systemic level, trust depends on collective confidence that AI systems across an economy operate within appropriate governance frameworks that prioritize human welfare over purely algorithmic optimization.</p><p>Organizations that master AI-native security thus assume roles far beyond protecting their own assets &#8211; they become stewards of public trust in AI-enabled services. Financial institutions implementing secure AI for credit decisions don't merely protect their proprietary algorithms; they maintain confidence in AI-mediated financial services that enables broader economic participation. Healthcare organizations securing medical AI systems preserve trust in AI-assisted diagnosis and treatment that affects patient willingness to engage with AI-enhanced healthcare. 
Technology companies implementing robust AI security frameworks enable trust in AI platforms that supports innovation across entire business ecosystems.</p><p>This stewardship responsibility creates both opportunities and obligations that extend traditional cybersecurity mandates. Organizations with superior AI security capabilities can become trusted partners for stakeholders who require confidence in AI reliability &#8211; customers seeking AI-enhanced services, partners integrating AI capabilities, and regulators overseeing AI deployments. Yet this trust advantage comes with corresponding obligations to maintain security standards that preserve broader confidence in AI applications, not just immediate business interests.</p><p>The economic implications are profound. In an AI-driven economy, trust becomes a tradeable asset that organizations can build, lose, or transfer through their security practices. Organizations known for robust AI security can command premium pricing for AI-enabled services, attract partnerships with security-conscious stakeholders, and access markets requiring demonstrated AI reliability. Conversely, organizations with poor AI security records face not just immediate breach consequences but lasting damage to their ability to participate in AI-driven business relationships.</p><p>Perhaps most significantly, the concentration of AI security expertise among a relatively small number of organizations creates systemic risks to economic trust in AI applications. If only a subset of organizations can implement truly secure AI systems, broader economic benefits from AI adoption may be constrained by justified concerns about AI reliability and security among organizations lacking sophisticated security capabilities. 
This dynamic suggests that the democratization of AI security expertise becomes not just a competitive issue but an economic imperative for enabling widespread, beneficial AI adoption.</p><p>The emergence of cybersecurity as trust infrastructure also implies new forms of collective responsibility among organizations deploying AI systems. Just as financial institutions collectively maintain confidence in monetary systems through shared security standards and mutual oversight, organizations implementing AI systems may need to develop collaborative frameworks for maintaining public trust in AI reliability.</p><p>Ultimately, the evolution of cybersecurity into trust infrastructure represents a fundamental shift in how we understand the relationship between individual organizational security and broader social and economic welfare. In an AI-driven economy, cybersecurity becomes not just a business function or even a societal utility; it evolves into essential infrastructure for maintaining the trust relationships that enable beneficial AI adoption across entire economies. Organizations that recognize and embrace this broader responsibility position themselves not just as secure AI adopters but as enablers of trustworthy AI deployment that benefits entire societies.</p><p>The organizations that recognize these transformations early &#8211; and begin adapting their security thinking beyond traditional protection models &#8211; will not only survive the AI arms race but help shape the trusted, intelligent, and equitable digital future we all depend on. The question is no longer whether AI will transform cybersecurity, but whether we'll master that transformation before it masters us.</p><p><strong>In the next article of this series</strong>, we'll move from strategic philosophy to practical implementation, exploring how organizations can engineer AI-native security from the ground up through Secure by Design principles.
We'll examine some of the technical architectures, governance frameworks, and organizational practices that transform security from a bolt-on protection layer into the foundational DNA of AI systems themselves. From threat modelling AI-specific attack vectors to implementing continuous behavioural monitoring, we'll provide a comprehensive blueprint for building security into the AI lifecycle rather than retrofitting it afterward.</p><p>The defensive revolution demands more than new tools; it requires fundamentally reimagining how we build, deploy, and govern AI systems. Organizations that master this transformation don't just defend more effectively; they create sustainable competitive advantages in an AI-driven economy.</p><p><em>As you consider your organization's AI security journey, which transformation resonates most: the shift from control to adaptation, the emergence of security as competitive intelligence, or the philosophical questions around autonomous decision-making? How are you preparing for a future where security must be engineered at machine speed?
Share your reflections in the comments below.</em></p>]]></content:encoded></item><item><title><![CDATA[The Offensive AI Revolution]]></title><description><![CDATA[How AI Is Democratising Sophisticated Cyber Attacks]]></description><link>https://www.tekk-talk.com/p/the-offensive-ai-revolution</link><guid isPermaLink="false">https://www.tekk-talk.com/p/the-offensive-ai-revolution</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Sat, 07 Jun 2025 20:27:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!JsWB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf0c3f80-5e6f-4217-b4a3-04ce22b3e8e8_1485x834.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JsWB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf0c3f80-5e6f-4217-b4a3-04ce22b3e8e8_1485x834.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JsWB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf0c3f80-5e6f-4217-b4a3-04ce22b3e8e8_1485x834.png 424w, https://substackcdn.com/image/fetch/$s_!JsWB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf0c3f80-5e6f-4217-b4a3-04ce22b3e8e8_1485x834.png 848w, https://substackcdn.com/image/fetch/$s_!JsWB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf0c3f80-5e6f-4217-b4a3-04ce22b3e8e8_1485x834.png 1272w, 
https://substackcdn.com/image/fetch/$s_!JsWB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf0c3f80-5e6f-4217-b4a3-04ce22b3e8e8_1485x834.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JsWB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf0c3f80-5e6f-4217-b4a3-04ce22b3e8e8_1485x834.png" width="1456" height="818" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cf0c3f80-5e6f-4217-b4a3-04ce22b3e8e8_1485x834.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:818,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1462494,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://tekk.substack.com/i/165428862?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf0c3f80-5e6f-4217-b4a3-04ce22b3e8e8_1485x834.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!JsWB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf0c3f80-5e6f-4217-b4a3-04ce22b3e8e8_1485x834.png 424w, https://substackcdn.com/image/fetch/$s_!JsWB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf0c3f80-5e6f-4217-b4a3-04ce22b3e8e8_1485x834.png 848w, https://substackcdn.com/image/fetch/$s_!JsWB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf0c3f80-5e6f-4217-b4a3-04ce22b3e8e8_1485x834.png 
1272w, https://substackcdn.com/image/fetch/$s_!JsWB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf0c3f80-5e6f-4217-b4a3-04ce22b3e8e8_1485x834.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>In my &#8220;<a href="https://www.tekk-talk.com/p/the-new-battlefield-ai-in-cyber-attacks">New Battlefield</a>&#8221; article, I identified three critical observations reshaping cybersecurity: the acceleration of AI capabilities, the transformation of threat profiles, and the evolution of defensive capabilities.
Here I want to deep-dive into how artificial intelligence is fundamentally altering who can launch sophisticated attacks and what those attacks look like.</p><p>The numbers tell a stark story. Google's Mandiant research conducted in 2023 shows that time-to-exploit for vulnerabilities had already plummeted from 63 days in 2018 to just five days. Given the accelerating pace of AI development since that study, current timelines are likely even shorter &#8211; with AI-powered attackers already observed weaponising critical flaws within 48 hours of disclosure<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. But speed is only part of the transformation. What's truly revolutionary is how AI has democratised capabilities that were once the exclusive domain of elite threat actors.</p><h1>The Great Democratization: From Elite Skills to Accessible Tools</h1><p><em><strong>"The script kiddie is dead. Long live the prompt-powered operator."</strong></em></p><p>This stark reality captures the most significant shift in the threat landscape: the dramatic lowering of barriers to sophisticated attack capabilities. Not all democratization is good news.
Where advanced persistent threats once required teams of skilled hackers working for weeks, today's adversaries leverage AI as a force multiplier that compresses both time and expertise requirements. Palo Alto Networks&#8217; Unit 42 has shown that GPT-powered simulated ransomware campaigns can compress multi-stage attacks from days to minutes<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.</p><h2>The New Attack Economics</h2><p>The economics of cybercrime have fundamentally shifted. Traditional attack models required substantial investments in human talent, specialised tools, and operational infrastructure. AI has inverted this equation. Sophisticated capabilities are now available as services, accessible through natural language interfaces, and deployable by operators with minimal technical background.</p><p>Consider the progression we've witnessed:</p><ul><li><p><strong>2020</strong>: Advanced social engineering required deep research, writing skills, and psychological manipulation expertise</p></li><li><p><strong>2022</strong>: ChatGPT enables automated, contextually aware phishing at scale</p></li><li><p><strong>2024</strong>: Platforms like WormGPT and EvilGPT commercialise AI-assisted attack workflows in underground markets</p></li></ul><p>This isn't just tool evolution &#8211; it is a fundamental restructuring of the threat actor ecosystem. The barrier between sophisticated nation-state capabilities and commodity cybercrime continues to erode.</p><h1>LLM-Powered Exploitation Pipelines</h1><p>Large language models have become the Swiss Army knife of modern cyber operations. Recent research with LLMSmith &#8211; a toolchain that systematically discovers and exploits vulnerabilities in LLM-integrated applications &#8211; demonstrates this reality.
The study led to thirteen CVEs and successful exploitation of sixteen real-world applications using only natural language prompts<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>.</p><p>These models excel across the entire attack lifecycle:</p><p><strong>Reconnaissance and Target Profiling: </strong>Adversaries feed LLMs scraped social media data, corporate information, and public records. The models generate detailed psychological profiles and contextually appropriate attack vectors tailored to specific industries, roles, or even individuals. This represents a striking democratization of capabilities. Threat actors can now perform the kind of sophisticated psychological profiling and behavioural targeting that required the resources and expertise of organizations like Cambridge Analytica or AggregateIQ just a few years ago, but with AI assistance accessible to anyone with basic technical skills.</p><p><strong>Code Analysis and Reverse Engineering</strong>: Attackers upload obfuscated PowerShell scripts, complex binary decompilation, or proprietary application logic. They receive interpretations, vulnerability assessments, and exploit suggestions that previously required years of specialised training.</p><p><strong>Automated Vulnerability Research: </strong>By parsing technical documentation, GitHub repositories, CVE databases, CISA's Known Exploited Vulnerabilities (KEV) catalogue, and Rapid7's Vulnerability &amp; Exploit Database (to mention but a few), LLMs accelerate the &#8216;<em>research-to-weaponization</em>&#8217; pipeline from weeks to hours. This democratization is amplified by the increasing transparency of vulnerability disclosure. While initiatives like KEV and public exploit databases serve legitimate defensive purposes, they also provide threat actors with comprehensive roadmaps of proven attack vectors.
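</p><p><em>To make this cross-referencing concrete, here is a small, defensive-side sketch of the kind of join involved. It is illustrative only: the asset inventory is invented, and the KEV records are hard-coded samples shaped like CISA's published JSON feed (field names such as </em><code>cveID</code><em> and </em><code>knownRansomwareCampaignUse</code><em> follow that feed) rather than fetched live.</em></p>

```python
import json

# Hard-coded sample records in the shape of CISA's KEV JSON feed
# (illustrative data only -- the real feed is fetched and far larger).
kev_feed = json.loads("""
{"vulnerabilities": [
  {"cveID": "CVE-2021-44228", "product": "Log4j2",
   "knownRansomwareCampaignUse": "Known"},
  {"cveID": "CVE-2023-0001", "product": "ExampleApp",
   "knownRansomwareCampaignUse": "Unknown"}
]}
""")

# Hypothetical asset inventory: asset name -> CVEs reported by a scanner.
inventory = {
    "billing-server": ["CVE-2021-44228", "CVE-2020-9999"],
    "hr-portal": ["CVE-2019-1234"],
}

# Index KEV records by CVE for constant-time lookup.
kev_index = {v["cveID"]: v for v in kev_feed["vulnerabilities"]}

def exposed_assets(inventory, kev_index):
    """Return (asset, cve, ransomware_linked) for every inventory CVE
    with known in-the-wild exploitation."""
    hits = []
    for asset, cves in inventory.items():
        for cve in cves:
            if cve in kev_index:
                rec = kev_index[cve]
                hits.append((asset, cve,
                             rec["knownRansomwareCampaignUse"] == "Known"))
    return hits

print(exposed_assets(inventory, kev_index))
```

<p><em>The same joins, run by an LLM-driven pipeline over the full public feeds at machine speed, are what compress the research phase described here.</em></p><p>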
LLMs can now cross-reference these authoritative sources at <em>machine speed</em> to identify patterns, suggest attack vectors, and generate proof-of-concept exploits with unprecedented efficiency. This is an acceleration that fundamentally changes the economics of vulnerability research from a time-intensive, expert-driven process to an automated, scalable capability.</p><p>Threat actors and their ecosystem have already commercialised these capabilities. While platforms like WormGPT are essentially wrappers around existing LLMs with minimal custom functionality, they represent clear market demand and demonstrate how the combination of AI assistance and transparent vulnerability data creates a perfect storm for democratised exploitation.</p><h1>Beyond Phishing: The New Art of Deception</h1><p>Social engineering has evolved from broad, poorly targeted campaigns to psychologically sophisticated, real-time manipulation that adapts to target responses. This transformation represents one of the most immediate and dangerous applications of AI in offensive operations.</p><p>The evolution of AI-generated audio and video deepfakes into social engineering represents one of the most immediate threats on the horizon.</p><h2>Hyper-realistic Impersonation at Scale</h2><p>The cases are numerous and financially devastating:</p><ul><li><p>Early 2024: Attackers used a deepfake CFO during a Zoom call to defraud a firm of $25 million</p></li><li><p>A manager in Hong Kong was manipulated via deepfake voice into wiring $35 million, supported by follow-up emails mimicking legal counsel</p></li><li><p>Wall Street Journal journalist successfully fooled her own bank's voice authentication using a cloned version of her voice</p></li></ul><p>These aren't isolated incidents &#8211; they represent a new operational reality where voice and video can no longer serve as trusted identity verification. 
<br><br>Although new voice cloning detection tools are being developed, there are two key challenges we need to consider:<br>1. <strong>The &#8220;Red Queen&#8221; effect</strong>: As voice clone detection technology evolves, so do the offensive tools, techniques and tactics, creating a perpetual arms race where defenders must run faster just to stay in place.<br>2. <strong>The &#8220;legacy drag&#8221;</strong>: The weight of legacy systems and processes slows adoption of new technologies, especially when organizations recently invested in early versions of defensive technologies or believe existing solutions provide equivalent protection.<br><br>The dilemma this presents is that organizations face a critical window where attack capability has matured faster than defensive deployment. Unlike traditional technology investments that could be planned over multi-year cycles, the velocity of AI-enabled deception demands immediate action. The economic damage is already real and measurable, but the institutional response mechanisms (procurement cycles, risk assessment frameworks, technology adoption processes) are calibrated for a slower threat-evolution timeline, acutely underestimating the real risks.</p><h2>The Three Pillars of AI-Enhanced Social Engineering</h2><p><strong>Comprehensive Target Analysis</strong>: Modern attackers employ AI to conduct unprecedented reconnaissance. They analyse social media profiles, corporate biographies, public presentations, and academic publications to build detailed psychological profiles. This data fuels generative systems that produce spear-phishing messages precisely aligned to the target's communication style, industry concerns, and emotional triggers.</p><p><strong>Real-Time Adaptation</strong>: Unlike traditional phishing campaigns that rely on static templates, AI-driven operations adapt their messaging based on target responses.
The system adjusts tone, urgency, and approach to overcome suspicion, creating a conversational dynamic that feels authentically human.</p><p><strong>Multi-Modal Deception</strong>: Advanced speech synthesis tools like ElevenLabs enable real-time voice cloning with minimal sample data. Combined with deepfake video technology and LLM-generated scripts that mirror internal terminology and communication styles, attackers can deploy synthetic personas across multiple sensory channels tailored to specific victims and business contexts.</p><p>The psychological impact proves particularly effective in trusted business environments where authority and urgency intersect: when &#8220;<em>the CEO</em>&#8221; calls during a board meeting demanding immediate wire transfers, when &#8220;legal counsel&#8221; emails urgent settlement instructions, or when &#8220;<em>the CFO</em>&#8221; appears on video requesting emergency fund movements during supposed acquisition talks. This is equally relevant in high-pressure business environments (M&amp;A discussions, crisis management, etc.) and in processes where velocity trumps verification &#8211; such as time-sensitive contract approvals where rigid authentication procedures are viewed as obstacles to decisive action.</p><h1>The Stealth Revolution: Behavioural Mimicry and Adaptive Malware</h1><p>Post-compromise operations have been revolutionised by machine learning applications that analyse and replicate legitimate user behaviour.
Traditional intrusion detection relied on identifying unusual patterns &#8211; but what happens when the attacker looks exactly like a legitimate user?</p><h2>Invisible Through Normality</h2><p>Advanced persistent threats now operate with unprecedented stealth capabilities:</p><ul><li><p><strong>Precision Timing</strong>: Access occurs during an organization's peak operational hours, precisely matching normal work patterns to avoid time-based anomaly detection</p></li><li><p><strong>Role-Appropriate Activity</strong>: Attackers mirror legitimate user access patterns to files, applications, and networks based on carefully inferred job responsibilities and typical workflow patterns</p></li><li><p><strong>Disciplined Lateral Movement</strong>: Rather than aggressively spreading through networks, sophisticated actors constrain their activities to systems and resources consistent with the compromised identity they're leveraging</p></li></ul><p>Research confirms the effectiveness of this approach<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>. Studies demonstrate that malware using polymorphic execution strategies, such as distributing behaviour across multiple threads and adapting actions based on system context, can reduce detection accuracy in behavioural classifiers by up to 50%. Although widespread adoption by threat actors has yet to be observed, this approach is already documented and viable.</p><h2>The Evolution of Malicious Code</h2><p>Malware itself is undergoing fundamental transformation through AI enhancement.
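</p><p><em>To see why hash-based signatures are so brittle against machine-generated variants, consider a minimal illustration (the &#8220;payloads&#8221; here are harmless placeholder byte strings, not real code): changing a single byte leaves behaviour conceptually untouched while producing an entirely different SHA-256 fingerprint.</em></p>

```python
import hashlib

# Two byte strings standing in for functionally identical payloads:
# the second differs only by one inserted padding byte.
payload_a = b"do_the_same_thing()"
payload_b = b"do_the_same_thing()\x90"

sig_a = hashlib.sha256(payload_a).hexdigest()
sig_b = hashlib.sha256(payload_b).hexdigest()

# A blocklist keyed on sig_a will never match sig_b, even though the
# two payloads are (by construction here) behaviourally equivalent.
print(sig_a == sig_b)  # prints False
```

<p><em>Generative polymorphism automates this mutation step at scale &#8211; every delivery carries a fresh fingerprint &#8211; which is why purely signature-keyed defences fail against it.</em></p><p>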
Traditional signature-based detection faces increasingly sophisticated evasion:</p><p><strong>Generative Polymorphism</strong>: AI models produce malware variants that modify their signatures with each delivery, making hash-based or pattern-matching defences obsolete.</p><p><strong>Environment-Aware Execution</strong>: Advanced specimens detect sandbox analysis environments and deliberately suppress malicious behaviour during automated scans, only revealing true functionality in live production settings.</p><p><strong>Context-Sensitive Activation</strong>: The most sophisticated malware incorporates dynamic decision-making about when, where, and how to activate &#8211; deferring execution until specific conditions are met, such as privileged user login or sensitive application launch.</p><p>Perhaps most concerning is recent research applying Generative Adversarial Networks (GANs) to malware creation. One standout example is the EGAN framework &#8211; short for &#8216;<strong>Evolutional GAN</strong>&#8217; &#8211; which merges GANs with Evolution Strategies to generate ransomware variants that appear benign to antivirus engines while remaining fully functional. In essence, EGAN teaches malware how to mutate intelligently, evolving in real time to sidestep detection without breaking its core payload<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>.</p><p>While EGAN and similar techniques represent the current frontier of AI-enhanced malware, they also point toward an even more concerning future: one where these experimental capabilities mature into operational weapons deployed at scale.</p><h1>Emerging Threats: The Next Wave of AI-Enabled Attacks</h1><p>The stealth revolution in malware represents just one dimension of how AI is reshaping offensive capabilities. 
The most dangerous evolution isn't simply automation of existing attacks; it's the emergence of capabilities that redefine how cyber operations are conceived and executed.</p><p>Several advanced threats remain just over the horizon, grounded in current research but not yet broadly operational.</p><h2>Autonomous Attack Systems</h2><p>We're witnessing the early emergence of autonomous attack systems: agentic AI frameworks capable of pursuing high-level objectives like reconnaissance, lateral movement, and data exfiltration with minimal human oversight.</p><p>As mentioned earlier, recent analysis by Palo Alto Networks' Unit 42 demonstrates this trajectory &#8211; AI-assisted attacks have reduced time-to-exfiltration in simulated ransomware campaigns by up to 100x, compressing multi-stage operations from days to minutes.</p><p>Current LLM limitations include robustness issues, memory constraints, and tool integration challenges. But we should remember that these are engineering problems, not conceptual barriers. Security researchers have already demonstrated agents autonomously escalating privileges using real-time sourced exploits and adjusting tactics to evade defensive measures.</p><p>What distinguishes autonomous systems from traditional automation is intent. These are adaptive agents capable of responding to feedback, altering strategies, and persisting toward objectives without continuous human input.</p><h2>AI-Generated Zero-Day Discovery</h2><p>The discovery of zero-day vulnerabilities is transitioning from elite human talent to automated AI systems.
While no confirmed cases exist (yet) of fully autonomous AI discovering and exploiting unknown zero-days in production environments, the foundational components are rapidly maturing.</p><p>Machine learning models trained on source code repositories, binary execution patterns, and historical CVE data demonstrate increasing ability to:</p><ul><li><p>Detect insecure coding practices and control flow weaknesses</p></li><li><p>Suggest plausible exploits for poorly sanitised inputs</p></li><li><p>Analyse binary code and simulate program behaviour to identify exploitable states</p></li></ul><p>The risk extends beyond acceleration to accessibility. Where human zero-day discovery required specialised skills and patience, AI lowers these barriers significantly. Once embedded in open-source frameworks or adversarial toolkits, these capabilities could democratise zero-day discovery across the broader threat actor ecosystem.</p><p>These evolving offensive capabilities lead directly to the defender's dilemma.</p><h1>Strategic Implications: The Defender's Dilemma</h1><p>The rise of AI-enhanced offensive capabilities creates fundamental asymmetries that challenge every assumption underlying traditional defensive strategies. I will dissect this problem in an in-depth article, but for context on the offensive revolution AI presents, security leaders must grapple with several paradigm shifts:</p><h2>From Human-Speed to Machine-Speed Threats</h2><p>AI enables adversaries to compress vulnerability-to-exploitation lifecycles from weeks to hours. These attacks move faster than human-centred response mechanisms can react, regardless of budget, headcount, or experience level.</p><p>This velocity advantage undermines traditional incident response frameworks, governance processes, and compliance models designed for slower, more predictable threats.
Organizations still reliant on manual approval chains and post-incident analysis find themselves defending in the past tense.</p><h2>From Skill Barriers to Tool Availability</h2><p>The democratization of sophisticated attack capabilities has fundamentally altered threat modelling assumptions. Capabilities once requiring elite technical talent are increasingly accessible through AI-enhanced tooling &#8211; much of which is open-source or actively commercialised in underground markets.</p><p>This expansion breaks legacy risk assessment models. Sophistication is no longer tied to adversary skill level. It's a product of tool availability and AI accessibility. Organizations must now assume that any motivated threat actor can potentially deploy advanced techniques previously associated with nation-state actors.</p><h2>From Discrete Events to Persistent Deception</h2><p>AI-driven deepfakes, behavioural mimicry, and context-aware social engineering enable a transition from sporadic, identifiable intrusion attempts to persistent, embedded manipulation that operates continuously within organizational environments.</p><p>Traditional anomaly detection, security awareness training, and trust-based controls prove increasingly vulnerable when attackers can convincingly simulate legitimate users, executives, and business partners across multiple interaction channels.</p><h2>The Economics of the New Threat Landscape</h2><p>The cost structure of cybercrime has fundamentally shifted. Traditional attack economics required substantial upfront investment in human talent and specialised tooling. AI has inverted this equation &#8211; sophisticated capabilities are now available as low-cost services, dramatically expanding the potential threat actor population.</p><p>Meanwhile, the potential impact of successful attacks continues to escalate. 
Organizations face not just direct financial losses but regulatory penalties, reputational damage, and operational disruption that can persist for years following a significant breach.</p><h2>The Imperative for Strategic Transformation</h2><p>The convergence of these trends demands more than incremental security improvements &#8211; it requires fundamental transformation in how organizations approach cyber defence. For this to be effective, three critical shifts are necessary:</p><p><strong>From Reactive to Predictive</strong>: Security programs must anticipate AI-enabled attack techniques before they appear in production environments. This includes AI-specific red teaming, adversarial simulation, and investment in detection systems that can match attacker speed and adaptability.</p><p><strong>From Static to Adaptive</strong>: Traditional security architectures built around fixed controls and known patterns must evolve toward dynamic systems capable of detecting and responding to novel threats in real-time.</p><p><strong>From Individual to Collective</strong>: The democratization of advanced attack capabilities means no single organization can maintain comprehensive visibility across the threat landscape. Effective defence increasingly requires collaborative approaches that share intelligence, techniques, and countermeasures across organizational boundaries.</p><p>Organizations that delay this transformation risk falling permanently behind adversaries who evolve with every AI breakthrough. The advantage won't go to the most resourced teams &#8211; it will go to those who can anticipate intent and model threats before they materialise.</p><h1>The New Reality: Speed, Scale, and Strategic Response</h1><p>The AI-driven transformation of cyber offense is no longer theoretical; it is not hyperbole, and it&#8217;s not science fiction &#8211; it is operational and accelerating.
Today's threat actors aren't merely augmenting traditional tactics with AI; they're reshaping the attack landscape entirely through machine-speed operations, scalable deception, and increasingly autonomous offensive capabilities.</p><p>This shift transcends specific vulnerabilities or attack techniques. We're witnessing the emergence of a new class of threat: faster, more precise, harder to detect, and accessible to a dramatically expanded population of potential adversaries.</p><p>The foundational assumptions that guided cybersecurity for decades are rapidly eroding. The barrier between sophisticated state-sponsored capabilities and commodity cybercrime continues to collapse. The attack surface is expanding and mutating faster than traditional security architectures can adapt.</p><p>For security leaders, this creates an urgent imperative: develop strategic agility &#8211; the institutional capacity to anticipate deception, operate through compromise, and respond at machine tempo. This isn't simply a technology upgrade; it's an organizational transformation that must occur at the pace of AI advancement rather than traditional enterprise change cycles.</p><p>The organizations that will thrive are those that treat security not as a fixed state but as a dynamic capability &#8211; one that evolves alongside both the threats they face and the AI technologies that enable those threats.</p><p><strong>In the next article of this series</strong>, we'll explore how defenders are rising to meet these challenges &#8211; building AI-augmented security operations, implementing adversarial machine learning countermeasures, and developing the human-AI collaboration models necessary to counter threats that think for themselves.</p><p>The AI arms race is already underway.
The question now is not whether these threats will materialise, but how quickly organizations can develop the adaptive capabilities necessary to defend against them.</p><p><em>What aspects of AI-driven offensive capabilities concern you most? How is your organization preparing for threats that evolve faster than traditional security measures? Share your perspectives in the comments below.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><a href="https://cloud.google.com/blog/topics/threat-intelligence/time-to-exploit-trends-2023">https://cloud.google.com/blog/topics/threat-intelligence/time-to-exploit-trends-2023</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p><a href="https://www.paloaltonetworks.com/engage/unit42-2025-global-incident-response-report">2025 Global Incident Response Report</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Demystifying RCE Vulnerabilities in LLM-Integrated Apps, Tong Liu et al., 2024, <a href="https://doi.org/10.1145/3658644.3690338">https://doi.org/10.1145/3658644.3690338</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Hardening behavioural classifiers against polymorphic malware: An ensemble approach based on minority report, Lara Mauri and Ernesto Damiani, 2024, page 15, <a
href="https://doi.org/10.1016/j.ins.2024.121499">https://doi.org/10.1016/j.ins.2024.121499</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Evolutional GAN for Ransomware Evasion, Daniel Commey, et al. 2023: <a href="https://doi.org/10.1109/LCN58197.2023.10223320">https://doi.org/10.1109/LCN58197.2023.10223320</a></p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[From Compliance Burden to Cybersecurity Edge]]></title><description><![CDATA[Leveraging AI Regulations as a Differentiator]]></description><link>https://www.tekk-talk.com/p/from-compliance-burden-to-cybersecurity</link><guid isPermaLink="false">https://www.tekk-talk.com/p/from-compliance-burden-to-cybersecurity</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Tue, 25 Mar 2025 20:56:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!b4AJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc4b4b6f-907b-4c87-bc34-cb69e3ec4f75_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!b4AJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc4b4b6f-907b-4c87-bc34-cb69e3ec4f75_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!b4AJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc4b4b6f-907b-4c87-bc34-cb69e3ec4f75_1024x608.png 424w, 
https://substackcdn.com/image/fetch/$s_!b4AJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc4b4b6f-907b-4c87-bc34-cb69e3ec4f75_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!b4AJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc4b4b6f-907b-4c87-bc34-cb69e3ec4f75_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!b4AJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc4b4b6f-907b-4c87-bc34-cb69e3ec4f75_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!b4AJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc4b4b6f-907b-4c87-bc34-cb69e3ec4f75_1024x608.png" width="1024" height="608" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dc4b4b6f-907b-4c87-bc34-cb69e3ec4f75_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!b4AJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc4b4b6f-907b-4c87-bc34-cb69e3ec4f75_1024x608.png 424w, 
https://substackcdn.com/image/fetch/$s_!b4AJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc4b4b6f-907b-4c87-bc34-cb69e3ec4f75_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!b4AJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc4b4b6f-907b-4c87-bc34-cb69e3ec4f75_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!b4AJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc4b4b6f-907b-4c87-bc34-cb69e3ec4f75_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Compliance fuels cybersecurity's strategic edge</figcaption></figure></div><p>&#8216;Compliance&#8217; and &#8216;competitive advantage&#8217; rarely appear in the same sentence for cybersecurity leaders. Yet a growing number of organizations are transforming regulatory burdens into strategic weapons. As <a href="https://www.tekk-talk.com/p/ai-regulations-around-the-world">global AI regulatory frameworks</a> become increasingly complex and divergent, the choices for organizations are limited: treat compliance as a box-ticking exercise or embrace it as a strategic lever that can bolster security operations, amplify trust, and create tangible competitive advantages. I make no secret that I am about to advocate the latter.</p><p>Far from being just another overhead cost, regulatory compliance offers cybersecurity leadership a unique chance to strengthen their security posture while proactively managing evolving risks. Regulations that demand explainability, transparency, and fairness may initially seem restrictive, but these mandates can drive clarity, sharpen accountability, and enhance resilience across the organization.</p><p>Here I will attempt to unravel how visionary organizations are already harnessing compliance requirements to build stronger, more adaptive security frameworks. By transforming regulatory mandates into operational opportunities, security leaders can leverage compliance as a business enabler, not just a legal obligation. In the sections ahead, I&#8217;ll explain how strategic compliance can become your organization&#8217;s cybersecurity edge &#8211; turning complexity into capability, risk into resilience, and obligation into advantage.</p><p>The stakes are tangible: organizations that treat compliance as a mere checkbox exercise risk not only regulatory penalties but also missed opportunities to strengthen their security posture. 
Those who approach regulation strategically, however, can transform compliance from a burden into a catalyst for trust, transparency, and competitive advantage.</p><h1>Navigating the Compliance Maze &#8211; 3 Key Challenges Facing Security Leaders</h1><p>Deploying AI-powered cybersecurity across global jurisdictions means grappling with regulations as complex and dynamic as the threats themselves. While every region introduces its own unique blend of rules &#8211; whether the EU&#8217;s strict AI Act, America&#8217;s fragmented sectoral regulations, or China&#8217;s state-driven AI governance &#8211; several key compliance challenges universally confront cybersecurity teams. Understanding these obstacles is the critical first step in transforming regulatory compliance from a perceived burden into a strategic asset. </p><p>Three core issues &#8211; Explainability, Cross-border Data Flows, and Third-party AI Risk Management &#8211; consistently emerge as key themes across these different regulatory frameworks. Not only are they foundational to achieving compliance in multiple jurisdictions, but they&#8217;re also uniquely impactful in shaping the operational effectiveness and resilience of AI-driven cybersecurity programs. Failing to address any one of these challenges can expose organizations to substantial regulatory, reputational, and operational risks &#8211; making them essential focus areas for cybersecurity leaders aiming for strategic compliance.</p><h2>Challenge 1: The Explainability Imperative</h2><p>As regulatory demands for transparency rise, AI systems must now explain their decisions clearly, consistently, and in human-understandable terms &#8211; no small feat for complex neural networks trained to detect subtle, high-dimensional security anomalies. Explainability is no longer just a technical nice-to-have; it&#8217;s a compliance obligation and a business risk. 
For security operations, this introduces tough trade-offs between model performance, intellectual property protection, and regulatory accountability. The real-world impact: teams must generate detailed audit trails and decision rationales for AI-driven alerts, adding operational overhead and potentially slowing response times, especially where explainability was not considered during system design.</p><h2>Challenge 2: The Cross-Border Data Dilemma</h2><p>AI-driven cybersecurity relies on large-scale, diverse data sources &#8211; logs, behaviours, threat intelligence &#8211; often collected across global infrastructure. But as data sovereignty laws like the EU&#8217;s GDPR and China&#8217;s PIPL tighten cross-border data controls, organizations face a critical tension: global visibility versus local compliance.<br>Efforts to regionalize data to meet legal requirements can fragment security architectures, limit threat model accuracy, and reduce detection precision. Security teams must now design architectures that reconcile regulatory fragmentation with the operational need for unified situational awareness. This is an increasingly complex balancing act.</p><h2>Challenge 3: Third-Party AI Risk Management</h2><p>Using third-party AI platforms no longer limits liability; it extends it. Regulators increasingly hold organizations accountable for the behaviour of external AI systems, even those they don&#8217;t build or directly control. This raises the stakes for cybersecurity and risk teams, who must now conduct rigorous due diligence across the entire vendor lifecycle. From data sourcing and model validation to explainability, monitoring, and incident response readiness, the governance burden has shifted sharply to the end user. 
As AI supply chains expand &#8211; often including fourth-party dependencies &#8211; managing third-party risk becomes not only more complex, but more critical to maintaining both compliance and operational integrity.</p><p>Each of these challenges brings real operational pressure &#8211; added complexity, higher costs, and new layers of accountability. But when approached strategically, these same regulatory demands can become powerful levers for innovation, resilience, and competitive advantage. In the next section, we&#8217;ll explore how forward-thinking organizations are doing exactly that &#8211; transforming compliance constraints into catalysts for stronger, smarter cybersecurity.</p><h1>From Roadblocks to Runways &#8211; Turning Compliance Challenges into Operational Opportunities</h1><p>While compliance obligations tend to be viewed as obstacles to creativity and operational efficacy, visionary leaders should see them as powerful catalysts for operational innovation. Each regulatory challenge outlined previously &#8211; whether it&#8217;s model explainability, cross-border data flows, or third-party risk management &#8211; offers unique opportunities to enhance cybersecurity capabilities and resilience, turning regulatory burdens into strategic differentiators.</p><p>Rather than passively adapting to regulatory demands, organizations should leverage these challenges as moments for strategic improvement. In the following cases, we&#8217;ll demonstrate how leading cybersecurity teams have successfully turned what initially seemed restrictive into tangible operational strengths. 
These real-world examples underscore how a thoughtful approach to compliance can drive innovation, improve efficiency, and ultimately strengthen security posture.</p><h2>The Explainability Imperative: From Burden to Advantage (challenge 1)</h2><p>With the EU&#8217;s AI Act coming into force, a European financial services firm faced a daunting new requirement: providing detailed explanations for every AI-driven security alert. The security operations team, already overwhelmed with thousands of daily alerts, feared explainability would drastically slow operations. Initially, the operations team estimated a 40% increase in analyst workload and significant delays in incident response &#8211; risks and costs they couldn&#8217;t afford.</p><p><strong>The Strategic Pivot: </strong>Rather than layering explanations onto existing systems, the team embraced &#8220;Compliance by Design,&#8221; integrating explainability from the ground up as a non-functional requirement. They engineered a hybrid AI architecture, combining deep-learning models (for detecting subtle threats) with transparent decision-tree models (for generating clear, audit-friendly explanations), validated through several stages of model governance and risk review.</p><p>&#8220;We created a system where one part detects threats and another simultaneously explains why they matter,&#8221; the SOC Manager explained. 
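<p>To make the &#8220;detect and explain&#8221; split concrete, here is a minimal sketch of what a standardized explanation record could look like. This is an illustrative mock-up only &#8211; the field names, values, and the <code>AlertExplanation</code> class are hypothetical, not the firm&#8217;s actual schema:</p>

```python
from dataclasses import dataclass, asdict

@dataclass
class AlertExplanation:
    """Audit-friendly rationale attached to one AI-driven security alert."""
    alert_id: str
    anomaly: str            # what deviated from normal
    baseline: str           # what normal looks like for this entity
    key_factors: list[str]  # top features that drove the model's decision
    confidence: float       # model confidence score, 0.0-1.0

    def to_audit_record(self) -> dict:
        # Serializable form suitable for audit trails and regulator review
        return asdict(self)

# Hypothetical alert produced by the detection model
exp = AlertExplanation(
    alert_id="A-1042",
    anomaly="Login volume 12x above baseline from an unfamiliar network",
    baseline="~30 logins/hour from known corporate IP ranges",
    key_factors=["geo_velocity", "asn_reputation", "time_of_day"],
    confidence=0.91,
)
record = exp.to_audit_record()
```

<p>Attaching a record like this to every alert is one way the transparent &#8220;explainer&#8221; side of a hybrid architecture could feed both analysts and auditors from the same source.</p>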
The AI now generated standardized explanations outlining anomalies, baseline comparisons, key factors, and confidence metrics, supported by intuitive visualizations understandable by junior analysts and auditors.</p><p><strong>The Results: </strong>The compliance&#8211;driven overhaul produced unexpected operational benefits:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tFwb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f64e739-5472-40cb-a7f5-dfad8f2c06d3_1057x322.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tFwb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f64e739-5472-40cb-a7f5-dfad8f2c06d3_1057x322.png 424w, https://substackcdn.com/image/fetch/$s_!tFwb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f64e739-5472-40cb-a7f5-dfad8f2c06d3_1057x322.png 848w, https://substackcdn.com/image/fetch/$s_!tFwb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f64e739-5472-40cb-a7f5-dfad8f2c06d3_1057x322.png 1272w, https://substackcdn.com/image/fetch/$s_!tFwb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f64e739-5472-40cb-a7f5-dfad8f2c06d3_1057x322.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tFwb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f64e739-5472-40cb-a7f5-dfad8f2c06d3_1057x322.png" width="1057" height="322" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8f64e739-5472-40cb-a7f5-dfad8f2c06d3_1057x322.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:322,&quot;width&quot;:1057,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:37650,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://tekk.substack.com/i/159857397?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f64e739-5472-40cb-a7f5-dfad8f2c06d3_1057x322.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!tFwb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f64e739-5472-40cb-a7f5-dfad8f2c06d3_1057x322.png 424w, https://substackcdn.com/image/fetch/$s_!tFwb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f64e739-5472-40cb-a7f5-dfad8f2c06d3_1057x322.png 848w, https://substackcdn.com/image/fetch/$s_!tFwb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f64e739-5472-40cb-a7f5-dfad8f2c06d3_1057x322.png 1272w, https://substackcdn.com/image/fetch/$s_!tFwb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f64e739-5472-40cb-a7f5-dfad8f2c06d3_1057x322.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Crucially, the relationship with regulators shifted from adversarial to collaborative. Regulators praised the firm&#8217;s explainability approach as an industry benchmark.</p><p><strong>Broader Insight: </strong>The firm&#8217;s experience highlights a crucial insight: regulatory mandates, initially perceived as operational burdens, can serve as powerful catalysts for improved performance. 
By reframing explainability as a core strength rather than a compliance constraint, the company enhanced both regulatory standing and operational effectiveness, demonstrating that compliance and performance are not opposing forces but strategic allies in building robust, trustworthy AI security systems.</p><h2>Data Sovereignty: From Fragmentation to Global Advantage (challenge 2)</h2><p>In another case, a global bank had long relied on a centralized cybersecurity data lake to power AI-driven cyber analytics, enabling rapid threat detection across continents. But as strict data sovereignty laws started to emerge &#8211; from the EU&#8217;s GDPR to China&#8217;s stringent data regulations &#8211; the centralized data lake model faced serious regulatory and legal challenges. Initially, the cyber team feared that forced regional data fragmentation would severely weaken their threat detection capability, even as internal compliance teams began questioning the centralized SOC architecture.</p><p><strong>The Strategic Pivot: </strong>Instead of resisting data sovereignty, the bank embraced it &#8211; redesigning its architecture into what it internally termed a <em>&#8220;Sovereign Security Mesh.&#8221;</em> This novel concept fused the principles of security mesh architecture with the realities of regional data sovereignty. The approach aimed to preserve global threat visibility while respecting local compliance mandates, creating a federated but collaborative security model. They established regional security hubs, each independently analysing data locally to comply with regulations. These hubs securely shared anonymized, aggregated threat insights with a global coordination platform, balancing local compliance with global threat visibility. 
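<p>The federated pattern &#8211; raw data stays regional, only anonymized aggregates travel &#8211; can be illustrated with a toy sketch. The region names, event fields, and functions below are invented for illustration; the bank&#8217;s actual design is not described in that level of detail:</p>

```python
from collections import Counter

def regional_summary(region: str, events: list[dict]) -> dict:
    """Aggregate a hub's local events into shareable, anonymized counts.

    Raw events (with IPs, user IDs, etc.) never leave the region;
    only technique counts and totals are exported upstream.
    """
    techniques = Counter(e["technique"] for e in events)
    return {"region": region, "total": len(events), "techniques": dict(techniques)}

def global_view(summaries: list[dict]) -> Counter:
    """Merge regional summaries into cross-region technique counts."""
    merged: Counter = Counter()
    for s in summaries:
        merged.update(s["techniques"])
    return merged

# Each hub summarizes locally; only the summaries are pooled globally
eu = regional_summary("EU", [{"technique": "phishing"}, {"technique": "phishing"}])
apac = regional_summary("APAC", [{"technique": "credential_stuffing"}])
combined = global_view([eu, apac])
```

<p>The global platform can see that phishing is spiking across regions without ever receiving a single raw log line &#8211; the essence of balancing sovereignty with visibility.</p>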
As their Group Enterprise Architect summarized: "We created a federation of regional SOCs, each compliant yet collaboratively strong."</p><p><strong>The Results: </strong>This compliance&#8211;driven redesign unexpectedly improved security and operational agility:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YHSN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e05f3ba-b916-47b6-a961-f2a09790f07d_1056x322.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!YHSN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e05f3ba-b916-47b6-a961-f2a09790f07d_1056x322.png 424w, https://substackcdn.com/image/fetch/$s_!YHSN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e05f3ba-b916-47b6-a961-f2a09790f07d_1056x322.png 848w, https://substackcdn.com/image/fetch/$s_!YHSN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e05f3ba-b916-47b6-a961-f2a09790f07d_1056x322.png 1272w, https://substackcdn.com/image/fetch/$s_!YHSN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e05f3ba-b916-47b6-a961-f2a09790f07d_1056x322.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!YHSN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e05f3ba-b916-47b6-a961-f2a09790f07d_1056x322.png" width="1056" height="322" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1e05f3ba-b916-47b6-a961-f2a09790f07d_1056x322.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:322,&quot;width&quot;:1056,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:41452,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://tekk.substack.com/i/159857397?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e05f3ba-b916-47b6-a961-f2a09790f07d_1056x322.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!YHSN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e05f3ba-b916-47b6-a961-f2a09790f07d_1056x322.png 424w, https://substackcdn.com/image/fetch/$s_!YHSN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e05f3ba-b916-47b6-a961-f2a09790f07d_1056x322.png 848w, https://substackcdn.com/image/fetch/$s_!YHSN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e05f3ba-b916-47b6-a961-f2a09790f07d_1056x322.png 1272w, https://substackcdn.com/image/fetch/$s_!YHSN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e05f3ba-b916-47b6-a961-f2a09790f07d_1056x322.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Moreover, localized threat modelling revealed nuanced regional threat patterns previously hidden in global datasets, significantly enhancing overall detection precision.</p><p><strong>Broader Insight</strong>: The bank&#8217;s experience underscores a powerful truth: when approached strategically, compliance challenges can drive architectural innovation. 
By turning regulatory constraints into a catalyst for distributed security excellence, the bank not only addressed compliance obligations &#8211; it built a more resilient, adaptive global cybersecurity framework for today&#8217;s fragmented regulatory landscape.</p><p>Most compelling is how this approach evolves the principles of Cybersecurity Mesh Architecture (CSMA) &#8211; recognized by frameworks such as those from Gartner and NIST, which emphasize distributed controls, interoperability, and identity&#8211;centric trust zones. Introducing data sovereignty as a central design constraint pushes this paradigm further, adapting mesh architecture to meet the rising demands of data localization.</p><p>While <em>&#8220;Sovereign Security Mesh&#8221;</em> is not yet an industry standard, it represents an emerging architectural pattern. As more organizations pursue hybrid models that reconcile global visibility with regional compliance, this approach may well evolve into a reference model for cybersecurity in regulated environments.</p><h2>Third-Party AI Governance: From Liability Risk to Strategic Advantage (challenge 3)</h2><p>A European bank operating under a bancassurance model had partnered with third&#8211;party insurers to offer embedded insurance products across its retail channels. 
While this allowed the bank to scale its insurance offerings without directly underwriting risk, it also meant relying on external partners &#8211; and their vendors (creating 4<sup>th</sup> party dependencies) &#8211; for core services like policy issuance, claims triage, and fraud detection.</p><p>In one such partnership, the bank worked with an insurer using an AI&#8211;assisted SaaS platform to automate claims assessment and fraud detection &#8211; part of a growing trend across the insurance sector. These platforms, typically designed and operated by external vendors, offer efficiency and scale but introduce significant governance challenges around transparency, explainability, and accountability.</p><p>As regulatory scrutiny intensified, the bank recognized a critical blind spot: under GDPR and forthcoming AI regulations, organizations remain accountable for outcomes that affect customers &#8211; even when those decisions are made by third&#8211;party algorithms. In the eyes of the customer, it was the bank that bore the responsibility for unfair or opaque claims decisions &#8211; not the insurer, and certainly not the vendor behind the platform.</p><p>While this legal position is well established, it is often underappreciated in practice &#8211; especially as AI introduces new layers of complexity and distributed accountability. Recognizing this risk exposure, the bank shifted from a transactional vendor model to a strategic AI governance framework designed to ensure full oversight, auditability, and compliance across all external AI tools.</p><p><strong>The Strategic Pivot</strong>: Acknowledging that accountability could not be outsourced, the bank established a cross&#8211;functional AI Governance Committee that brought together cybersecurity, compliance, legal, and procurement leaders. 
This wasn&#8217;t merely a policy update &#8211; it represented a fundamental overhaul of the organization&#8217;s Third&#8211;Party Management (TPM) model.</p><p>Too often, TPM and procurement operate in operational silos, disconnected from cybersecurity uplift efforts. As a result, general cyber risks &#8211; and AI&#8211;specific risks in particular &#8211; go unaddressed in vendor onboarding and lifecycle oversight. This governance pivot bridged that gap.</p><p>The committee introduced an advanced AI Vendor Assessment Framework &#8211; but the true innovation lay in execution. Rather than leaving evaluation to procurement alone, the bank embedded cybersecurity and AI experts directly into the vendor assessment lifecycle. Third&#8211;party platforms were now rigorously vetted across dimensions such as data governance, algorithmic transparency and explainability (AIX), model validation, bias mitigation, data loss prevention (DLP), and incident response readiness.</p><p>To operationalize this at scale, the team combined internal SME capability&#8211;building with targeted automation &#8211; leveraging RegTech tools to continuously monitor vendor systems, automate reassessments, and flag compliance drift in real time.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>Ownership of the framework remained with the AI Governance Committee, but implementation was distributed: cybersecurity led technical evaluations, legal enforced contractual safeguards, and procurement aligned onboarding and renewal processes with governance standards. 
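<p>One minimal way to operationalize an assessment framework like the one described above is a per-dimension scoring gate. The dimensions below follow the article, but the 0&#8211;5 scale, threshold, and <code>assess_vendor</code> function are assumptions for illustration, not the bank&#8217;s actual framework:</p>

```python
# Assessment dimensions drawn from the vendor framework described above.
DIMENSIONS = [
    "data_governance", "explainability", "model_validation",
    "bias_mitigation", "dlp", "incident_response",
]
PASS_THRESHOLD = 3  # assumed minimum score (0-5 scale) per dimension

def assess_vendor(scores: dict[str, int]) -> dict:
    """Fail the vendor if any dimension scores below the threshold."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"unscored dimensions: {missing}")
    failing = [d for d in DIMENSIONS if scores[d] < PASS_THRESHOLD]
    return {"passed": not failing, "failing_dimensions": failing}

# Hypothetical vendor: strong overall, but weak on explainability
result = assess_vendor({
    "data_governance": 4, "explainability": 2, "model_validation": 4,
    "bias_mitigation": 3, "dlp": 5, "incident_response": 4,
})
```

<p>A gate like this also makes compliance drift concrete: a periodic reassessment that drops any dimension below threshold flags the vendor automatically.</p>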
This cross&#8211;functional integration transformed third&#8211;party risk management from a static procurement checklist into a dynamic, security&#8211;first discipline.</p><p>Explainability became especially critical &#8211; not just to satisfy regulators, but to enable internal teams to audit, understand, and defend AI&#8211;driven decisions made outside the organization. In a compliance environment where opacity equates to risk, AIX became a foundational requirement for defensibility and trust.</p><p><em>&#8220;This was a cultural shift as much as a technical one &#8211; transparency became our minimum standard for every AI system, internal or external.&#8221; &#8211; Chair, AI Governance Committee</em></p><p><strong>The Results</strong>: These improvements weren&#8217;t incidental &#8211; they were the direct result of embedding cybersecurity and AI expertise into third&#8211;party assessments, automating compliance oversight through RegTech tools, and enforcing standardized contractual safeguards. 
This proactive governance transformation yielded immediate and measurable business benefits:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KhD2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa95fa32e-0a64-479d-8fa9-efed18ba0814_1057x478.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KhD2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa95fa32e-0a64-479d-8fa9-efed18ba0814_1057x478.png 424w, https://substackcdn.com/image/fetch/$s_!KhD2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa95fa32e-0a64-479d-8fa9-efed18ba0814_1057x478.png 848w, https://substackcdn.com/image/fetch/$s_!KhD2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa95fa32e-0a64-479d-8fa9-efed18ba0814_1057x478.png 1272w, https://substackcdn.com/image/fetch/$s_!KhD2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa95fa32e-0a64-479d-8fa9-efed18ba0814_1057x478.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KhD2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa95fa32e-0a64-479d-8fa9-efed18ba0814_1057x478.png" width="1057" height="478" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a95fa32e-0a64-479d-8fa9-efed18ba0814_1057x478.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:478,&quot;width&quot;:1057,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:62918,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://tekk.substack.com/i/159857397?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa95fa32e-0a64-479d-8fa9-efed18ba0814_1057x478.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KhD2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa95fa32e-0a64-479d-8fa9-efed18ba0814_1057x478.png 424w, https://substackcdn.com/image/fetch/$s_!KhD2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa95fa32e-0a64-479d-8fa9-efed18ba0814_1057x478.png 848w, https://substackcdn.com/image/fetch/$s_!KhD2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa95fa32e-0a64-479d-8fa9-efed18ba0814_1057x478.png 1272w, https://substackcdn.com/image/fetch/$s_!KhD2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa95fa32e-0a64-479d-8fa9-efed18ba0814_1057x478.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>While vendor reluctance was real at first, the company&#8217;s clear standards, contractual enforcement, and cross&#8211;functional engagement gradually overcame resistance. Over time, vendors recognized the strategic value of aligning with a forward&#8211;thinking client and began integrating these standards into their own governance practices, strengthening security on both sides of the partnership. 
The result points to a broader truth: when approached strategically, governance uplift benefits everyone in the supply chain.</p><p><strong>Broader Insight: </strong>This case highlights a growing imperative in the financial services and insurance sectors, where third&#8211; and fourth&#8211;party AI SaaS solutions are increasingly being deployed: strategic third&#8211;party AI governance is no longer a regulatory formality &#8211; it&#8217;s a business&#8211;critical capability. As banks and insurers rely more heavily on external platforms to deliver AI&#8211;enabled services, the line of accountability remains firmly with the organization that faces the customer.</p><p>By moving from a transactional vendor model to a strategic governance approach, the bank built a more secure, transparent, and resilient ecosystem &#8211; one where risk is distributed but accountability remains clear. In doing so, it transformed regulatory pressure into a foundation for trust, defensibility, and long&#8211;term competitive advantage.</p><h1>Beyond Compliance: Strategic Imperatives for Business Leadership</h1><p>When viewed through a strategic lens, AI compliance transcends mere regulatory adherence to become a catalyst for fundamental business transformation. The case studies we've explored reveal several powerful strategic insights that cross functional boundaries and offer lasting competitive advantage.</p><h2>Trust as Strategic Currency in a Digital Economy</h2><p>In an economy where data breaches and AI mishaps can destroy reputations and market value in an instant, trust has evolved from a soft value into perhaps an organization's most valuable strategic asset. Our case studies demonstrate how companies that proactively embed compliance into their operations &#8211; whether through explainability, fairness testing, or rigorous third&#8211;party governance &#8211; build exceptional levels of stakeholder trust.
This trust manifests as tangible business value: accelerated customer acquisition, premium pricing power, stronger investor confidence, and enhanced talent attraction. Far from being merely a compliance exercise, transparent and ethical AI deployment represents a foundational strategy for market leadership in an increasingly sceptical digital marketplace.</p><h2>Organizational Resilience Through Integrated Governance</h2><p>The organizations in our case studies that approached compliance strategically didn't simply bolt on governance processes. They fundamentally rewired their operational DNA. By embedding governance at every level, from AI development pipelines to vendor management frameworks, these companies developed an intrinsic organizational resilience. This compliance&#8211;integrated approach allows the organization to rapidly adapt to changing regulatory environments while simultaneously reinforcing business continuity, reducing operational vulnerabilities, and enhancing decision&#8211;making quality. The strategic value isn't just in meeting today's compliance requirements, but in building inherent adaptability, which itself becomes a new source of competitive advantage.</p><h2>Value Creation Through Ethical AI Leadership</h2><p>Perhaps most significantly, our case studies reveal how ethical AI leadership actively creates business value rather than merely preserving it. Whether through the fintech's enhanced investment valuation or the healthcare provider's reduced liability costs, proactive compliance consistently unlocks measurable financial returns. These companies demonstrate that ethical AI isn't a cost centre but a value generator: one that creates competitive differentiation, drives innovation through constraint, attracts premium partnerships, and opens markets that remain closed to less trustworthy competitors.
In essence, ethical AI leadership transforms compliance from a liability into a strategic asset class with compounding returns.</p><h1>Making Strategic Compliance a Reality &#8211; Practical Implementation Strategies</h1><p>To transform the strategic imperatives of trust, resilience, and value creation into organizational reality, cybersecurity leaders need practical implementation approaches. The following strategies provide a roadmap for embedding compliance into your operations in ways that directly support these strategic objectives.</p><p>Understanding the strategic benefits of proactive AI compliance is only the first step; realizing them requires deliberate execution. Below, I outline four proven approaches that leading organizations have successfully adopted &#8211; see these as starting points rather than an exhaustive list.</p><p>Applied together, these strategies embed compliance deeply into the organization's cybersecurity DNA rather than leaving it as an afterthought.</p><h2>1. Adopt "Compliance by Design" from Day One</h2><p>Don&#8217;t retrofit compliance into existing systems; embed it into the AI services lifecycle from the outset. This means involving cybersecurity, compliance, legal, and data science teams together at the initial stages of AI model development.
Clearly define transparency, explainability, and fairness standards early, and rigorously enforce them throughout the AI design, training, and deployment phases.</p><p><strong>Actionable Steps:</strong></p><ul><li><p>Develop an AI compliance checklist to guide early-stage model development.</p></li><li><p>Require that all new AI cybersecurity initiatives include explicit compliance impact assessments before approval.</p></li><li><p>Establish structured cross-functional collaboration from the outset&#8212;engaging stakeholders across cybersecurity, engineering, procurement, legal, and compliance to ensure AI systems are designed with shared accountability and holistic oversight.</p></li></ul><h2>2. Establish Cross-Functional AI Governance Committees</h2><p>Break down operational silos by creating a standing governance body specifically dedicated to AI compliance. Include stakeholders from cybersecurity operations, compliance, legal, data governance, procurement, and risk management. These committees become the backbone for ongoing vendor evaluations, policy development, compliance oversight, and quick adaptation to regulatory changes.</p><p><strong>Actionable Steps:</strong></p><ul><li><p>Define clear governance objectives, roles, and decision-making authority in the committee charter.</p></li><li><p>Schedule regular committee sessions focused on AI risk exposure, vendor oversight, and regulatory alignment.</p></li><li><p>Integrate the committee's outputs into broader enterprise risk and compliance workflows to ensure AI-related risks are addressed holistically &#8211; not in isolation.</p></li></ul><h2>3. Leverage Automation and RegTech Tools</h2><p>Meeting regulatory demands often creates significant administrative overhead&#8212;but AI itself can help close that gap.
Modern Regulatory Technology (RegTech) solutions now offer practical ways to streamline compliance documentation, monitor third-party risk, validate model behavior, and detect emerging compliance issues in real time.</p><p><strong>Actionable Steps:</strong></p><ul><li><p>Pilot automated compliance documentation and audit tools in high-impact areas such as SOC alerts, third-party vendor monitoring, and AI explainability reporting.</p></li><li><p>Integrate real-time compliance controls into AI model testing and deployment pipelines&#8212;ensuring issues are flagged before reaching production.</p></li><li><p>Evaluate RegTech platforms that support ongoing monitoring, governance dashboards, and policy mapping aligned to frameworks such as GDPR, NIST AI RMF, and the EU AI Act.</p></li></ul><h2>4. Proactively Engage with Industry Standards and Regulatory Bodies</h2><p>Rather than passively responding to regulatory change, leading organizations shape the compliance landscape through proactive engagement. Participating in standards-setting bodies (e.g., NIST, ISO, and sector-specific forums) allows companies to anticipate emerging requirements, influence policy direction, and align industry guidance with operational realities.</p><p><strong>Actionable Steps:</strong></p><ul><li><p>Assign internal experts to participate in relevant working groups, advisory panels, or regulatory consultations on AI governance and cybersecurity.</p></li><li><p>Contribute to public dialogue by publishing insights, case studies, or position papers that advocate for practical, risk-based approaches to AI compliance.</p></li><li><p>Track and map emerging regulatory frameworks (e.g., EU AI Act, DORA, NIST AI RMF) to organizational capabilities&#8212;and ensure the necessary skills and ownership are distributed across cybersecurity, compliance, legal, and engineering functions.
This positions your teams to lead, not lag, as regulations evolve.</p></li></ul><h1>Leading from the Front &#8211; Compliance as Competitive Advantage</h1><p>Viewing compliance as a mere obligation is no longer sustainable. Organizations that approach it strategically are unlocking measurable value across trust, agility, and risk resilience. As demonstrated throughout this article, companies that proactively embed compliance into their cybersecurity practices can transform regulatory burdens into strategic advantages.</p><p><a href="https://www.tekk-talk.com/p/the-new-battlefield-ai-in-cyber-attacks">Visionary cybersecurity leadership</a> recognizes that compliance isn&#8217;t a distraction &#8211; it&#8217;s a lever. Since opting out of regulation isn&#8217;t an option, the smart move is to adopt a compliance-first mindset &#8211; supported by strategies like Compliance by Design, cross-functional governance, automation, and proactive engagement with evolving standards. These aren't just best practices &#8211; they're enablers of sustained competitive advantage in a regulatory environment that will only become more complex.</p><p>The message is clear: compliance doesn&#8217;t have to slow your business down. It can help drive it forward &#8211; enabling stronger security, deeper stakeholder trust, and greater resilience against both cyber threats and regulatory uncertainty.</p><p>How is your organization approaching AI compliance today? Are you still viewing it as a checkbox exercise, or are you seizing it as a strategic opportunity? Let&#8217;s continue the conversation &#8211; because in a world where innovation is constant, leadership must come from the front.</p><p>In the next article of this series, we&#8217;ll examine why this strategic approach to compliance becomes even more critical when confronting the next frontier: adversarial AI.
As machine learning systems are weaponized against the very defences designed to protect us, only those organizations that have embedded compliance into their operational DNA will be able to detect, respond to, and adapt to these evolving threats &#8211; while staying on the right side of the regulatory line.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>While specific solutions used remain confidential, there are several widely adopted RegTech platforms that support similar functions, including OneTrust (third&#8211;party risk and AI governance), LogicGate (customizable GRC workflows for AI systems), and TrustArc (automated privacy and AI impact assessments). These tools typically support continuous monitoring, explainability documentation, model risk workflows, and vendor lifecycle compliance aligned with frameworks such as GDPR, NIST AI RMF, and the EU AI Act.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[AI Regulation & Compliance: Mapping the Global Landscape]]></title><description><![CDATA[In the opening article of this series, I highlighted how AI is transforming cybersecurity at unprecedented speed.]]></description><link>https://www.tekk-talk.com/p/ai-regulations-around-the-world</link><guid isPermaLink="false">https://www.tekk-talk.com/p/ai-regulations-around-the-world</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Wed, 12 Mar 2025 20:30:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!bZv2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83c35ec2-5866-4d09-80d4-96859559cda0_1792x1024.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!bZv2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83c35ec2-5866-4d09-80d4-96859559cda0_1792x1024.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bZv2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83c35ec2-5866-4d09-80d4-96859559cda0_1792x1024.webp 424w, https://substackcdn.com/image/fetch/$s_!bZv2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83c35ec2-5866-4d09-80d4-96859559cda0_1792x1024.webp 848w, https://substackcdn.com/image/fetch/$s_!bZv2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83c35ec2-5866-4d09-80d4-96859559cda0_1792x1024.webp 1272w, https://substackcdn.com/image/fetch/$s_!bZv2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83c35ec2-5866-4d09-80d4-96859559cda0_1792x1024.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bZv2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83c35ec2-5866-4d09-80d4-96859559cda0_1792x1024.webp" width="1456" height="832" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/83c35ec2-5866-4d09-80d4-96859559cda0_1792x1024.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:573718,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://tekk.substack.com/i/158917849?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83c35ec2-5866-4d09-80d4-96859559cda0_1792x1024.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bZv2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83c35ec2-5866-4d09-80d4-96859559cda0_1792x1024.webp 424w, https://substackcdn.com/image/fetch/$s_!bZv2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83c35ec2-5866-4d09-80d4-96859559cda0_1792x1024.webp 848w, https://substackcdn.com/image/fetch/$s_!bZv2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83c35ec2-5866-4d09-80d4-96859559cda0_1792x1024.webp 1272w, https://substackcdn.com/image/fetch/$s_!bZv2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83c35ec2-5866-4d09-80d4-96859559cda0_1792x1024.webp 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>In the opening article of this series, I highlighted how AI is transforming cybersecurity at unprecedented speed. Beyond its impact on tools and tactics, a critical new battlefield has emerged: regulation and compliance. For financial services professionals, this represents yet more regulatory frameworks to monitor, while organizations in traditionally unregulated sectors face GDPR-like challenges. Security teams that once concentrated solely on technical defences must now navigate an evolving maze of global rules - spanning model risk, data protection, and product liability - regardless of whether AI powers customer-facing chatbots or backend decision support systems.</p><p>This second instalment examines the diverse regulatory approaches emerging across major global jurisdictions.
From the EU's comprehensive framework to the US's sectoral patchwork, from China's state-directed control to the varied models across Asia, cybersecurity teams face a complex tapestry of rules that vary dramatically by region. Understanding these different regulatory philosophies is the first step toward developing effective compliance strategies for AI-powered security tools.</p><p>The stakes are clear: inadequate compliance risks penalties and reputational damage, while treating regulation as a mere checklist exercise prevents organizations from capturing AI's full benefits. By mapping the global regulatory landscape, we can better understand the challenges and opportunities that lie ahead for cybersecurity teams navigating this new frontier.</p><h2>Global Regulatory Frameworks</h2><h4>EU AI Act in Action</h4><p>The EU AI Act<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>, perhaps the most comprehensive piece of legislation in this space, introduces a new level of scrutiny for cybersecurity AI tools, classifying many as &#8220;high-risk&#8221;. This means strict documentation, oversight, and human review requirements - a direct challenge to AI-driven, real-time security models. For instance, organizations must conduct detailed audits of training data, model performance, and real-world outcomes - an obligation that can clash with the &#8220;black box&#8221; nature of many sophisticated AI systems.</p><p>Specifically, the Act mandates human oversight at critical junctures of the model lifecycle. This requirement can be particularly challenging when dealing with anomaly detection or threat-hunting models that rely on autonomous, real-time responses, and becomes even more difficult when models are developed by third parties or bought as SaaS solutions.
As a result, security and risk teams must build in review processes, often slowing down workflows designed for speed.</p><p>The EU AI Act establishes a clear line in the sand with its "Unacceptable Risk" category - AI models/systems deemed so risky they're outright prohibited<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>. These systems are considered fundamentally incompatible with EU values and human rights protections.</p><p>This "prohibited" category includes AI systems designed for social scoring by governments, biometric identification systems used for real-time surveillance in public spaces (with limited exceptions for law enforcement), emotion recognition in workplaces or educational settings, and systems that exploit vulnerabilities of specific groups or use subliminal manipulation techniques.</p><p>These are far from theoretical concerns: I've observed numerous real-world examples of ethically dubious and outright wrong AI solutions in commercial test and development settings. In recent months, I encountered an early-stage pilot project of a clearly "well-intentioned" AI solution that would land squarely in the EU's "unacceptable risk" category. The pilot involved a solution that performed live facial recognition via security cameras and used AI to analyse the customers' facial expressions as they entered the store to provide helpful alerts to staff about who might "need additional assistance." </p><p>While the developers framed this as a customer service enhancement, examples like this represent precisely the kind of technology the EU AI Act aims to restrict due to their potential for privacy violations and manipulation. Needless to say, this project was shut down by the AI oversight board, but the case highlights how easily ethical and legal boundaries are crossed and why robust AI governance structures are needed within the organization.
The regulations exist precisely to prevent such misreadings of the rules, especially by those who seek out the ethical &#8220;grey areas&#8221; where they have identified profitable markets.</p><p>For cybersecurity professionals, this establishes important boundaries. While offensive security tools often involve techniques that could potentially cross into prohibited territory - particularly those leveraging behavioural analysis, vulnerability exploitation, or manipulative social engineering - they must now be designed with these restrictions in mind. The prohibition on exploiting vulnerabilities of specific groups is particularly relevant for red teams and penetration testers who must ensure their AI-enhanced tools don't disproportionately target or exploit protected characteristics or vulnerable populations, even when simulating sophisticated threat actors who might do exactly that. This creates a unique challenge: cybersecurity tools must be sophisticated enough to counter AI-driven threats yet constrained by legal and ethical boundaries. Organizations developing next-generation security platforms must now incorporate these restrictions into their design philosophy from the ground up, rather than as compliance afterthoughts.</p><p>In financial services, the EU AI Act creates significant ambiguity around AI-driven credit decisioning. Where exactly is the boundary drawn between legitimate credit risk assessment - a core banking function - and prohibited "social scoring"? Consider an AI system that dynamically adjusts creditworthiness based on spending patterns, payment timing, and transaction locations. While these factors have long been part of traditional credit models, when an AI system makes real-time decisions incorporating behavioural data from multiple sources, it begins to resemble the kind of comprehensive behavioural scoring that the Act restricts.
Financial institutions must now carefully examine whether their advanced credit AI systems remain within the "legitimate business purpose" exception or potentially cross into prohibited territory&#8212;particularly when these systems incorporate non-traditional data points or create feedback loops that might disproportionately impact certain customer segments.</p><h4>Fragmentation in the United States: A Sectoral Patchwork</h4><p>Unlike the EU's comprehensive AI governance framework, the United States has pursued a fragmented, sector-specific regulatory strategy that lacks federal cohesion. This approach has created an increasingly complex compliance environment, particularly for organizations developing and deploying AI-powered cybersecurity solutions.</p><p>The regulatory landscape has become even more uncertain following the January 2025 rescission of the Biden administration&#8217;s Executive Order 14110 ("Safe, Secure, and Trustworthy Artificial Intelligence"). This EO had laid out broad AI governance principles at the federal level but did not align with the priorities of the Trump administration, which prioritizes deregulation and market-led AI development. This reversal reflects a fundamental philosophical shift - a belief that regulation inherently stifles innovation and undermines U.S. technological leadership. While the new executive order mandates an Artificial Intelligence action plan to be developed in 2025, industry scepticism about meaningful regulatory guidance from this administration remains high. Some prominent AI researchers, including those from major tech companies, have expressed concern that regulatory uncertainty, rather than deregulation itself, may ultimately hinder U.S.
AI advancement<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>.</p><p>In the absence of a coherent federal strategy, states have begun establishing their own AI governance frameworks, creating a pattern reminiscent of how data privacy regulation evolved in the U.S. Just as CCPA and the NY SHIELD Act created de facto national privacy standards, emerging state AI regulations are establishing a patchwork of compliance requirements.</p><p>This implies that states like California and New York are likely to lead the way toward a national AI roadmap. The legislation currently proposed and enacted in both states addresses AI safety assessments, public sector AI use, deepfake protections, and algorithmic discrimination.</p><p>Another point to note is that these state-level initiatives vary significantly in scope and approach - from California's attempt at comprehensive AI model oversight to New York's targeted focus on government AI applications - though some form of regulatory cohesion between states may eventually emerge.</p><p>Finally, it&#8217;s worth noting that as more states develop their own frameworks, the compliance landscape may grow increasingly complex for companies with operations across multiple states - at least until federal regulations or national frameworks are developed to guide AI implementation and oversight and bring some alignment between state legislation<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>.</p><p>Even within the federal regulatory sphere, the lack of overarching AI guidance has resulted in inconsistent standards across industries. This sectoral fragmentation creates particular challenges for cybersecurity teams, as AI systems deemed compliant in one industry context may require substantial modification for deployment in another.
Organizations operating across multiple sectors face the added burden of reconciling these inconsistent requirements.</p><p>A solution could be accelerated through organizations like NIST, which sets standards for both government entities and the private sector, but given the current administration&#8217;s mindset, this has yet to be firmed up.</p><p>The fragmented regulatory environment creates several operational challenges for security professionals deploying AI-powered tools, echoing concerns from AI industry experts that some form of federal guidance, if not regulation, is needed to reduce:</p><ul><li><p><strong>Strategic Uncertainty</strong>: Security architects must design systems flexible enough to adapt to rapidly evolving and unpredictable regulatory requirements</p></li><li><p><strong>Compliance Overhead</strong>: Organizations must monitor and interpret regulations across multiple states and sectors, diverting resources from security improvements</p></li><li><p><strong>Innovation Constraints</strong>: More stringent state regulations may create barriers to adopting advanced AI security capabilities in certain jurisdictions</p></li><li><p><strong>Competitive Implications</strong>: Organizations operating primarily in less regulated states may gain security advantages through more agile AI deployment</p></li></ul><p>While the current trajectory points toward continued regulatory fragmentation, market pressure for consistency may eventually drive greater alignment&#8212;potentially even with international standards. In the meantime, organizations must develop adaptive compliance strategies that can navigate this complex landscape while maintaining effective security postures.</p><p>The fundamental question remains whether the U.S. approach of limited federal oversight will ultimately help or hinder AI adoption in cybersecurity, and the evolution of AI globally.
While it may accelerate innovation in some contexts, it also creates risks of unregulated AI deployment - including the very issues of model bias, hallucination, and unethical AI applications that more comprehensive frameworks like the EU AI Act explicitly address. </p><h4>China: AI as a Strategic Asset</h4><p>China's regulatory framework for AI is deeply intertwined with its broader national security and geopolitical strategy. The country's &#8220;Generative AI Measures&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>, introduced in 2023, form a key part of its broader AI governance, reinforcing state priorities while fostering AI development. Unlike Western models, where AI governance is often framed in terms of ethics and risk mitigation, China's regulations emphasize state control, national security, and social stability.</p><p>The Chinese approach reflects a fundamentally different view of technology governance - one where AI is positioned as both an economic driver and a tool for social management. This perspective is evident in the 2021 "Ethical Norms for the New Generation Artificial Intelligence" (Ethical Norms)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> and subsequent regulations, which balance technological advancement with political alignment. For cybersecurity professionals, this creates a distinct regulatory environment that differs markedly from Western frameworks.
There are similarities with the EU and US in areas such as data privacy: the Chinese &#8220;Personal Information Protection Law&#8221; (PIPL) is a national data privacy law targeted at protecting personal information and addressing personal data leakage, with clear implications for automated decision-making technologies.</p><p>Some of the key regulatory aspects include mandatory algorithm registration, where companies developing AI models must register their algorithms with the Cyberspace Administration of China (CAC). This ensures that AI technologies align with state objectives and remain accessible for government oversight, which presents its own challenge to some Western countries. Content security and model accountability are also critical components, as AI-driven cybersecurity tools must undergo rigorous content security assessments. Companies providing AI-based services are held accountable for the content their models generate and must take corrective actions if outputs violate state regulations. Additionally, AI security tools operating in China must comply with strict data localization laws, ensuring that sensitive data remains within national borders and is accessible for state review or investigations. Finally, China employs a tiered AI deployment and state influence approach, where AI applications in cybersecurity and other critical sectors receive state backing but are subject to heightened regulatory scrutiny, ensuring that they serve national security priorities.</p><p>The &#8220;Generative AI Measures&#8221; reflect China's dual mandate of promoting AI development while maintaining centralized oversight. These regulations require AI developers to not only prevent the generation of content that violates political, social, or moral guidelines but also to maintain transparency to ensure government oversight and to conduct security assessments for AI models that have public opinion attributes or social mobilization capabilities. 
This specifically impacts threat intelligence platforms and security monitoring tools that might analyse social media or public communications.</p><p>China's regulatory philosophy extends beyond individual laws to encompass its broader "New Generation Artificial Intelligence Development Plan," which aims to make China the global leader in AI by 2030. DeepSeek emerging to rival US GPT LLMs is an example of this strategy. This strategic initiative aligns AI development with national priorities through coordinated investment, talent development, and regulatory frameworks. For cybersecurity applications, this translates to preferential treatment for tools that enhance critical infrastructure protection, support state security objectives, and integrate with China's national cybersecurity strategy.</p><p>For multinational cybersecurity companies, the regulatory landscape creates significant operational challenges. Foreign firms must establish separate Chinese entities with localized data storage, undergo security reviews for cross-border data transfers, and potentially modify core algorithms to comply with registration requirements. These hurdles have led many international cybersecurity vendors to partner with Chinese firms rather than operate independently within the market.</p><p>While these regulations create challenges for foreign firms, they provide Chinese companies with a structured and predictable regulatory environment. Domestic AI leaders such as Baidu, Alibaba, and Tencent benefit from state guidance that shapes investment and research priorities, reinforcing China's AI leadership. 
In the cybersecurity domain specifically, companies like Qi An Xin, 360 Security, and Sangfor have flourished by developing AI-powered security solutions that align with both market demands and regulatory expectations.</p><h4>Singapore: Pragmatic AI Governance within a Regulatory Framework</h4><p>Singapore has established itself as a leader in AI governance through a sophisticated balance of innovation support and regulatory oversight. Unlike other global approaches, Singapore's model represents a distinctive third way that merits closer examination, particularly for its implications in the cybersecurity sector.</p><p>Singapore employs a pragmatic, business-friendly approach to AI regulation that differs significantly from frameworks like the EU AI Act. Rather than implementing standalone AI legislation, Singapore embeds AI governance within existing legal frameworks, creating a comprehensive but flexible regulatory environment. This distinctive approach allows the city-state to maintain regulatory oversight while fostering AI innovation.</p><p>At the core of Singapore's regulatory architecture lies the AI Governance Framework developed by the Infocomm Media Development Authority (IMDA)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a>. This framework establishes foundational principles for transparency, accountability, and risk management without imposing rigid compliance requirements. Instead, AI governance is effectively implemented through existing legislation like the Personal Data Protection Act, which regulates AI systems processing personal data, and the Cybersecurity Act of 2018, which mandates security standards for critical infrastructure including AI systems.</p><p>This regulatory foundation is strengthened by sector-specific guidelines that address unique concerns in high-risk domains. 
In financial services, the Monetary Authority of Singapore has introduced the FEAT and Veritas Frameworks, establishing clear standards for AI fairness and transparency in financial decision-making<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a>. Similarly, the Ministry of Health's AI in Healthcare Guidelines ensure the safe deployment of medical AI applications, balancing innovation with patient safety<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a>.</p><p>What truly distinguishes Singapore's approach, however, is its emphasis on industry-led initiatives that complement formal regulation. Programs like AI Verify, a national testing and certification program, establish de facto standards for AI security applications without requiring legislative mandates. The AI Verify Foundation further enhances this ecosystem by developing open-source testing frameworks and ethical guidelines through collaborative industry participation. These initiatives are aligned with Singapore's National AI Strategy 2.0, which articulates a long-term vision for responsible AI integration across public and private sectors.</p><p>For cybersecurity applications specifically, Singapore's regulatory approach creates several distinct advantages. First, by leveraging existing regulatory frameworks rather than introducing entirely new compliance regimes, cybersecurity companies benefit from operational clarity and reduced regulatory uncertainty. Second, the absence of prescriptive AI-specific legislation allows for more adaptive cybersecurity solutions that can evolve with emerging threats. Third, certification programs like AI Verify create market differentiation opportunities for secure, ethical AI tools, helping companies build trust with customers. 
Finally, this balanced approach positions Singapore as an attractive hub for cybersecurity AI development, enhancing its international competitiveness.</p><p>Singapore's model represents a thoughtful middle path between the EU's comprehensive regulation and the US's fragmented approach. By focusing on sectoral compliance through existing legal structures, promoting voluntary best practices backed by certification programs, and maintaining high standards in critical domains while preserving flexibility elsewhere, Singapore has created an environment where AI-powered cybersecurity can thrive while maintaining public trust and ethical standards.</p><p>This nuanced approach demonstrates that effective AI governance need not come at the expense of innovation, although one should not be lulled into a false sense of believing there is a &#8220;laissez-faire&#8221; AI landscape in Singapore, because firm regulatory frameworks surround the seemingly voluntary AI Framework. The reflection I add here is that as other jurisdictions continue developing their AI regulatory frameworks, Singapore's model offers valuable lessons in achieving balance between oversight and growth, particularly for sensitive applications like cybersecurity where both innovation and trust are essential.</p><h4>Japan's Human-Centric Approach to AI Governance</h4><p>Japan has emerged as a distinctive voice in the global AI regulatory landscape, advocating for what it terms a "human-centric" approach to artificial intelligence. Unlike the comprehensive legislative frameworks emerging in the EU or the sector-specific regulations in the United States, Japan has deliberately chosen a path that emphasizes ethical principles and industry self-regulation while avoiding overly prescriptive legal mandates - i.e. a comprehensive and holistic approach. 
This approach reflects a consistent pattern in Japan's regulatory philosophy, echoing its response to financial governance challenges following the Sarbanes-Oxley Act in the United States. Where the U.S. implemented detailed procedural financial reporting requirements through SOX, Japan developed Naibutosei (&#20869;&#37096;&#32113;&#21046;) &#8211; an internal control regulatory framework that addressed the broader operational processes that ultimately contribute to and support financial reporting, rather than simply prescribing specific financial control processes. This same holistic, principles-based approach now distinguishes Japan's AI governance framework.</p><p>At the core of Japan's approach are the "Social Principles of Human-Centric AI," developed by the Cabinet Office's Council for Social Principles of Human-Centric AI. These principles establish an ethical foundation centred on human dignity, diversity and inclusion, and sustainability. Rather than creating binding legislation, Japan has focused on developing these principles as a framework that can guide both public and private sector AI development without stifling innovation through rigid compliance requirements.</p><p>This preference for flexible governance is further reflected in Japan's AI Strategy 2022<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a>, which prioritizes "AI for humanity" while simultaneously positioning Japan as a global leader in AI innovation. The strategy emphasizes three core pillars: human resource development, industrial competitiveness, and a sustainable society enabled by AI. Notably, the strategy contains minimal references to restrictive regulations, instead focusing on enablement and responsible development.</p><p>Japan's regulatory approach relies heavily on industry self-regulation and co-regulation models. 
The Japan Business Federation (Keidanren) has developed its own AI ethics guidelines that align with the government's social principles but provide industry-specific implementations<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a>. This collaborative approach between government and industry creates a dynamic regulatory environment where best practices can evolve alongside technological advancements without awaiting legislative changes.</p><p>Another distinctive feature of Japan's framework is its holistic, cross-sectoral approach to AI governance. Unlike the United States, where AI regulation tends to fragment along existing agency jurisdictions, Japan applies consistent principles across different industries while allowing for contextual adaptation. This approach reduces regulatory complexity for companies developing AI solutions that span multiple sectors - a particular advantage for cybersecurity applications that often need to operate across domain boundaries.</p><p>For cybersecurity specifically, Japan's model creates several beneficial conditions. AI-powered security tools operate under broader information security laws rather than AI-specific constraints, allowing for greater adaptability in responding to emerging threats. The Ministry of Economy, Trade and Industry (METI) has issued guidelines for AI security that emphasize risk management and transparency without mandating specific technical approaches. Meanwhile, critical infrastructure protection incorporates AI security considerations through Japan's cybersecurity strategy rather than through separate AI legislation.</p><p>The implications for international security cooperation are significant as well. Japan has actively engaged in international AI governance forums, including the Global Partnership on AI and OECD AI initiatives, advocating for interoperable standards that facilitate cross-border security collaboration. 
This approach aligns with Japan's broader diplomatic strategy of promoting "Data Free Flow with Trust" (DFFT), which seeks to balance data protection with the free flow of information necessary for effective global cybersecurity.</p><p>While Japan's approach shares some similarities with Singapore's pragmatic model, it places even greater emphasis on ethical principles and less on formal regulatory structures. This distinction reflects Japan's cultural preference for consensus-building and social harmony over strict legal enforcement. However, both approaches contrast sharply with China's highly interventionist AI regulations, which impose significant state oversight and control over AI applications, particularly those related to national security.</p><p>As Japan continues to refine its AI governance framework, it maintains a careful balance between encouraging innovation in critical areas like cybersecurity while ensuring ethical considerations remain central to AI development. This human-centric approach positions Japan as an important counterpoint in global AI governance discussions, demonstrating that effective oversight need not rely primarily on prescriptive regulation. For cybersecurity applications especially, this flexible, principles-based approach may prove particularly valuable in addressing rapidly evolving threats without regulatory constraints that could impede responsive innovation.</p><h3>South Korea's Balanced Approach to AI Governance: Innovation with Oversight</h3><p>South Korea has established a distinctive approach to AI governance through its recently enacted 'Act on the Development of AI and Establishment of Trust' (AI Basic Act), passed in December 2024<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a>. 
South Korea's approach reveals a nuanced framework that balances regulatory oversight with strong support for innovation - a model that differs significantly from both the EU's restrictive stance and Singapore's light(er)-touch approach.</p><p>The AI Basic Act consolidates 19 different regulatory proposals into a cohesive framework that prioritizes national competitiveness alongside responsible AI development. Unlike the EU AI Act, which focuses primarily on risk mitigation through extensive pre-market obligations, South Korea's law integrates regulatory governance with industrial growth strategy and emphasizes post-market oversight. This represents a fundamental difference in philosophy: where the EU sees regulation primarily as a means to control risks, South Korea views it as one component of a broader strategy to promote AI advancement.</p><p>A defining characteristic of South Korea's approach is its scope and application. The AI Basic Act applies to developers and entities offering AI products and services, but unlike the EU's framework, it does not extend to users of AI systems. This distinction significantly reduces the regulatory burden across the AI ecosystem and reflects a targeted approach to oversight. The law also avoids the controversial aspects of regulating general-purpose AI systems that have caused considerable debate in other jurisdictions.</p><p>South Korea's framework introduces the concept of "high-impact AI" - systems that may affect human life, physical safety, and fundamental rights in specific sectors such as energy, healthcare, transportation, and education. This is not unlike the human considerations that go into Japan&#8217;s model. In contrast to the EU's mandatory conformity assessments for high-risk systems, the Korean law states that providers of high-impact AI should "endeavour to obtain inspection and certification in advance." 
This creates a more flexible framework that encourages rather than mandates specific certification processes.</p><p>The institutional architecture supporting the AI Basic Act further demonstrates South Korea's balanced approach. The law establishes several new bodies which are tasked not only with regulatory oversight but also with developing R&amp;D strategies, investment frameworks, and international cooperation initiatives. The National AI Committee, which is one of the oversight bodies, explicitly includes competitiveness enhancement among its core responsibilities, highlighting how economic considerations are integrated directly into the governance structure.</p><p>For cybersecurity applications specifically, this regulatory environment creates opportunities rather than constraints. AI-powered security tools are subject to the general provisions of the AI Basic Act, but the emphasis on post-market oversight allows for greater flexibility in development and deployment compared to more prescriptive regulatory regimes. The law encourages transparency and risk management without imposing rigid technical requirements that might impede innovation in this rapidly evolving domain.</p><p>Perhaps most tellingly, the AI Basic Act mandates a regular review of its provisions and continuous benchmarking against international standards. This built-in adaptability reflects South Korea's recognition of the rapidly evolving nature of AI technology and governance practices. Unlike more rigid regulatory frameworks, the Korean approach allows for ongoing refinement as the technology matures and its implications become clearer.</p><p>South Korea has created a governance model that addresses legitimate concerns about AI risks while maintaining the flexibility needed for continued technological advancement. 
For the global AI governance landscape, South Korea's approach offers an important middle path&#8212;one that recognizes the need for oversight without presuming that extensive pre-market regulation is the only way to ensure responsible AI development.</p><h2>A Global Tapestry of Approaches</h2><p>The journey through AI security regulations across Asia presents unique challenges for global cybersecurity teams. Unlike the EU's uniform regulations or the US's fragmented state-level approach, Asia's regulatory landscape requires tailored compliance strategies for each jurisdiction.</p><p>What are some of the key take-aways from this?</p><ul><li><p><strong>Data Sovereignty Conflicts</strong>: AI cybersecurity solutions that comply with Singaporean or Japanese standards may require fundamental redesigns for deployment in mainland China due to strict data localization laws and algorithm registration requirements.</p></li><li><p><strong>Ambiguous Chinese Regulations</strong>: Unlike the EU's clearly defined prohibited AI categories, China's regulatory framework includes deliberately vague provisions that allow authorities flexibility in enforcement, creating uncertainty for foreign firms.</p></li><li><p><strong>Diverging Security Architectures</strong>: Security AI systems may need separate regional models to comply with local regulations, increasing operational complexity and technical debt while potentially fragmenting threat intelligence capabilities.</p></li><li><p><strong>Compliance vs. Security Trade-Offs</strong>: Strict compliance in South Korea or China may limit the deployment of certain AI-powered cybersecurity tools, potentially leaving gaps in regional security postures compared to operations in less restrictive markets.</p></li></ul><p>The global AI regulatory landscape presents a complex mosaic of approaches reflecting diverse national priorities and governance philosophies. 
From the EU's comprehensive risk-based framework to the US's fragmented sectoral model, from China's state-directed control to the varied strategies across Singapore, Japan, and South Korea, organizations face a challenging compliance environment that varies dramatically by region.</p><p>These regulatory differences aren't merely administrative hurdles - they fundamentally shape how AI-powered cybersecurity tools can be developed, deployed, and operated across borders. The tension between innovation and control, between security imperatives and compliance requirements, creates profound strategic considerations for organizations operating globally. Practically, for multinational companies, this means careful consideration when developing, using, or hosting AI solutions in different countries or regions. Over time, these frameworks will likely converge, but for the immediate future, careful technology strategies must be devised that factor in these regulatory differences.</p><p>In the next article, we'll examine some of the practical implications of these diverse regulatory frameworks for cybersecurity teams. How do these varying approaches affect security architectures and risk management? What trade-offs must security leaders make when balancing compliance with effective threat detection? And perhaps most importantly, how can organizations transform compliance challenges into strategic advantages?</p><p>I welcome your thoughts on how these emerging regulatory frameworks are affecting your organization's approach to AI in cybersecurity. 
What challenges are you facing, and which regulatory model seems most conducive to effective security operations?</p><p></p><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>EU AI Act, European Commission - https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>https://artificialintelligenceact.eu/high-level-summary/</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>https://www.businessinsider.com/yann-lecun-meta-trump-academia-witch-hunt-musk-ai-2025</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>https://www.multistate.ai/updates/vol-52</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p><a href="https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm">https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Translated document by Georgetown University CSET https://cset.georgetown.edu/wp-content/uploads/t0400_AI_ethical_norms_EN.pdf</p></div></div><div 
class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>https://www.mas.gov.sg/~/media/MAS/News%20and%20Publications/Monographs%20and%20Information%20Papers/FEAT%20Principles%20Final.pdf</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>https://isomer-user-content.by.gov.sg/3/9c0db09d-104c-48af-87c9-17e01695c67c/1-0-artificial-in-healthcare-guidelines-(aihgle)_publishedoct21.pdf</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>https://www8.cao.go.jp/cstp/ai/aistrategy2022_honbun.pdf &#8211; in English: https://www8.cao.go.jp/cstp/ai/aistratagy2022en.pdf</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>https://www.keidanren.or.jp/en/policy/2023/041.html</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>https://ecipe.org/blog/koreas-new-ai-law-not-brussels-progeny</p></div></div>]]></content:encoded></item><item><title><![CDATA[The 
New Battlefield: AI in Cyber Attacks and Cyber Defence]]></title><description><![CDATA[In less than a year, the world has witnessed a stunning acceleration in the adoption and sophistication of artificial intelligence (AI).]]></description><link>https://www.tekk-talk.com/p/the-new-battlefield-ai-in-cyber-attacks</link><guid isPermaLink="false">https://www.tekk-talk.com/p/the-new-battlefield-ai-in-cyber-attacks</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Mon, 10 Mar 2025 15:05:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!NogS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d13071b-422f-4b8e-8f02-011c374fa08d_1792x1024.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NogS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d13071b-422f-4b8e-8f02-011c374fa08d_1792x1024.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NogS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d13071b-422f-4b8e-8f02-011c374fa08d_1792x1024.webp 424w, https://substackcdn.com/image/fetch/$s_!NogS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d13071b-422f-4b8e-8f02-011c374fa08d_1792x1024.webp 848w, https://substackcdn.com/image/fetch/$s_!NogS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d13071b-422f-4b8e-8f02-011c374fa08d_1792x1024.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!NogS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d13071b-422f-4b8e-8f02-011c374fa08d_1792x1024.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NogS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d13071b-422f-4b8e-8f02-011c374fa08d_1792x1024.webp" width="1456" height="832" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6d13071b-422f-4b8e-8f02-011c374fa08d_1792x1024.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:537810,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://tekk.substack.com/i/158773122?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d13071b-422f-4b8e-8f02-011c374fa08d_1792x1024.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NogS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d13071b-422f-4b8e-8f02-011c374fa08d_1792x1024.webp 424w, https://substackcdn.com/image/fetch/$s_!NogS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d13071b-422f-4b8e-8f02-011c374fa08d_1792x1024.webp 848w, 
https://substackcdn.com/image/fetch/$s_!NogS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d13071b-422f-4b8e-8f02-011c374fa08d_1792x1024.webp 1272w, https://substackcdn.com/image/fetch/$s_!NogS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d13071b-422f-4b8e-8f02-011c374fa08d_1792x1024.webp 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>In less than a year, the world has witnessed a stunning acceleration in the adoption and sophistication of artificial 
intelligence (AI). From a cybersecurity perspective, this transformation is both awe-inspiring and concerning. In this short series, I&#8217;ll explore some of the key aspects of this shift and its impact on cybersecurity. What was cutting-edge yesterday is now routine, and new breakthroughs are emerging faster than most organizations can adapt. AI isn&#8217;t just another technological advancement - it&#8217;s a fundamental turning point, a fork in the road that will shape cybersecurity operations in ways more profound than many of its predecessors. I am of course talking about generative AI, large language models and advanced machine learning.</p><p>Since the advent of publicly accessible large language models (LLM) and generative AI (GenAI) (arguably the watershed moment occurred in Nov 2022 with OpenAI&#8217;s introduction of ChatGPT, built on GPT-3.5, to the general public) I have been closely monitoring what has been taking place. I recall the first time I heard of &#8220;large language models&#8221; was in early 2020, shortly after OpenAI had opened up their GPT-2 1.5B parameter model to the public in November 2019.</p><p>OpenAI's approach to releasing their GPT models demonstrated remarkable foresight about this technology's potential for misuse. In February 2019, after careful deliberation, OpenAI took a cautious approach with GPT-2 by initially releasing only their smallest 124M parameter version to the public, gradually expanding access to more advanced versions throughout 2019. This strategic release marked a significant shift, making powerful AI technology accessible beyond just research institutions and tech enthusiasts. Despite these precautions, most of OpenAI's feared misuse scenarios have materialized today. The guardrails and limitations now present in publicly available models are a case of closing the barn door after the horse has bolted.</p><p>For cybersecurity professionals, this history offers both perspective and urgency. 
We can appreciate the dramatic evolution in capabilities&#8212;from GPT-2 (comparable to a high school student's abilities) to today's GPT-4o (functioning more like a university professor). Of course, this comparison is relative&#8212;two years from now, we may well look back at GPT-4o as we do an old "Model T Ford" today. This extraordinary pace of advancement underscores our critical mandate: we must anticipate current risks while proactively reinforcing our defences to adapt to the rapidly evolving landscape of AI-driven security threats, capabilities, and tools.</p><p>Around six months ago, I first drafted an analysis of AI's role in cybersecurity, and when I realised that this whitepaper had never been published, I decided to take a step back and reassess where we are today. In just six months, the AI landscape has transformed our profession at a breathtaking pace. What seemed cutting-edge last year has become table stakes, and new frontiers have emerged that demand our attention. Almost like Moore&#8217;s Law for the evolution of computer chips, IT professionals need to adopt a highly dynamic approach to security; akin to being in a marathon where running fast simply means keeping up with the others. Standing still and waiting for clarity should not be on any company&#8217;s strategic agenda.</p><p>This first instalment of my multi-part article series examines how AI has reshaped cyber threats and defences, exploring the forces driving these swift changes and laying the groundwork for deeper dives into the evolving regulatory, technological, and strategic landscape. 
Given the extraordinary pace of advancement we're witnessing, I fully expect to be rewriting this entire analysis by 2026 - a testament to both the challenge and excitement of operating at the frontier of AI and cybersecurity.</p><p>After closely monitoring and analysing the developments in AI over the past year, I've identified three critical observations that characterize the current AI cybersecurity landscape. These observations reflect not just incremental changes but fundamental shifts that are redefining our approach to digital security.</p><h2><strong>Observation 1: Acceleration of AI Capabilities - A Double-Edged Sword</strong></h2><p>Perhaps the most striking shift in recent months is the exponential growth in AI capabilities, fuelled by ever-more-powerful large language models (LLMs) and specialized machine learning frameworks. Security-focused variants of these models are popping up on both sides of the fence:</p><ul><li><p><strong>Threat Actor Tooling</strong><br>Criminal groups now have ready access to off-the-shelf and bespoke AI kits that automate reconnaissance, vulnerability scanning, exploit generation, reverse engineering, and lateral movement across a target environment. What used to require specialized expertise can now be accomplished by operators with minimal technical background. The learning curve for sophisticated attacks has plummeted, and the pace of new threat activity has soared.</p></li><li><p><strong>Democratization of defence</strong><br>On the flip side, security teams can harness commercial AI platforms to bolster their defences. From real-time anomaly detection and automated patch prioritization to advanced forensic analysis, AI offers new ways to stay ahead of adversaries. 
The ability to process millions of signals within seconds - combining diverse data streams like threat intelligence feeds, network logs, and user behavioural profiles - can dramatically shorten the window of opportunity for attackers.</p></li></ul><p>This <strong>democratization of AI</strong> is simultaneously an equalizer and an amplifier: smaller organizations gain access to enterprise-grade protections, while lesser-skilled attackers are supercharged by cutting-edge tools. The net result is an arms race that shows no signs of slowing.</p><h2><strong>Observation 2: Transformation of Threat Profiles - New Classes of AI-Driven Attacks</strong></h2><p>While automated attacks and AI-powered phishing kits were already on the rise, new developments enabled by this technology are fundamentally changing the threat landscape, capturing the attention of security experts. Here are three that keep me awake at night:</p><ul><li><p><strong>Autonomous Attack Platforms:</strong> We're witnessing the emergence of fully autonomous offensive systems that identify vulnerabilities, pivot around network defences, and tailor exploits in near real-time. These platforms leverage machine learning to analyse target environments, prioritize high-value assets, and orchestrate multi-stage attacks without human intervention. Traditional "patch and protect" paradigms become increasingly inadequate when facing adversaries that adapt on the fly at speeds no human defender can match.</p></li><li><p><strong>Hyper-realistic Social Engineering:</strong> Through high-quality voice cloning and convincing deepfakes, AI has made distinguishing real from synthetic nearly impossible. Modern attacks analyse targets' digital footprints to craft personalized deception strategies tailored to an individual's role, communication style, and context. 
These attacks can mimic writing styles and conversational patterns of trusted contacts, significantly boosting success rates and adapting in real-time to overcome suspicion.</p></li><li><p><strong>Evasive Malware:</strong> AI-enhanced malware represents a significant evolution in the threat landscape, using machine learning to dynamically adapt its behaviour, signatures, and execution patterns. These threats modify themselves to evade detection, leverage legitimate system tools, and employ polymorphic capabilities that render traditional signature-based defences obsolete. The asymmetric advantage for attackers requires defenders to implement more sophisticated behavioural analysis, AI-powered security systems, and multi-layered defence strategies.</p></li></ul><p>For many security teams, these threats push the envelope of what was previously considered possible. The focus is no longer just on patching systems but on staying ahead of an adversary that can continuously morph and evolve at high speed - currently faster than our &#8220;default&#8221; IT operations are able to patch and mitigate risk. In this rapidly evolving threat landscape, strong security fundamentals become more critical than ever; rigorous access controls, comprehensive asset inventory, regular security awareness training, and diligent patch management form the essential foundation.</p><p>Moreover, traditional defences like signature-based antivirus solutions are increasingly inadequate on their own. Organizations must pivot toward advanced endpoint detection and response (EDR or XDR) platforms with behavioural analysis capabilities, AI-powered security operations centres, and zero-trust architectures that assume breach and verify continuously. The most effective defence strategies will combine human expertise with advanced technological countermeasures to detect, respond to, and mitigate these increasingly sophisticated AI-driven threats.</p><h2><strong>Observation 3: 
The Evolution of Defensive Capabilities: AI-Powered Countermeasures</strong></h2><p>Despite the grim picture painted above, defenders have not been idle. AI-based solutions are rapidly changing the way organizations approach security operations, incident response, and long-term risk management. At the bleeding edge, three areas stand out:</p><ul><li><p><strong>Predictive Threat Intelligence</strong><br>Modern <strong>AI-driven threat intelligence platforms</strong> scour the clear, deep, and dark web, identifying malicious chatter, new exploit frameworks, and emerging attacker techniques. By analysing threat signals in real time and correlating them with known vulnerabilities, these systems can forecast impending attacks days or weeks before they become mainstream - a critical edge in proactive defence.</p></li><li><p><strong>Self-Healing Systems</strong><br>The concept of <strong>adaptive, self-healing architectures</strong> has evolved from theory to practice. Rather than waiting for human intervention, these AI-augmented environments automatically isolate compromised endpoints, spin up temporary decoys to divert adversaries, and even initiate patching or configuration changes to harden vulnerabilities. All of this happens while business services remain operational, minimizing disruption.</p></li><li><p><strong>AI-Human Teaming</strong><br>Beyond pure automation, the most successful defensive strategies hinge on <strong>collaboration between AI and human analysts (human-in-the-loop)</strong>. Humans provide the creativity, contextual understanding, and ethical oversight, while AI excels at pattern recognition, large-scale data crunching, and lightning-fast responses. 
In many Security Operations Centres (SOCs), human experts now handle strategic judgments and incident prioritization after AI has filtered and categorized the tidal wave of alerts.</p></li></ul><p>The evolution of defensive capabilities shows promise, but this is just the beginning of a profound transformation in the cybersecurity landscape. As AI continues to reshape both attack and defence strategies, security professionals must develop a comprehensive understanding of these changes across multiple domains.</p><h2><strong>Preview of Series Topics</strong></h2><p>This article sets the stage for a deeper dive into how AI is fundamentally altering both the <strong>technical mechanics</strong> and the <strong>strategic decision-making</strong> behind cybersecurity. In subsequent instalments of this series, we will explore:</p><ol><li><p><strong>Regulatory &amp; Compliance Realities</strong><br>Navigating the emerging legal frameworks for AI&#8212;such as the EU AI Act&#8212;and how they impact both defensive and offensive security measures.</p></li><li><p><strong>Advanced Tactics in the AI Arms Race</strong><br>An up-to-date look at cutting-edge attack techniques and how defenders can leverage adversarial machine learning, AI Red Teams, and more.</p></li><li><p><strong>AI-Driven Identity &amp; Zero Trust</strong><br>The role of AI in continuous authentication, biometric security, and dynamic micro-segmentation.</p></li><li><p><strong>Futureproofing for the Next Wave</strong><br>From quantum computing to AI supply chain security, we&#8217;ll examine looming trends that demand proactive strategizing.</p></li></ol><p>Don&#8217;t treat this list as definitive or exhaustive; I have come to realise that as each article is written, new topics emerge that need to be addressed.</p><p>By staying current on the <strong>rapid AI advancements</strong> and understanding the <strong>shifting threat profiles</strong>, cybersecurity leaders can make informed choices about defence 
investments, staffing, and strategic policy. As the battlefield continues to evolve, the organizations most likely to thrive are those that blend <strong>innovative AI capabilities</strong> with seasoned human expertise. We must never lose sight of the fact that, while AI can automate processes, it&#8217;s the human element that provides context, ethics, and the final critical judgment.</p><h2><strong>Conclusion and Invitation</strong></h2><p>The interplay of AI and cybersecurity is reshaping everything from threat hunting to compliance. Attackers and defenders alike are locked in a cycle of continuous innovation, and the stakes have never been higher. In this new era, success belongs to those who actively leverage AI as a force multiplier while recognizing its limitations and potential pitfalls. Failing to integrate AI-driven security today means falling behind tomorrow. Though it may sound like a cheap slogan, the time to act really is now.</p><p>In the next article, we&#8217;ll examine how emerging regulatory standards are creating new obligations - and sometimes new opportunities - for organizations integrating AI into their security operations and tooling. Until then, I&#8217;d love to hear from you: Which aspects of AI-driven threats and defences are you most concerned about? 
Share your perspectives in the comments, and let&#8217;s keep the conversation going.</p>]]></content:encoded></item><item><title><![CDATA[Third-Party Risk: The Cybersecurity Blind Spot You Can't Ignore]]></title><description><![CDATA[Bridging the Gap Between Procurement, IT, and Cybersecurity for Comprehensive Risk Management]]></description><link>https://www.tekk-talk.com/p/third-party-risk-is-a-cybersecurity-blind-spot</link><guid isPermaLink="false">https://www.tekk-talk.com/p/third-party-risk-is-a-cybersecurity-blind-spot</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Mon, 23 Sep 2024 22:51:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!eX3M!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40b59cae-5f5b-4064-b091-558d05622d6f_1792x1024.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!eX3M!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40b59cae-5f5b-4064-b091-558d05622d6f_1792x1024.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!eX3M!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40b59cae-5f5b-4064-b091-558d05622d6f_1792x1024.webp 424w, https://substackcdn.com/image/fetch/$s_!eX3M!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40b59cae-5f5b-4064-b091-558d05622d6f_1792x1024.webp 848w, 
https://substackcdn.com/image/fetch/$s_!eX3M!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40b59cae-5f5b-4064-b091-558d05622d6f_1792x1024.webp 1272w, https://substackcdn.com/image/fetch/$s_!eX3M!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40b59cae-5f5b-4064-b091-558d05622d6f_1792x1024.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!eX3M!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40b59cae-5f5b-4064-b091-558d05622d6f_1792x1024.webp" width="1456" height="832" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/40b59cae-5f5b-4064-b091-558d05622d6f_1792x1024.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:692196,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!eX3M!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40b59cae-5f5b-4064-b091-558d05622d6f_1792x1024.webp 424w, https://substackcdn.com/image/fetch/$s_!eX3M!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40b59cae-5f5b-4064-b091-558d05622d6f_1792x1024.webp 848w, 
https://substackcdn.com/image/fetch/$s_!eX3M!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40b59cae-5f5b-4064-b091-558d05622d6f_1792x1024.webp 1272w, https://substackcdn.com/image/fetch/$s_!eX3M!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40b59cae-5f5b-4064-b091-558d05622d6f_1792x1024.webp 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Cybersecurity is today one of the fastest-evolving sectors in technology, and those of us who have weathered the journey in 
Cyber over the past 10 years or more can attest that it has been more of a sprint than a steady-paced evolution.</p><p>Companies of all sizes face unprecedented challenges in safeguarding data and operations; none are spared the scrutiny of threat actors and their tooling. Organisations increasingly rely on third-party vendors, SaaS solutions, cloud services, and extended supply chains to drive efficiency and innovation, which means that the attack surface has expanded exponentially. This (r)evolution has left many companies vulnerable to threats that originate not within their own systems and infrastructure but through third-party and supply chain networks - vulnerabilities that easily cascade throughout interconnected systems, wreaking havoc like an avalanche.</p><p>The infamous breaches at Kaseya and SolarWinds stand as stark reminders of how a single weak point in the supply chain can paralyse organisations globally. 
And as the 2024 CrowdStrike-induced global system outage reminded us, we're not only exposed to risk from malicious actors; sometimes accidental misconfigurations and faulty updates from trusted security applications create the same supply chain risks.</p><div class="pullquote"><p>"Vulnerabilities in one part of the supply chain can quickly spread to other areas, amplifying the impact of any given breach."</p></div><p>These incidents have highlighted a growing gap in how enterprises manage cybersecurity risks, particularly when it comes to the involvement of third parties. The 'risk coverage gap' - <em>a term now gaining traction</em> - refers to the misalignment between an enterprise's internal security controls and the external risks posed by vendors and supply chain partners. Addressing this gap requires a rethinking of enterprise risk frameworks and a more integrated approach to cybersecurity.</p><h2><strong>The Risk Coverage Gap: An Unintended Consequence of Fragmented Risk Frameworks</strong></h2><p>At the heart of this issue lies the fragmented nature of traditional enterprise risk management frameworks. These frameworks are often designed to identify and mitigate a wide range of risks across various functions, from financial to operational to compliance risks. However, in practice, the control environments supporting these frameworks tend to be disjointed, with different departments operating under independent control frameworks. Each function - whether procurement, IT, or cybersecurity - focuses on its own set of priorities, inadvertently creating gaps in the organisation's overall risk coverage.</p><p>For example, while procurement teams may be focused on financial and contractual risks associated with vendors, they often overlook critical cybersecurity considerations. Similarly, IT operations might prioritise system performance and availability, leaving cybersecurity teams to focus on direct threats to the organisation. 
These silos create blind spots where risks are either inadequately managed or missed altogether. This is particularly concerning when it comes to third-party risks, where vulnerabilities in one vendor's system can have far-reaching consequences throughout the supply chain.</p><h3><strong>A Growing Reliance on Third Parties</strong></h3><p>The reliance on third parties to deliver key business services is growing exponentially. As organisations outsource more of their operations - whether for cost savings, scalability, or access to specialised expertise - they become increasingly dependent on the security practices of these external vendors. Yet, many enterprises still approach third-party risk as a sub-function of Third Party Management (TPM) in a legal context rather than in terms of the risk these third parties introduce to the organisation. This is often exacerbated by the fact that the TPM process then delegates responsibilities to IT, Operations and Procurement, all governed by independent policies and procedures that are not fully integrated into the broader risk management framework.</p><p>This fragmented approach leads to inconsistent risk assessments and incomplete visibility into how third-party risks affect the organisation's overall security posture. Without a unified framework to manage these risks, organisations often fail to account for the compounded effects of multiple vulnerabilities and security risks across their vendor networks, leaving them exposed to potential security breaches, data loss, or operational disruptions that could have been mitigated through a more cohesive risk management approach.</p><h3><strong>The Compounding Nature of Cyber Risk</strong></h3><p>Complicating matters further is the fact that cybersecurity risks are rarely isolated. 
The interconnected nature of modern enterprise ecosystems means that vulnerabilities in one part of the supply chain can quickly spread to other areas, amplifying the impact of any given breach. Threat actors increasingly exploit these weak links, targeting not only the organisation itself but also its suppliers, partners, or service providers to gain entry. Penetrating the organisation's perimeter through a trusted relationship in the supply chain allows them to bypass traditional security measures and compromise the entire network.</p><div class="pullquote"><p>&#8220;By breaking down silos and integrating third-party risk management into the broader enterprise risk framework, organisations can close the risk coverage gap and build a more resilient cybersecurity posture.&#8221;</p></div><p>The Kaseya and SolarWinds breaches are prime examples of this phenomenon. In both cases, attackers leveraged vulnerabilities in third-party systems to gain access to larger networks, causing widespread damage. These incidents underscore the importance of addressing the risk coverage gap, as the traditional 'perimeter defence' model is no longer sufficient to protect enterprises from increasingly sophisticated supply chain attacks.</p><h2><strong>Closing the Gap: Toward a More Integrated Risk Management Approach</strong></h2><p>To effectively manage third-party cybersecurity risks, organisations must rethink their approach to risk management. A more integrated, cohesive framework is needed; one that aligns the risk management efforts of procurement, IT operations, and cybersecurity to create a unified defence against external threats. This, in theory, sounds easy but is hard to achieve unless we change the way we perceive risk.</p><p>First and foremost, enterprises should ensure that cybersecurity considerations are embedded from the very beginning of the third-party relationship. 
This means not only evaluating vendors based on cost and service delivery but also assessing their cybersecurity posture and requiring regular security reviews throughout the relationship. Additionally, enterprises must adopt continuous monitoring strategies that provide real-time visibility into vendor activities, allowing for early detection of potential vulnerabilities or breaches. This necessarily also means that Cybersecurity must be empowered (legally, through contractual clauses) to intervene - up to and including red-lining engagements for termination - in cases where critical deficiencies or vulnerabilities aren't remediated by the third-party vendor within agreed SLAs.</p><p>Another key step is fostering greater collaboration across departments. Procurement, IT, and cybersecurity teams must work together to identify cross-functional risks and develop coordinated control and response plans. By breaking down silos and integrating third-party risk management into the broader enterprise risk framework, organisations can close the risk coverage gap and build a more resilient cybersecurity posture.</p><p>The risk coverage gap is a significant and growing threat to enterprise security. As organisations continue to rely on third parties for critical business functions, the need for a more integrated, comprehensive approach to managing third-party and supply chain risks has never been more urgent. 
By focusing on this gap, redefining the end-to-end TPM process, and fostering cross-departmental collaboration, organisations can better protect themselves against these types of supply chain risks as well as become better prepared to counter the evolving threat landscape and embed long-term operational resilience.</p><p>Here are some more details on the breaches I mentioned earlier:</p><ul><li><p>Kaseya: <a href="https://www.wired.com/story/revil-ransomware-supply-chain-technique/">REvil: Ransomware supply-chain technique [Wired]</a></p></li><li><p>SolarWinds: <a href="https://www.wired.com/story/the-untold-story-of-solarwinds-the-boldest-supply-chain-hack-ever/">The untold story of SolarWinds [Wired]</a></p></li><li><p>The Intricate Web of Third-Party Cybersecurity Risk (<a href="https://www.isacajournal-digital.org/isacajournal/2023_volume_6/MobilePagedArticle.action?articleId=1927161#articleId1927161">ISACA</a>)</p></li></ul><p>This is the first of a series of posts that focus on how we can rebuild and integrate more efficient third-party risk management across the organisation. Keep an eye out for additional posts.</p><p>In the meantime, you might also like this:</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;ba7325fb-b325-48d4-831f-1373d49453d5&quot;,&quot;caption&quot;:&quot;As the world continues to navigate the challenges of climate change, social inequality, and economic disruption, the role of Environmental, Social, and Governance (ESG) frameworks in guiding corporate responsibility has never been more crucial. 
With an increasing focus by the general public, customers and investors on cl&#8230;&quot;,&quot;cta&quot;:null,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Unpacking ESG: Why Cybersecurity Deserves a Seat at the Table&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:58943547,&quot;name&quot;:&quot;Dennis Lindwall&quot;,&quot;bio&quot;:&quot;Banking and Fintech aficionado. Passionately interested in the strategic challenges presented by digital disruption in the financial services industry, regtech and cybersecurity. \nCISO | Ops Resilience | Risk &amp; Governance | FinTech | Consulting&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/34f5dd2d-036c-4391-a0af-8930cca6f200_1024x1024.webp&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2024-08-23T22:26:48.644Z&quot;,&quot;cover_image&quot;:&quot;https://images.unsplash.com/photo-1523287562758-66c7fc58967f?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1Nnx8Y29ycG9yYXRlJTIwZXNnfGVufDB8fHx8MTcyNDQ1MTkwN3ww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://tekk.substack.com/p/unpacking-esg-why-cybersecurity-deserves&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:148058118,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;TEKK 
Talk&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcaafd59c-454f-46f0-9fef-77ca47321c13_1024x1024.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div>]]></content:encoded></item><item><title><![CDATA[SaaS Security Alert: The Hidden Threat of Domain Fronting]]></title><description><![CDATA[Uncover the growing security challenges of domain fronting in SaaS environments and explore effective strategies to protect your organization.]]></description><link>https://www.tekk-talk.com/p/saas-domain-fronting-threat</link><guid isPermaLink="false">https://www.tekk-talk.com/p/saas-domain-fronting-threat</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Thu, 19 Sep 2024 20:48:33 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!-FHn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19b47e72-5318-4848-bb10-2e5123e5aa35_1152x640.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div 
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-FHn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19b47e72-5318-4848-bb10-2e5123e5aa35_1152x640.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-FHn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19b47e72-5318-4848-bb10-2e5123e5aa35_1152x640.jpeg 424w, https://substackcdn.com/image/fetch/$s_!-FHn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19b47e72-5318-4848-bb10-2e5123e5aa35_1152x640.jpeg 848w, https://substackcdn.com/image/fetch/$s_!-FHn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19b47e72-5318-4848-bb10-2e5123e5aa35_1152x640.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!-FHn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19b47e72-5318-4848-bb10-2e5123e5aa35_1152x640.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-FHn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19b47e72-5318-4848-bb10-2e5123e5aa35_1152x640.jpeg" width="1152" height="640" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/19b47e72-5318-4848-bb10-2e5123e5aa35_1152x640.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:640,&quot;width&quot;:1152,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-FHn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19b47e72-5318-4848-bb10-2e5123e5aa35_1152x640.jpeg 424w, https://substackcdn.com/image/fetch/$s_!-FHn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19b47e72-5318-4848-bb10-2e5123e5aa35_1152x640.jpeg 848w, https://substackcdn.com/image/fetch/$s_!-FHn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19b47e72-5318-4848-bb10-2e5123e5aa35_1152x640.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!-FHn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19b47e72-5318-4848-bb10-2e5123e5aa35_1152x640.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">cloud security</figcaption></figure></div><h2>They Know What They&#8217;re Doing, Right?</h2><p>As our products and solutions increasingly rely on Software as a Service (SaaS) to both increase speed to market and optimize costs, businesses are unintentionally opening a Pandora's box of security vulnerabilities. As these cloud-based services, hosted on sprawling content delivery networks (CDNs) like Akamai, become the backbone of our daily operations, they present a double-edged sword. On one side, they offer unprecedented efficiency and scalability. On the other, they create a complex, often opaque security landscape that traditional safeguards struggle to navigate.</p><h3><em>"While SaaS services offer unprecedented efficiency and scalability, they also create a complex security landscape that traditional safeguards struggle to navigate."</em></h3><p>The crux of the problem lies in a defining feature of modern SaaS: pervasive encryption in transit. 
While this encryption, typically implemented through TLS, shields our data from prying eyes, it also blinds our security teams. Many SaaS services are incompatible with "TLS interception," a crucial cybersecurity tool that allows your security operations team to verify the integrity of data transmissions and apply necessary security controls. This incompatibility leaves the door ajar for sophisticated threats like data exfiltration, malware infiltration and, perhaps most alarmingly, "domain fronting": a technique that allows malicious actors to slip past content restrictions undetected.</p><p>Here I intend to unravel the complexities of domain fronting, assess its risks in our SaaS-dependent world, and explore strategies that help bolster our defences against this elusive threat. The digital transformation journey is inevitable for almost every company I can think of, but it doesn't have to be a leap into the unknown. 
Armed with knowledge and strategic foresight, we can harness the power of SaaS while keeping our digital borders secure.</p><h2><strong>Understanding Domain Fronting</strong></h2><p>Domain fronting is a technique that obscures the true destination of HTTPS traffic: malicious actors present a permitted domain in the TLS handshake (the Server Name Indication [SNI] extension) while routing the request to a different, often malicious, destination via the HTTP Host header. Why would anyone do this? Threat actors use domain fronting to bypass security controls and hide malicious traffic by making it look as though it is bound for a trusted site. <br>This makes certain malicious activities, notably malware distribution and data exfiltration, much harder to detect. And while we can detect the initial connection to the front domain, we can't see the actual destination without decrypting the traffic, which is often not possible with modern encryption standards and practices. 
In short, the attack exploits the gap between the domain name declared in the TLS SNI extension and the one carried in the HTTP Host header.</p><p>Here's how it works:</p><ol><li><p>A client initiates a connection to a permitted domain (e.g., a popular CDN).</p></li><li><p>During the TLS handshake, the SNI extension contains this permitted domain.</p></li><li><p>Once the encrypted connection is established, the client sends an HTTP request with a different Host header, pointing to the actual intended destination.</p></li><li><p>The front-end server (often a CDN) routes the request based on the Host header to the actual destination server.</p></li></ol><p>This technique effectively hides the true destination of the traffic from network monitors or censors that only inspect the SNI or IP-level information.</p><h2><strong>Implications and Challenges</strong></h2><p>Because we cannot intercept this traffic, we cannot detect or protect against these abuses. Whilst mitigating controls exist for data exfiltration and malware, there are no alternative controls yet available for "domain fronting". It should be noted that domain fronting is not trivial to execute; specialist technical knowledge is required, and the perpetrator needs to have their own malicious content served on the same CDN or cloud provider being used to facilitate the attack.</p><p>As domain fronting remains a significant challenge, it is essential for organizations to work closely with cloud service providers to push for stronger detection and prevention mechanisms within their networks. By collaborating with key technology partners, we can drive innovations in network security, ensuring that effective solutions are developed and implemented to mitigate these risks across the cloud ecosystem.</p><p><strong>Domain Fronting in SaaS Environments: A Comprehensive Risk Assessment</strong></p><p>The proliferation of SaaS services significantly expands the potential attack surface for domain fronting attempts. 
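</p><p>Before assessing the risk, it helps to make the mechanism concrete. The sketch below (Python standard library only; every domain name and function name is hypothetical, purely for illustration) shows how the four-step flow described above is constructed: the TLS handshake, and therefore the SNI that any network monitor can see, names only the permitted front domain, while the HTTP request inside the encrypted tunnel names the real destination:</p>

```python
import socket
import ssl

# Hypothetical domains, for illustration only.
FRONT_DOMAIN = "cdn.allowed.example"   # permitted domain: visible in DNS and TLS SNI
HIDDEN_DOMAIN = "blocked.example"      # real destination: hidden in the HTTP Host header

def fronted_request(front: str, hidden: str, path: str = "/") -> bytes:
    """Build the HTTP request that travels inside the encrypted tunnel.

    The mismatch is the whole trick: the TLS layer (set up in
    send_fronted below) names `front`, while the Host header,
    which the CDN actually routes on, names `hidden`.
    """
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {hidden}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode()

def send_fronted(front: str, hidden: str) -> None:
    """Open a TLS connection whose DNS lookup and SNI show only `front`."""
    ctx = ssl.create_default_context()
    with socket.create_connection((front, 443)) as raw:
        # server_hostname sets the SNI value a passive monitor can observe.
        with ctx.wrap_socket(raw, server_hostname=front) as tls:
            tls.sendall(fronted_request(front, hidden))

# Show the request as it would appear after decryption at the CDN edge.
print(fronted_request(FRONT_DOMAIN, HIDDEN_DOMAIN).decode())
```

<p>Everything a passive observer can see (the DNS query, the SNI, the server certificate) points at the front domain; only the CDN's routing layer, after decryption, sees the Host header naming the hidden destination. This is precisely why defenders cannot spot the mismatch without TLS interception.</p><p>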
As more business operations move to the cloud, attackers gain additional opportunities to exploit vulnerabilities in these systems. Additionally, as domain fronting becomes more widely adopted by threat actor groups, we can expect increasingly sophisticated techniques to emerge, aimed at evading detection and exploiting this vulnerability more effectively.</p><p>It's crucial to recognize that the threat doesn't solely come from external sources. Insider threats, such as disgruntled employees, could potentially leverage domain fronting for data exfiltration. This internal risk adds another layer of complexity to the security landscape, requiring organizations to balance trust with vigilance in their security strategies.</p><p>Interestingly, the same features that make SaaS solutions so appealing - encryption and ease of use - also create vulnerabilities. While encryption effectively protects data confidentiality, it simultaneously limits our ability to inspect traffic for malicious activities. In fact, encryption is doing exactly what it was designed to do, so the challenge lies not in the technology itself but in finding ways to address the double-edged sword it presents. <br>For example, many SaaS services are incompatible with TLS interception, which further limits our ability to detect potential threats. This leaves us in a position where we must rely on the security measures of our SaaS providers; measures that may not always align with our specific security needs and risk tolerance. So how do we bridge this gap?</p><h3><em>"Domain fronting exploits the very encryption that makes SaaS appealing, potentially enabling data exfiltration or malware distribution."</em></h3><p>The potential impact of a successful domain fronting attack cannot be overstated. At its most severe, it could lead to unauthorized data access or exfiltration, potentially resulting in significant financial losses and long-lasting reputational damage. 
Beyond the immediate impact, organizations must also consider the regulatory implications. In an era of stringent data protection regulations (GDPR, etc.), the inability to detect or prevent domain fronting could lead to non-compliance, resulting in hefty fines and legal complications.</p><p>Moreover, responding to a successful attack demands significant resources and can disrupt business operations. The effort required to investigate the breach, mitigate its impact, and strengthen preventive measures could lead to productivity losses, IT strain, and potential fines or regulatory sanctions.</p><p>Given these risks, it's clear that a robust mitigation strategy is essential. This strategy should embrace a defence-in-depth approach, combining multiple layers of security measures. Network segmentation, strict egress filtering, and advanced behavioural analysis of network traffic can work in concert to create a more resilient security posture. However, technology alone is not enough. Regular employee training about the risks of domain fronting and best practices for secure SaaS usage is crucial in creating a security-conscious culture.</p><p>It's also vital to maintain a continuous monitoring and assessment process. The threat landscape is constantly evolving, and our security measures must evolve with it. Regular evaluation of our SaaS providers' security measures and their alignment with our security requirements should be an integral part of this process.</p><p>While the risks are significant, it's important to balance them against the benefits that SaaS solutions provide. The productivity gains, cost efficiencies, and competitive advantages offered by these services are substantial. Organizations must carefully weigh these benefits against the potential costs of a security breach and the investment required for robust security measures. 
This risk-benefit analysis should inform decisions about SaaS usage and security investments.</p><p>It's crucial to acknowledge that even with the most comprehensive mitigation strategies in place, some level of risk will always remain. This residual risk needs to be clearly understood, documented, and accepted by leadership as part of the organization's overall risk appetite. Regular reassessment of this risk tolerance is necessary as both the threat landscape and business needs evolve over time.</p><p>In conclusion, while the risk of domain fronting in SaaS environments is significant and growing, it can be effectively managed through a comprehensive, ongoing risk assessment and mitigation strategy. The key lies in striking a delicate balance between security and the business value derived from SaaS usage. However, understanding the risks is only half the battle. To truly address the challenge of domain fronting, organizations need to implement concrete, effective mitigation strategies. In the following section, we will explore a range of techniques and best practices, from network segmentation to zero trust architectures, that organizations can employ to create a robust defence against this evolving threat.</p><h2><strong>Domain Fronting Mitigation Strategies: A Comprehensive Approach</strong></h2><p>While engaging with your Cloud Service Providers is important, there are several strategies we can implement to mitigate the risks associated with domain fronting. These approaches vary in their effectiveness, cost, and complexity, but each contributes to a more robust defence against this sophisticated threat.</p><p>Network segmentation is a highly effective, albeit moderately costly, approach to containing potential threats. By dividing our network into smaller subnetworks, we can limit the potential impact of a successful attack. 
This strategy effectively contains threats and restricts lateral movement within the network, making it much harder for attackers to exploit domain fronting even if they manage to breach one segment of the network. While the implementation can be complex and potentially disruptive, the long-term benefits for security are substantial.</p><p>Another highly effective strategy, which comes at a lower cost, is strict egress filtering. This involves implementing rigorous rules for outbound traffic, allowing only necessary connections to trusted domains. By carefully controlling what can leave our network, we significantly reduce the attack surface for domain fronting attempts. This approach requires careful planning to ensure legitimate traffic isn't blocked, but once implemented, it provides a strong defence against unauthorized communications.</p><p>Behavioural analysis of network traffic offers a more dynamic approach to identifying potential domain fronting attempts. While moderately expensive and complex to implement, this strategy involves deploying advanced security analytics tools to detect anomalies in network traffic patterns. These tools can potentially identify domain fronting attempts based on unusual traffic behaviours, providing an additional layer of defence that can adapt to new threats. The effectiveness of this approach can be quite high, especially when combined with other strategies, though it does require ongoing maintenance and tuning to remain effective.</p><p>A relatively low-cost measure that can enhance our overall security posture is the use of Encrypted Server Name Indication (ESNI) where possible. By enabling ESNI on supported servers and clients, we can enhance privacy and make it more difficult for attackers to exploit SNI information in their domain fronting attempts. 
While the effectiveness is moderate and dependent on widespread support, the low cost and ease of implementation make this a worthwhile addition to our security toolkit.</p><p>Finally, for organizations looking for a comprehensive, albeit high-cost solution, implementing a Zero Trust Network Access (ZTNA) model can provide significant protection against domain fronting and a host of other threats. This approach involves verifying every access attempt, regardless of its source, operating on the principle of "never trust, always verify." While the implementation of a zero-trust architecture is complex and costly, it offers a high level of effectiveness in reducing the risk of unauthorized access, even if domain fronting is successful. This model represents a fundamental shift in network security thinking and can provide long-term benefits that extend far beyond just mitigating domain fronting risks.</p><p>In considering these strategies, it's important to recognize that a layered approach, implementing multiple complementary measures, often provides the most robust defence. Starting with lower-cost, high-impact measures like egress filtering, and gradually building up to more complex solutions like network segmentation and behavioural analysis, can help organizations balance security needs with budget constraints. The key is to begin strengthening our defences now, rather than waiting for perfect solutions from third-party providers.</p><h2><strong>Cost-Effective Approach to Mitigation</strong></h2><p>Considering the balance between effectiveness and cost, a phased approach to implementation addresses this problem as a journey:</p><p><strong>Start with Strict Egress Filtering</strong>: This offers a high impact at a relatively low cost. 
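</p><p>Reduced to code, this first step is an allowlist decision on every outbound connection; and where TLS interception is possible, the same chokepoint can also flag the SNI/Host mismatch that characterises domain fronting. A toy sketch of those two checks (hypothetical domains and function names; a real deployment would live in a proxy or firewall policy, not application code):</p>

```python
from fnmatch import fnmatch

# Hypothetical allowlist of trusted egress destinations.
EGRESS_ALLOWLIST = {"cdn.allowed.example", "*.trusted-saas.example"}

def _norm(domain: str) -> str:
    """Normalise a domain name for comparison (case, trailing dot)."""
    return domain.lower().rstrip(".")

def egress_permitted(sni: str) -> bool:
    """Allow an outbound TLS connection only if its SNI matches the allowlist."""
    return any(fnmatch(_norm(sni), pattern) for pattern in EGRESS_ALLOWLIST)

def looks_fronted(sni: str, host_header: str) -> bool:
    """Flag a possible domain-fronting attempt: SNI and Host header disagree.

    Only observable where TLS interception is in place, since the Host
    header travels inside the encrypted tunnel.
    """
    return _norm(sni) != _norm(host_header)
```

<p>A deny-by-default policy built on checks like these blocks the front domain outright unless it is genuinely needed, which is why egress filtering removes so much of the attack surface at comparatively little cost.</p><p>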
By limiting outbound connections to only necessary and trusted domains, you can significantly reduce the risk of domain fronting.</p><p><strong>Implement Basic Network Segmentation</strong>: While full segmentation can be costly, starting with basic segmentation of critical assets can provide substantial security benefits at a moderate cost.</p><p><strong>Gradually Introduce Behavioural Analysis</strong>: Begin with basic traffic analysis and gradually invest in more advanced behavioural analysis tools as budget allows.</p><p><strong>Enable &#8216;Encrypted&nbsp;Server Name Indication&#8217; (ESNI) Where Supported</strong>: This is a low-cost measure that can be implemented on supported systems to enhance overall security.</p><p><strong>Long-term Goal - Zero Trust</strong>: While costly and complex, moving towards a zero-trust model provides comprehensive protection against various threats, including domain fronting.</p><h2><strong>Looking forward</strong></h2><p>The widespread adoption of SaaS solutions presents a double-edged sword for organizations: while offering unprecedented efficiency and scalability, it also introduces complex security challenges like domain fronting. This sophisticated technique exploits the very encryption that makes SaaS appealing, potentially enabling data exfiltration or malware distribution. Even security-conscious organizations find themselves grappling with this issue, often constrained by the critical nature of their SaaS dependencies. While strategies such as network segmentation, strict egress filtering, and behavioural analysis can mitigate risks, the rapidly evolving threat landscape demands ongoing vigilance. As we navigate this intricate security terrain, the path forward lies in fostering stronger collaborations between organizations and their SaaS providers, continuous adaptation of security measures, and a delicate balance between leveraging SaaS benefits and maintaining robust security postures. 
The complexity of this challenge underscores the need for innovative solutions and a shared responsibility in safeguarding our interconnected digital ecosystem.</p><p><a href="https://blogs.juniper.net/en-us/threat-research/abused-cdns-from-speedy-content-to-stealthy-malware">Examples and cases: Abused CDNs: From Speedy Content to Stealthy Malware</a></p><p><a href="https://blog.talosintelligence.com/attackers-use-domain-fronting-technique/">Domain Fronting in an attack: Attackers use domain fronting technique to target Myanmar with Cobalt Strike</a></p><p><a href="https://doi.org/10.1016/j.cose.2024.103976">Additional Reading: Detecting network covert channel of domain fronting with throughput fluctuation</a></p><p>If you found this interesting, you might also like this:</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;01aba235-f314-4a3b-9a31-b8fb01cb9fce&quot;,&quot;caption&quot;:&quot;As cyber threats continue to advance, securing systems with strong authentication remains essential. Yet, hidden within many external facing systems around the globe is a vulnerability so basic that it often goes unnoticed. Basic Authentication, a method from the early days of the internet, still poses a significant security risk in many organizat&#8230;&quot;,&quot;cta&quot;:null,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Easy Hack You Never Saw Coming&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:58943547,&quot;name&quot;:&quot;Dennis Lindwall&quot;,&quot;bio&quot;:&quot;Banking and Fintech aficionado. Passionately interested in the strategic challenges presented by digital disruption in the financial services industry, regtech and cybersecurity. 
\nCISO | Ops Resilience | Risk &amp; Governance | FinTech | Consulting&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/34f5dd2d-036c-4391-a0af-8930cca6f200_1024x1024.webp&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2024-08-23T22:05:40.846Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa040ac-a158-4233-ad5a-147d6eaeb394_1152x640.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://tekk.substack.com/p/the-easy-hack-you-never-saw-coming&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:148041014,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;TEKK Talk&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcaafd59c-454f-46f0-9fef-77ca47321c13_1024x1024.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p> </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.tekk-talk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading TEKK Talk! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Cybersecurity's Impact on ESG]]></title><description><![CDATA[The New Frontier in Corporate Sustainability]]></description><link>https://www.tekk-talk.com/p/cybersecurity-sustainability-esg-impact</link><guid isPermaLink="false">https://www.tekk-talk.com/p/cybersecurity-sustainability-esg-impact</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Sat, 24 Aug 2024 18:17:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!zIHa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1612622f-828f-428b-a44f-1012e917e315_1792x1024.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zIHa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1612622f-828f-428b-a44f-1012e917e315_1792x1024.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zIHa!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1612622f-828f-428b-a44f-1012e917e315_1792x1024.webp 424w, 
https://substackcdn.com/image/fetch/$s_!zIHa!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1612622f-828f-428b-a44f-1012e917e315_1792x1024.webp 848w, https://substackcdn.com/image/fetch/$s_!zIHa!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1612622f-828f-428b-a44f-1012e917e315_1792x1024.webp 1272w, https://substackcdn.com/image/fetch/$s_!zIHa!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1612622f-828f-428b-a44f-1012e917e315_1792x1024.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zIHa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1612622f-828f-428b-a44f-1012e917e315_1792x1024.webp" width="1456" height="832" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1612622f-828f-428b-a44f-1012e917e315_1792x1024.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:854080,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zIHa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1612622f-828f-428b-a44f-1012e917e315_1792x1024.webp 424w, 
https://substackcdn.com/image/fetch/$s_!zIHa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1612622f-828f-428b-a44f-1012e917e315_1792x1024.webp 848w, https://substackcdn.com/image/fetch/$s_!zIHa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1612622f-828f-428b-a44f-1012e917e315_1792x1024.webp 1272w, https://substackcdn.com/image/fetch/$s_!zIHa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1612622f-828f-428b-a44f-1012e917e315_1792x1024.webp 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>In an era where digital transformation intersects with corporate responsibility, cybersecurity has become an increasingly significant factor in shaping an organisation's sustainability profile. Traditionally, sustainability has been associated with renewable energy initiatives and efforts to reduce carbon footprints. However, in our interconnected world, the security of digital assets and information systems is also crucial for a company's long-term viability and societal impact.</p><div class="preformatted-block" data-component-name="PreformattedTextBlockToDOM"><pre class="text"><strong>"Cybersecurity is no longer just a technical issue; it is a critical factor in corporate sustainability, shaping risk management, stakeholder trust, and long-term resilience."</strong></pre></div><p>Cybersecurity practices extend beyond technical safeguards; they are integral to a company's risk management strategy, governance structure, and social responsibility efforts. As such, these practices profoundly influence corporate sustainability ratings, often in ways that are not immediately apparent. 
This article delves into the complex relationship between cybersecurity practices and corporate sustainability ratings, illustrating how robust digital defences can enhance a company's overall sustainability profile and its appeal to socially conscious investors and stakeholders.</p><h2><strong>The Strategic Imperative of Cybersecurity in Sustainability</strong></h2><p>As businesses undergo digital transformation, cybersecurity has emerged as a strategic imperative, not just a technical necessity<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. Companies increasingly recognise that a proactive cybersecurity strategy is essential for safeguarding their competitive edge, ensuring business continuity, and maintaining investor confidence. The strategic importance of cybersecurity becomes particularly evident as cyber threats evolve in complexity and scale.</p><p>A robust cybersecurity framework protects a company's intellectual property, customer data, and critical infrastructure, thereby preventing potential losses that could significantly impact a company's reputation and financial stability. 
Moreover, by incorporating cybersecurity into their broader risk management and governance strategies, companies demonstrate a commitment to transparency, accountability, and long-term resilience, all critical factors for achieving high sustainability ratings<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.</p><h3><strong>Cybersecurity's Role in Broader ESG Trends</strong></h3><p>Cybersecurity is becoming a crucial component within the broader ESG framework, influencing trends such as digital ethics, corporate transparency, and stakeholder capitalism. These trends highlight the need for companies to protect digital rights and maintain trust in an era of increasing digital dependence.</p><p><strong>Digital Ethics and Data Privacy:</strong> Incorporating digital ethics into ESG practices involves ensuring that companies respect data privacy and use data responsibly. Robust cybersecurity measures help protect sensitive information, prevent misuse, and reinforce a company's commitment to ethical standards. As data breaches become more frequent and severe, organisations that prioritise cybersecurity demonstrate their dedication to ethical conduct and social responsibility.</p><p><strong>Corporate Transparency:</strong> Transparency in cybersecurity practices and incident reporting reflects a company's commitment to good governance. Companies that disclose their cybersecurity policies and vulnerabilities, while also managing them effectively, demonstrate accountability. This openness fosters trust among stakeholders and positively influences their sustainability scores.</p><p><strong>Stakeholder Capitalism:</strong> In the context of stakeholder capitalism, cybersecurity is vital not just for protecting shareholders but for safeguarding all stakeholders, including customers, employees, and partners. 
Companies that prioritise cybersecurity are seen as responsible entities committed to protecting the interests of all stakeholders, thereby enhancing their social sustainability profile.</p><h3><strong>Actionable Insights for Investors and Companies</strong></h3><p>To effectively integrate cybersecurity into sustainability strategies and assessments, both investors and companies can adopt specific approaches.</p><p>For Investors:</p><ul><li><p>Evaluate Board Oversight: Assess whether a company's board includes dedicated oversight for cybersecurity and whether cybersecurity is integrated into broader governance discussions. Board-level attention to cybersecurity indicates its strategic importance within the organisation.</p></li><li><p>Review Incident Response Capabilities: Look for evidence of robust incident response plans, including how quickly a company can detect, contain, and recover from cyber incidents. Effective incident response is crucial for minimising the impact of breaches.</p></li><li><p>Analyse Cybersecurity Investments: Examine the proportion of the IT budget allocated to cybersecurity and whether there are ongoing investments in advanced security technologies, such as AI-driven threat detection. A commitment to continuous improvement in cybersecurity reflects an organisation's proactive stance on risk management.</p></li></ul><p>For Companies:</p><ul><li><p>Develop Comprehensive Cybersecurity Policies: Ensure that cybersecurity policies cover all aspects of digital risk management, from data protection to incident response. Comprehensive policies help mitigate risks and enhance resilience.</p></li><li><p>Conduct Regular Risk Assessments: Regularly assess cybersecurity risks and update strategies to adapt to evolving threats. 
This continuous improvement approach ensures that defences remain robust against new and emerging risks.</p></li><li><p>Enhance Transparency: Provide clear communication about cybersecurity efforts and incidents to build trust with stakeholders and improve governance scores. Transparency in cybersecurity practices is a hallmark of good corporate governance.</p></li></ul><h2><strong>Emerging Metrics and Standards in Cybersecurity for ESG</strong></h2><p>As sustainability rating agencies evolve to incorporate cybersecurity factors, several key metrics are emerging to assess a company's digital resilience:</p><ul><li><p>Cyber Resilience Score: Measures a company's ability to prevent, detect, and recover from cyber incidents, reflecting its overall digital defence capabilities.</p></li><li><p>Data Protection Index: Evaluates the robustness of data protection measures and compliance with relevant regulations, such as GDPR and CCPA.</p></li><li><p>Cyber Governance Maturity: Assesses the integration of cybersecurity into corporate governance structures, including board oversight, cybersecurity policies, and incident management frameworks.</p></li><li><p>Incident Response Transparency: Reflects the company's openness in disclosing and addressing cybersecurity incidents, which can influence governance and social responsibility scores.</p></li><li><p>Cybersecurity Investment Ratio: Compares cybersecurity spending to the overall IT budget or revenue, indicating a company's commitment to digital security.</p></li></ul><h3><strong>Addressing the Dynamic Nature of Cyber Threats and Building Resilience</strong></h3><p>The rapidly evolving nature of cyber threats requires companies to adopt a continuous improvement approach to cybersecurity. This involves not only implementing robust security measures but also fostering a culture of security throughout the organisation. 
Cybersecurity should be seen as an ongoing process that necessitates regular updates, training, and adaptation to new threats.</p><p><strong>Building Cyber Resilience:</strong> Organisations can enhance their resilience by learning from past incidents and integrating those lessons into future strategies. This approach ensures that cybersecurity defences are constantly evolving, ready to counter new threats, and aligned with the company's overall sustainability goals.</p><p><strong>Creating a Culture of Security:</strong> Establishing a culture of security involves making cybersecurity a shared responsibility across all organisational levels. By training employees, encouraging secure practices, and promoting awareness, companies can reduce the likelihood of human error, a common cause of security breaches.</p><h2><strong>Future Trends and the Role of Technology in Cybersecurity and ESG</strong></h2><p>Emerging technologies such as AI, machine learning, and blockchain are transforming cybersecurity practices, offering new ways to enhance digital defences and, consequently, ESG performance.</p><p><strong>AI and Machine Learning:</strong> These technologies are increasingly used for threat detection and response, providing faster and more accurate identification of potential threats. 
Their integration into cybersecurity frameworks can significantly improve a company's resilience and sustainability rating.</p><p><strong>Blockchain for Security and Transparency:</strong> Blockchain technology offers potential solutions for secure data management and transparency in transactions, contributing to both cybersecurity and governance goals within the ESG framework.</p><h3><strong>Case Studies: Learning from Success and Failure</strong></h3><p>Real-world examples highlight the impact of cybersecurity on sustainability ratings, offering valuable lessons for companies aiming to enhance their ESG profiles.</p><ul><li><p><strong>Equifax Data Breach:</strong> In 2017, Equifax experienced a massive data breach that exposed the personal information of over 147 million consumers. The incident led to severe financial losses, regulatory penalties, and a significant downgrade in its sustainability rating. This breach underscored the importance of robust cybersecurity practices and the consequences of failing to implement them effectively. Following the breach, Equifax invested heavily in upgrading its cybersecurity infrastructure and improving transparency, which eventually helped recover some of its ESG standing.</p></li><li><p><strong>Microsoft's Proactive Cybersecurity Strategy:</strong> Microsoft has invested significantly in cybersecurity, integrating advanced threat detection systems and fostering a culture of continuous improvement and transparency. These efforts have positively impacted Microsoft's sustainability ratings, demonstrating that proactive cybersecurity strategies can enhance corporate reputation and attract socially responsible investors.</p></li></ul><h3><strong>The Role of Supply Chain Cybersecurity in Sustainability</strong></h3><p>A company's cybersecurity is only as strong as its weakest link, often found in its supply chain. 
As part of their sustainability strategies, companies are increasingly assessing the cybersecurity practices of their suppliers and partners to mitigate third-party risks.</p><p><strong>Supply Chain Risk Management:</strong> Effective supply chain cybersecurity involves evaluating third-party risk, establishing cybersecurity requirements in supplier contracts, and coordinating incident response efforts across the supply chain. This comprehensive approach ensures that the entire ecosystem is secure, thereby enhancing the company's overall sustainability profile.</p><h3><strong>Geopolitical and Regulatory Impacts on Cybersecurity and ESG</strong></h3><p>The geopolitical and regulatory landscapes significantly impact how companies approach cybersecurity and sustainability. Different regions have varying requirements for data protection and cybersecurity, which affect corporate strategies and sustainability ratings.</p><p><strong>Navigating Regulatory Landscapes:</strong> Understanding the regulatory environment across different regions is crucial for global companies. Adapting cybersecurity strategies to comply with diverse regulations can improve a company's governance score and reduce the risk of penalties and reputational damage.</p><p><strong>Mitigating Geopolitical Risks:</strong> Geopolitical tensions can exacerbate cybersecurity risks, such as state-sponsored attacks or cross-border data breaches. Companies must develop strategies to mitigate these risks, demonstrating robust cybersecurity practices to enhance their ESG profiles.</p><h2><strong>Summary</strong></h2><p>The influence of cybersecurity practices on corporate sustainability ratings is increasingly significant. 
As digital technologies become integral to every aspect of business operations, the security of these systems becomes synonymous with the security of the business itself.</p><p>Forward-thinking companies recognise that robust cybersecurity is not just a technical necessity but a key driver of their sustainability profile. By investing in strong cyber defences, fostering a culture of digital responsibility, and transparently communicating their efforts, organisations can significantly enhance their sustainability ratings.</p><p>For investors, regulators, and other stakeholders, understanding the link between cybersecurity and corporate sustainability is crucial. It provides a more comprehensive view of a company's risk profile, governance quality, and commitment to social responsibility in the digital age.</p><p>As we move further into an era where digital and physical realities are increasingly intertwined, the concept of "green screens", where cybersecurity and sustainability converge, will become an essential lens through which corporate responsibility is viewed and evaluated. Companies that excel in this area will not only be more resilient to digital threats but will also be better positioned to thrive in a world where sustainability and technology go hand in hand.</p><p>Here are some links for additional deep dives into risk management and ESG: <a href="https://doi.org/10.1007/978-3-030-38858-4_6">The Evolving Risk Management Opportunity and Thinking Sustainability First</a> and <a href="https://doi.org/10.1108/IJPPM-10-2023-0582">Exploring the effect of enterprise risk management for ESG risks towards green growth</a>.</p><p>Want to stay updated on the latest in cybersecurity, technology and sustainability? Subscribe to our newsletter for weekly insights and analysis.</p><p>This article is part of a series of articles written with a focus on Cybersecurity and ESG. 
The first article in the series is this one:</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;a6b5d754-f80f-482e-981c-bf10e4fa8a3f&quot;,&quot;caption&quot;:&quot;As the world continues to navigate the challenges of climate change, social inequality, and economic disruption, the role of Environmental, Social, and Governance (ESG) frameworks in guiding corporate responsibility has never been more crucial. With an increasing focus by the general public, customers and investors on cl&#8230;&quot;,&quot;cta&quot;:null,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Unpacking ESG: Why Cybersecurity Deserves a Seat at the Table&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:58943547,&quot;name&quot;:&quot;Dennis Lindwall&quot;,&quot;bio&quot;:&quot;Banking and Fintech aficionado. Passionately interested in the strategic challenges presented by digital disruption in the financial services industry, regtech and cybersecurity. 
\nCISO | Ops Resilience | Risk &amp; Governance | FinTech | Consulting&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d1faa622-157f-4c11-b86f-5cc03a32ca10_144x144.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2024-08-23T22:26:48.644Z&quot;,&quot;cover_image&quot;:&quot;https://images.unsplash.com/photo-1523287562758-66c7fc58967f?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1Nnx8Y29ycG9yYXRlJTIwZXNnfGVufDB8fHx8MTcyNDQ1MTkwN3ww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://tekk.substack.com/p/unpacking-esg-why-cybersecurity-deserves&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:148058118,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;Dennis&#8217;s Substack&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1faa622-157f-4c11-b86f-5cc03a32ca10_144x144.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p> </p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Kibsey, S., Kibsey, S.D., Addas, A., Krosinsky, C. (2020). The Evolving Risk Management Opportunity and Thinking Sustainability First. In: Walker, T., Gramlich, D., Bitar, M., Fardnia, P. (eds) Ecological, Societal, and Technological Risks and the Financial Sector. Palgrave Studies in Sustainable Business In Association with Future Earth. Palgrave Macmillan, Cham. 
https://doi.org/10.1007/978-3-030-38858-4_6</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p><a href="https://www.emerald.com/insight/search?q=Syed%20Quaid%20Ali%20Shah">Shah, S.Q.A.</a>, <a href="https://www.emerald.com/insight/search?q=Fong-Woon%20Lai">Lai, F.-W.</a>, <a href="https://www.emerald.com/insight/search?q=Muhammad%20Kashif%20Shad">Shad, M.K.</a>, <a href="https://www.emerald.com/insight/search?q=Salaheldin%20Hamad">Hamad, S.</a> and <a href="https://www.emerald.com/insight/search?q=Nejla%20Ould%20Daoud%20Ellili">Ellili, N.O.D.</a> (2024), "Exploring the effect of enterprise risk management for ESG risks towards green growth", <em><a href="https://www.emerald.com/insight/publication/issn/1741-0401">International Journal of Productivity and Performance Management</a></em>, Vol. ahead-of-print No. ahead-of-print. 
<a href="https://doi.org/10.1108/IJPPM-10-2023-0582">https://doi.org/10.1108/IJPPM-10-2023-0582</a></p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Unpacking ESG: Why Cybersecurity Deserves a Seat at the Table]]></title><description><![CDATA[As the world continues to navigate the challenges of climate change, social inequality, and economic disruption, the role of Environmental, Social, and Governance (ESG) frameworks in guiding corporate responsibility has never been more crucial.]]></description><link>https://www.tekk-talk.com/p/unpacking-esg-why-cybersecurity-deserves</link><guid isPermaLink="false">https://www.tekk-talk.com/p/unpacking-esg-why-cybersecurity-deserves</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Fri, 23 Aug 2024 22:26:48 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1523287562758-66c7fc58967f?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1Nnx8Y29ycG9yYXRlJTIwZXNnfGVufDB8fHx8MTcyNDQ1MTkwN3ww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1523287562758-66c7fc58967f?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1Nnx8Y29ycG9yYXRlJTIwZXNnfGVufDB8fHx8MTcyNDQ1MTkwN3ww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1523287562758-66c7fc58967f?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1Nnx8Y29ycG9yYXRlJTIwZXNnfGVufDB8fHx8MTcyNDQ1MTkwN3ww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 424w, 
https://images.unsplash.com/photo-1523287562758-66c7fc58967f?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1Nnx8Y29ycG9yYXRlJTIwZXNnfGVufDB8fHx8MTcyNDQ1MTkwN3ww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1523287562758-66c7fc58967f?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1Nnx8Y29ycG9yYXRlJTIwZXNnfGVufDB8fHx8MTcyNDQ1MTkwN3ww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1523287562758-66c7fc58967f?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1Nnx8Y29ycG9yYXRlJTIwZXNnfGVufDB8fHx8MTcyNDQ1MTkwN3ww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1523287562758-66c7fc58967f?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1Nnx8Y29ycG9yYXRlJTIwZXNnfGVufDB8fHx8MTcyNDQ1MTkwN3ww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080" width="5760" height="3840" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1523287562758-66c7fc58967f?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1Nnx8Y29ycG9yYXRlJTIwZXNnfGVufDB8fHx8MTcyNDQ1MTkwN3ww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3840,&quot;width&quot;:5760,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;low-angle photography of man in the middle of buidligns&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="low-angle photography of man in the middle of buidligns" title="low-angle photography of man in the middle of buidligns" 
srcset="https://images.unsplash.com/photo-1523287562758-66c7fc58967f?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1Nnx8Y29ycG9yYXRlJTIwZXNnfGVufDB8fHx8MTcyNDQ1MTkwN3ww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1523287562758-66c7fc58967f?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1Nnx8Y29ycG9yYXRlJTIwZXNnfGVufDB8fHx8MTcyNDQ1MTkwN3ww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1523287562758-66c7fc58967f?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1Nnx8Y29ycG9yYXRlJTIwZXNnfGVufDB8fHx8MTcyNDQ1MTkwN3ww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1523287562758-66c7fc58967f?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1Nnx8Y29ycG9yYXRlJTIwZXNnfGVufDB8fHx8MTcyNDQ1MTkwN3ww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" 
width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="true">Razvan Chisu</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p>As the world continues to navigate the challenges of climate change, social inequality, and economic disruption, the role of Environmental, Social, and Governance (ESG) frameworks in guiding corporate responsibility has never been more crucial. With the general public, customers, and investors increasingly focused on climate change and on ethical, socially responsible investment, no company in the public eye can afford to ignore ESG. More importantly, ESG has evolved from a niche concern for socially conscious investors into a mainstream imperative for companies seeking to demonstrate their commitment to sustainable practices and long-term value creation, and increasingly into a moral and existential one. However, as most businesses increasingly operate in a digital-first world, away from traditional &#8220;bricks and mortar&#8221; operations, a critical element is often missing from these frameworks: cybersecurity.</p><div class="pullquote"><p><em><strong>"ESG has evolved from a niche concern for socially conscious investors into a mainstream imperative for companies seeking to demonstrate their commitment to sustainable practices and long-term value creation."</strong></em></p></div><p>This article explores the growing importance of ESG for corporations and makes the case for why cybersecurity should be considered a distinct component within ESG metrics and assessments. 
In doing so, we aim to provide a comprehensive view of organizational sustainability in the digital age and highlight the strategic value of robust cybersecurity practices.</p><h1><strong>Why ESG Matters for Corporations Today</strong></h1><h3><strong>Investor Expectations and Market Demands</strong></h3><p>Investor expectations have shifted dramatically in recent years. ESG performance is now seen as a key indicator of a company's long-term viability and risk management capabilities. Institutional investors, such as pension funds and mutual funds, are increasingly incorporating ESG criteria into their investment decisions, recognizing that companies with strong ESG performance are often better positioned to mitigate risks and capitalize on opportunities. Conversely, companies are also being de-selected from investment opportunities and fund portfolios because of a lack of robust ESG commitment.</p><p>Furthermore, individual investors are becoming more discerning, looking beyond traditional financial metrics to assess a company&#8217;s ethical footprint and commitment to sustainability. 
ESG metrics have become an essential part of the due diligence process, influencing everything from stock prices to corporate valuations. Companies that perform well on ESG metrics are often rewarded with lower costs of capital and enhanced access to markets.</p><h3><strong>Regulatory Pressures and Compliance Requirements</strong></h3><p>Governments and regulatory bodies worldwide are imposing stricter reporting requirements related to ESG. In the European Union, for example, the Sustainable Finance Disclosure Regulation (SFDR) mandates that financial market participants provide detailed disclosures on how they integrate ESG factors into their investment processes. Similarly, the U.S. Securities and Exchange Commission (SEC) has proposed rules that would require companies to disclose climate-related risks and their impact on business operations.</p><p>These regulatory pressures underscore the growing importance of ESG for corporate governance and compliance. Companies that fail to meet these requirements risk facing significant penalties, reputational damage, and loss of investor confidence.</p><h3><strong>Reputation and Brand Value</strong></h3><p>A company&#8217;s reputation and brand value are increasingly tied to its ESG performance. Consumers and employees are more likely to support companies that demonstrate a strong commitment to environmental stewardship, social responsibility, and ethical governance. This shift reflects a broader societal trend towards valuing sustainability and ethical conduct.</p><p>Companies with robust ESG strategies are often perceived as more trustworthy, ethical, and forward-thinking. This perception can enhance customer loyalty, attract top talent, and provide a competitive advantage in the marketplace. 
Conversely, poor ESG performance can lead to negative publicity, boycotts, and a loss of market share.</p><h1><strong>The Digital Age and the Need for a New Approach to ESG</strong></h1><div class="pullquote"><p><em><strong>"Cybersecurity is not merely an IT concern; it is a fundamental component of corporate governance and risk management."</strong></em></p></div><h3><strong>Digital Transformation and Emerging Risks</strong></h3><p>As businesses undergo digital transformation, they become increasingly reliant on digital technologies to drive innovation, efficiency, and growth. However, this shift also introduces new risks that traditional ESG frameworks do not fully capture. Cybersecurity threats, such as data breaches, ransomware attacks, and cyber-espionage, pose significant risks to a company&#8217;s operations, reputation, and stakeholder trust.</p><p>These risks are not just theoretical. High-profile cybersecurity incidents, such as the data breaches at Equifax and Marriott, have demonstrated the devastating impact of cyberattacks on a company&#8217;s financial performance and reputation. Such incidents also highlight the need for robust cybersecurity practices as a key component of corporate resilience and sustainability.</p><h3><strong>The Strategic Importance of Cybersecurity</strong></h3><p>Cybersecurity is not merely an IT concern; it is a fundamental component of corporate governance and risk management. Effective cybersecurity measures protect not only a company&#8217;s data but also its intellectual property, operational continuity, and compliance with regulatory requirements. Moreover, cybersecurity is essential for maintaining customer trust and investor confidence, particularly in a world where data privacy and security are becoming increasingly important.</p><p>By safeguarding digital assets and infrastructure, cybersecurity supports a company&#8217;s broader sustainability goals as well as its resilience. 
It enables organizations to innovate with confidence, knowing that their digital investments are protected from emerging threats. This strategic importance makes a compelling case for why cybersecurity deserves a seat at the ESG table.</p><h1><strong>Why Cybersecurity Deserves Separate Consideration in ESG</strong></h1><div class="pullquote"><p><em><strong>"Cybersecurity intersects with all three pillars of ESG, reinforcing its relevance as a separate consideration within these frameworks."</strong></em></p></div><h3><strong>Distinct from General IT Concerns</strong></h3><p>Cybersecurity should be separated from general IT concerns in ESG assessments because it represents a unique risk profile with its own set of challenges, strategies, and impacts. Unlike traditional IT concerns, which focus on the efficiency and functionality of technology systems, cybersecurity specifically addresses the protection of these systems from malicious attacks and unauthorized access.</p><p>Cybersecurity involves managing a dynamic threat landscape, where attackers are constantly evolving their tactics to exploit vulnerabilities. This requires continuous adaptation and proactive risk management, qualities that align closely with the principles of ESG. By treating cybersecurity as a distinct component within ESG frameworks, companies can more accurately assess their digital resilience and align their cybersecurity strategies with their broader sustainability objectives.</p><h3><strong>Impact Across All ESG Pillars</strong></h3><p>Cybersecurity intersects with all three pillars of ESG, reinforcing its relevance as a separate consideration within these frameworks:</p><ul><li><p><strong>Environmental Impact:</strong> Cybersecurity contributes to environmental sustainability by supporting energy-efficient data management practices and protecting critical infrastructure. 
Secure digital processes can reduce the need for physical resources, lowering overall environmental impact.</p></li><li><p><strong>Social Impact:</strong> Cybersecurity plays a vital role in social responsibility by safeguarding personal data, ensuring the continuity of essential services, and protecting stakeholder interests. Strong cybersecurity practices help maintain trust among customers, employees, and partners, which is crucial for social sustainability.</p></li><li><p><strong>Governance Impact:</strong> Robust cybersecurity is a key indicator of good governance, reflecting a company&#8217;s commitment to risk management, transparency, and regulatory compliance. Companies with strong cybersecurity practices demonstrate their ability to proactively address digital risks and maintain high standards of corporate governance.</p></li></ul><h1><strong>The Risks of Ignoring Cybersecurity in ESG Assessments</strong></h1><h3><strong>Incomplete Risk Assessment</strong></h3><p>Excluding cybersecurity from ESG assessments leads to incomplete risk evaluations, potentially exposing investors and other stakeholders to unforeseen risks. In today&#8217;s digital economy, where cyber threats are becoming more sophisticated and frequent, failing to consider cybersecurity as a separate ESG metric could result in significant financial losses and reputational damage.</p><h3><strong>Undervaluation of Strategic Investments</strong></h3><p>Without dedicated metrics for cybersecurity, companies may underinvest in this critical area, undermining their long-term resilience and sustainability. A lack of understanding of the total cost of ownership (TCO) for IT services and operations can lead to the undervaluation of cybersecurity investments. 
By recognizing the value of cybersecurity in reducing long-term costs and enhancing operational efficiency, companies can better justify these investments and improve their ESG scores.</p><h3><strong>Eroding Stakeholder Trust</strong></h3><p>Failing to prioritize cybersecurity within ESG frameworks can erode trust among customers, investors, and partners, particularly in a world where data breaches and cyberattacks are becoming increasingly common. Transparency in cybersecurity practices and proactive communication with stakeholders are essential for building and maintaining trust, which is a cornerstone of strong ESG performance.</p><h1><strong>Building a Case for Integrating Cybersecurity into ESG</strong></h1><h3><strong>Developing Standardized Cybersecurity Metrics</strong></h3><p>To integrate cybersecurity effectively into ESG frameworks, there is a need for standardized metrics and frameworks that can provide a consistent and comprehensive approach to evaluating a company&#8217;s digital resilience. Metrics such as the Cyber Resilience Score, Data Protection Index, and Cyber Governance Maturity can help create benchmarks for assessing cybersecurity performance across industries.</p><h3><strong>Engaging Stakeholders</strong></h3><p>Companies should actively engage with stakeholders on cybersecurity issues, emphasizing transparency and proactive risk management as key elements of a strong ESG strategy. Regular cybersecurity briefings, detailed disclosures in sustainability reports, and proactive communication strategies can help build trust and demonstrate a commitment to digital security.</p><h3><strong>Adapting to a Dynamic Landscape</strong></h3><p>ESG frameworks must be dynamic and adaptable, capable of evolving with the rapidly changing cybersecurity threat landscape. Companies need to adopt a continuous improvement approach to cybersecurity, integrating lessons learned from past incidents and staying ahead of emerging threats. 
This adaptability is crucial for ensuring that ESG assessments remain relevant and reflective of the risks and opportunities facing businesses today.</p><div class="pullquote"><p><em><strong>"In an increasingly digital world, where the lines between the physical and digital realms are becoming blurred, cybersecurity must be seen as a fundamental component of corporate responsibility."</strong></em></p></div><h1><strong>A Call to Action for Corporations and Investors</strong></h1><p>Our journey into the digital age has made it clear that integrating cybersecurity into ESG frameworks is increasingly necessary, if not obligatory. It is time for corporations to recognize that cybersecurity is not merely a component of IT but a critical element of their sustainability strategy. By viewing cybersecurity as a separate consideration within ESG assessments, companies can better align their digital defense strategies with their broader sustainability goals, ultimately fostering greater resilience and trust among stakeholders.</p><p>Investors, regulators, and other stakeholders must also recognize the critical role of cybersecurity in shaping a company&#8217;s long-term viability and responsible business practices. By embracing this new paradigm, we can keep ESG assessments aligned with the real risks and opportunities businesses face today.</p><p>In our digital world, where the lines between the physical and digital realms are becoming blurred, cybersecurity must be seen as a fundamental component of corporate responsibility. Companies that excel in integrating cybersecurity into their ESG frameworks will not only protect themselves against evolving threats but also position themselves as leaders in sustainable and responsible business practices for the future.</p><p>This article is part of a series focused on Cybersecurity and ESG. 
Here is the second article in the series:</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;3236812c-79d3-48c6-ab24-d84810f7c774&quot;,&quot;caption&quot;:&quot;In an era where digital transformation intersects with corporate responsibility, cybersecurity has become an increasingly significant factor in shaping an organisation's sustainability profile. Traditionally, sustainability has been associated with renewable energy initiatives and efforts to reduce carbon footprints. However, in our interconnected world&#8230;&quot;,&quot;cta&quot;:null,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Cybersecurity's Impact on ESG&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:58943547,&quot;name&quot;:&quot;Dennis Lindwall&quot;,&quot;bio&quot;:&quot;Banking and Fintech aficionado. Passionately interested in the strategic challenges presented by digital disruption in the financial services industry, regtech and cybersecurity. 
\nCISO | Ops Resilience | Risk &amp; Governance | FinTech | Consulting&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/34f5dd2d-036c-4391-a0af-8930cca6f200_1024x1024.webp&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2024-08-24T18:17:34.901Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1612622f-828f-428b-a44f-1012e917e315_1792x1024.webp&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://tekk.substack.com/p/cybersecurity-sustainability-esg-impact&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:148082937,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;TEKK Talk&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcaafd59c-454f-46f0-9fef-77ca47321c13_1024x1024.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.tekk-talk.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Dennis&#8217;s Substack! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Easy Hack You Never Saw Coming]]></title><description><![CDATA[Basic Authentication]]></description><link>https://www.tekk-talk.com/p/the-easy-hack-you-never-saw-coming</link><guid isPermaLink="false">https://www.tekk-talk.com/p/the-easy-hack-you-never-saw-coming</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Fri, 23 Aug 2024 22:05:40 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!GfQD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa040ac-a158-4233-ad5a-147d6eaeb394_1152x640.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!GfQD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa040ac-a158-4233-ad5a-147d6eaeb394_1152x640.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!GfQD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa040ac-a158-4233-ad5a-147d6eaeb394_1152x640.jpeg 424w, https://substackcdn.com/image/fetch/$s_!GfQD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa040ac-a158-4233-ad5a-147d6eaeb394_1152x640.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!GfQD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa040ac-a158-4233-ad5a-147d6eaeb394_1152x640.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!GfQD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa040ac-a158-4233-ad5a-147d6eaeb394_1152x640.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!GfQD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa040ac-a158-4233-ad5a-147d6eaeb394_1152x640.jpeg" width="1152" height="640" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/faa040ac-a158-4233-ad5a-147d6eaeb394_1152x640.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:640,&quot;width&quot;:1152,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!GfQD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa040ac-a158-4233-ad5a-147d6eaeb394_1152x640.jpeg 424w, https://substackcdn.com/image/fetch/$s_!GfQD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa040ac-a158-4233-ad5a-147d6eaeb394_1152x640.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!GfQD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa040ac-a158-4233-ad5a-147d6eaeb394_1152x640.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!GfQD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa040ac-a158-4233-ad5a-147d6eaeb394_1152x640.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">hacker</figcaption></figure></div><p>As cyber threats continue to advance, securing systems with strong 
authentication remains essential. Yet, hidden within many external facing systems around the globe is a vulnerability so basic that it often goes unnoticed. Basic Authentication, a method from the early days of the internet, still poses a significant security risk in many organizations, providing an easy entry point for cybercriminals that is frequently overlooked.</p><p><strong>Overview of Basic Authentication</strong></p><p>Basic Authentication, outlined in the HTTP specification, is a straightforward authentication method built into the HTTP protocol. It works by sending credentials as user ID/password pairs, encoded in base64, within the HTTP Authorization header. Initially favoured for its ease of use during the early stages of web development, Basic Authentication was once a common approach to securing web applications.</p><p>However, the simplicity that made Basic Authentication appealing in the past now makes it inadequate for today&#8217;s security needs. 
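To make the base64 point concrete, here is a minimal sketch (the names and credentials are illustrative, not from any real system) showing that the encoding step is trivially reversible:

```python
import base64

def make_basic_header(username: str, password: str) -> str:
    """Build the value of an HTTP Basic Authorization header (RFC 7617)."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

def decode_basic_header(header: str) -> tuple:
    """Recover the credentials from a Basic header -- no secret key needed."""
    token = header.split(" ", 1)[1]
    user, _, pwd = base64.b64decode(token).decode("utf-8").partition(":")
    return (user, pwd)

header = make_basic_header("alice", "s3cret!")
print(header)                       # Basic YWxpY2U6czNjcmV0IQ==
print(decode_basic_header(header))  # ('alice', 's3cret!')
```

Anyone who can observe the request, whether via a proxy or a packet capture on an unencrypted link, can run the second function; without TLS, Basic Authentication amounts to sending passwords in the clear.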
Its continued presence in legacy systems presents a significant risk that many organizations either overlook or underestimate.</p><p><strong>Key Risks Associated with Basic Authentication</strong></p><p>Basic Authentication has several serious vulnerabilities:</p><ul><li><p><strong>Lack of Encryption:</strong> Base64 encoding merely obscures credentials; it does not protect them. This encoding can be easily decoded, exposing credentials to anyone who intercepts the HTTP request.</p></li><li><p><strong>Susceptibility to Credential Interception:</strong> Without additional security layers like TLS/SSL, Basic Authentication transmits credentials with every request, increasing the risk of man-in-the-middle attacks.</p></li><li><p><strong>Vulnerability to Replay Attacks:</strong> If credentials are intercepted, attackers can reuse them to gain unauthorized access, as Basic Authentication lacks built-in protections against replay attacks.</p></li><li><p><strong>Inadequate Credential Storage:</strong> Some implementations store passwords in plain text or use weak hashing algorithms, increasing the risk if the credential store is compromised.</p></li><li><p><strong>Absence of Modern Security Features:</strong> Basic Authentication does not support critical security measures like multi-factor authentication (MFA), password complexity requirements, or robust session management.</p></li></ul><p>These weaknesses expose organizations to significant risks, including unauthorized access, data breaches, and potential compliance violations.</p><p><strong>Real-World Implications</strong></p><p>Relying on Basic Authentication has led to severe consequences for many organizations. For example, in 2019, a major U.S. healthcare provider experienced a data breach affecting over 20 million patients. The breach was caused by an exposed server using Basic Authentication, which allowed attackers easy access to sensitive medical records. 
This incident highlights how a simple oversight in authentication can lead to massive data exposure.</p><p>Similarly, in 2020, a prominent e-commerce platform suffered a significant breach where customer data was stolen over several months. Investigators discovered that the attackers initially gained access through a legacy API endpoint still using Basic Authentication. This case underscores how overlooked vulnerabilities in authentication can lead to prolonged, undetected access to sensitive systems.</p><p>These incidents illustrate not only the immediate impact of data loss and operational disruption but also the long-term effects of eroded customer trust and potential regulatory penalties. The healthcare provider, for instance, faced not only reputational damage but also scrutiny under GDPR as well as HIPAA regulations, which require stringent protection of patient data.</p><p><strong>Signs You Might Be Using Insecure Basic Authentication</strong></p><p>It&#8217;s crucial to identify whether your systems are still using Basic Authentication. 
Look for these signs:</p><ul><li><p>The presence of "Authorization: Basic" headers in HTTP requests.</p></li><li><p>Absence of HTTPS (TLS/SSL) encryption in communications.</p></li><li><p>Legacy systems or APIs that haven&#8217;t been updated in years.</p></li><li><p>Lack of additional authentication factors beyond username and password.</p></li></ul><p>Regular security audits are essential for identifying and addressing these vulnerabilities before they can be exploited.</p><p><strong>Alternatives to Basic Authentication</strong></p><p>There are several more secure alternatives to Basic Authentication:</p><ul><li><p><strong>OAuth 2.0:</strong> An authorization framework that allows applications to gain limited access to user accounts on an HTTP service.</p></li><li><p><strong>OpenID Connect:</strong> Built on top of OAuth 2.0, it adds an identity layer, enabling clients to verify the identity of the end-user.</p></li><li><p><strong>JSON Web Tokens (JWTs):</strong> A compact, URL-safe way of representing claims to be transferred between two parties, commonly used for session management and information exchange in web development.</p></li></ul><p>These methods provide enhanced security through features like token-based authentication, support for MFA, and improved session management.</p><p><strong>Steps for Transitioning Away from Basic Authentication</strong></p><p>Moving away from Basic Authentication requires careful planning:</p><ol><li><p><strong>Audit Current Systems:</strong> Identify all instances of Basic Authentication within your infrastructure.</p></li><li><p><strong>Choose a Suitable Alternative:</strong> Select an authentication method that aligns with your security needs and system architecture.</p></li><li><p><strong>Develop a Migration Strategy:</strong> Plan the transition, considering factors like user impact, system downtime, and resource allocation.</p></li><li><p><strong>Implement in Phases:</strong> Start with non-critical systems to minimize 
disruption and refine the process.</p></li><li><p><strong>Update Client Applications:</strong> Modify client-side code to support the new authentication method.</p></li><li><p><strong>Conduct Thorough Testing:</strong> Ensure the new authentication system functions correctly and doesn&#8217;t introduce new vulnerabilities.</p></li><li><p><strong>Gradually Phase Out Basic Authentication:</strong> Once the new system is reliable, start disabling Basic Authentication, beginning with the least critical systems.</p></li><li><p><strong>Monitor and Adjust:</strong> Continuously monitor the new authentication system for any issues and make adjustments as necessary.</p></li></ol><p><strong>Conclusion</strong></p><p>The ongoing use of Basic Authentication in today&#8217;s digital environment represents a serious and often underestimated security threat. As cyber threats continue to evolve, relying on this outdated authentication method is like leaving the front door of your digital infrastructure wide open.</p><p>Switching to more secure authentication methods is not just advisable; it&#8217;s essential for any organization serious about protecting its assets, data, and reputation. By adopting modern authentication protocols, organizations can significantly strengthen their security, reduce the risk of breaches, and build a foundation of trust in their digital interactions.</p><p>Now is the time to act. Start by assessing whether Basic Authentication is in use in your IT estate and validating all systems and applications. Prioritize migrating to more secure authentication methods, beginning with your most critical assets. Engage with security professionals to develop a comprehensive transition plan tailored to your organization&#8217;s needs and resources.</p><p>For further guidance, explore resources like OWASP&#8217;s Authentication Cheat Sheet, NIST&#8217;s Digital Identity Guidelines, and implementation guides for OAuth 2.0 and JWT. 
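The audit step above can begin with something as simple as inspecting response headers: a server that still challenges clients with Basic Authentication announces it in the WWW-Authenticate header. A minimal, illustrative checker (the sample header values are hypothetical):

```python
def flags_basic_auth(headers: dict) -> bool:
    """Return True if a response's headers advertise HTTP Basic Authentication."""
    # HTTP header names are case-insensitive, so normalize before looking up.
    normalized = {name.lower(): value for name, value in headers.items()}
    challenge = normalized.get("www-authenticate", "")
    return challenge.strip().lower().startswith("basic")

# Hypothetical responses from two endpoints found during an audit:
legacy_api = {"WWW-Authenticate": 'Basic realm="Legacy API"', "Server": "Apache"}
modern_api = {"WWW-Authenticate": 'Bearer realm="api"', "Server": "nginx"}

print(flags_basic_auth(legacy_api))  # True  -> flag this endpoint for migration
print(flags_basic_auth(modern_api))  # False
```

In practice a check like this would run over every endpoint inventoried in the audit, alongside a search of client code, configs, and logs for "Authorization: Basic" values.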
In cybersecurity, proactive steps today can prevent catastrophic breaches tomorrow.</p>]]></content:encoded></item><item><title><![CDATA[Navigating the AI Frontier in Cybersecurity ]]></title><description><![CDATA[Lessons from Microsoft and OpenAI]]></description><link>https://www.tekk-talk.com/p/navigating-the-ai-frontier-in-cybersecurity</link><guid isPermaLink="false">https://www.tekk-talk.com/p/navigating-the-ai-frontier-in-cybersecurity</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Mon, 19 Feb 2024 20:41:36 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1488229297570-58520851e868?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyMnx8ZGlnaXRhbCUyMGV2b2x1dGlvbnxlbnwwfHx8fDE3MjQ0NTU0MjN8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://images.unsplash.com/photo-1488229297570-58520851e868?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyMnx8ZGlnaXRhbCUyMGV2b2x1dGlvbnxlbnwwfHx8fDE3MjQ0NTU0MjN8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1488229297570-58520851e868?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyMnx8ZGlnaXRhbCUyMGV2b2x1dGlvbnxlbnwwfHx8fDE3MjQ0NTU0MjN8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1488229297570-58520851e868?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyMnx8ZGlnaXRhbCUyMGV2b2x1dGlvbnxlbnwwfHx8fDE3MjQ0NTU0MjN8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1488229297570-58520851e868?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyMnx8ZGlnaXRhbCUyMGV2b2x1dGlvbnxlbnwwfHx8fDE3MjQ0NTU0MjN8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1488229297570-58520851e868?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyMnx8ZGlnaXRhbCUyMGV2b2x1dGlvbnxlbnwwfHx8fDE3MjQ0NTU0MjN8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1488229297570-58520851e868?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyMnx8ZGlnaXRhbCUyMGV2b2x1dGlvbnxlbnwwfHx8fDE3MjQ0NTU0MjN8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080" width="4195" height="2802" 
data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1488229297570-58520851e868?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyMnx8ZGlnaXRhbCUyMGV2b2x1dGlvbnxlbnwwfHx8fDE3MjQ0NTU0MjN8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2802,&quot;width&quot;:4195,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;worm's eye-view photography of ceiling&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="worm's eye-view photography of ceiling" title="worm's eye-view photography of ceiling" srcset="https://images.unsplash.com/photo-1488229297570-58520851e868?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyMnx8ZGlnaXRhbCUyMGV2b2x1dGlvbnxlbnwwfHx8fDE3MjQ0NTU0MjN8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1488229297570-58520851e868?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyMnx8ZGlnaXRhbCUyMGV2b2x1dGlvbnxlbnwwfHx8fDE3MjQ0NTU0MjN8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1488229297570-58520851e868?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyMnx8ZGlnaXRhbCUyMGV2b2x1dGlvbnxlbnwwfHx8fDE3MjQ0NTU0MjN8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1488229297570-58520851e868?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyMnx8ZGlnaXRhbCUyMGV2b2x1dGlvbnxlbnwwfHx8fDE3MjQ0NTU0MjN8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div 
class="image-link-expand"></div></div></a><figcaption class="image-caption">Photo by <a href="true">Joshua Sortino</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p>Cybersecurity may be a relatively new domain, but its speed of evolution is close to the speed of light. Looking back at the past 18 months, I note that the rise of artificial intelligence (AI) represents a watershed moment for our profession, offering both unparalleled opportunities and formidable challenges. 
As detailed in recent publications by Microsoft<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> and OpenAI<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>, the dual-edged nature of AI technology is reshaping the cybersecurity landscape, necessitating a nuanced understanding among senior professionals, especially within the banking sector. In this piece I distil some of the key insights from these thought leaders and reflect on the strategic roadmap for harnessing AI's potential while safeguarding against its perils.</p><h3><strong>The Paradox of AI in Cybersecurity</strong></h3><p>At its core, AI promises to revolutionize cybersecurity practices through automation, enhancing threat detection, incident response, and system resilience. The industry&#8217;s foray into leveraging AI in both active and passive defences has already yielded significant advancements, protecting billions of cloud-based transactions daily. 
Similarly, OpenAI, together with the broader commercial AI community, leads the field with a commitment to safe AI utilization, underscoring the technology's potential to improve lives while acknowledging the risks of misuse.</p><p>However, this technological boon is not without its shadow. The same tools designed to fortify our defences can be, and will be, repurposed by adversaries, introducing a new arsenal for cybercriminals and state-affiliated threat actors. These malevolent entities, as both Microsoft and OpenAI report, are increasingly experimenting with AI to refine their attack strategies, posing sophisticated threats to global digital security.</p><h3><strong>A Call for Collaborative Vigilance</strong></h3><p>A recurring theme in the industry today is the imperative for collaborative vigilance. The intersection of AI and cybersecurity is not a battleground for lone warriors; it demands a united front. Intelligence sharing and strategic partnerships are crucial in identifying and neutralizing threats posed by AI-augmented cyber operations. The concerted efforts of these industry giants in disrupting state-affiliated malicious actors exemplify the power of collaboration in safeguarding the digital ecosystem.</p><h3><strong>Strategic Imperatives for Cybersecurity Leadership</strong></h3><p>For senior cybersecurity professionals, particularly in the banking sector where the stakes are exceedingly high, these insights translate into several strategic imperatives. Here I will mention but a few:</p><ol><li><p><strong>Anticipate and Mitigate AI-Driven Threats:</strong> Cyber professionals must stay ahead of the curve, recognizing the potential for AI to be weaponized. This entails not only defending against traditional cyber threats but also preparing for AI-enabled social engineering attacks and other novel vulnerabilities. 
Inevitably, the time to exploit and progress through the attack chain will shorten, leaving us less time to respond. Our IT architecture must therefore be designed with security at its core, and security tooling must have built-in considerations for &#8220;continuous AI security improvement&#8221;. In the shorter term this also means that existing vulnerability management processes will need to be reviewed for agility, as many of the existing SLAs used in control operations and risk management will need to be tightened, both in terms of risk classification and risk appetite and in terms of speed of remediation.</p></li><li><p><strong>Prioritize AI Security and Ethical Development:</strong> Deploying AI in cybersecurity operations requires a security-first approach. This includes continuous refinement of AI models, implementing robust defences against manipulation, and ensuring ethical AI utilization across all operations. From an engineering perspective it is paramount that safeguards are built in to protect against misuse, ensuring transparency in terms of modelling and data use, and fostering a culture of ethical AI within the engineering community.</p></li><li><p><strong>Bolster the Workforce with AI:</strong> In the face of a global talent shortage in cybersecurity, AI emerges as a critical ally. Leveraging AI to augment human capabilities can bridge part of that gap, enhancing the efficiency and effectiveness of cybersecurity teams. But this also brings its own caveats: overreliance on technology, and a failure to understand the limitations and constraints that the data models and frameworks inherently possess. 
It is imperative that senior leadership understands where to automate and leverage AI, not getting caught up in hypothetical efficiency gains and aspirational cost savings, but focusing on the areas of the value chain where net marginal benefits can be realised.</p></li><li><p><strong>Navigate Regulatory Landscapes:</strong> As AI becomes increasingly integral to cybersecurity operations, adhering to regulatory requirements and establishing clear governance around AI use are essential. This ensures compliance, maintains trust, and fosters a responsible technological environment. Regulations like the EU&#8217;s DORA (Digital Operational Resilience Act) will require compliance around the use of AI, and resilience requirements for internal IT operations also extend to Critical Third-Party Providers (CTPPs), which must meet the same standards. Regulators show ever lower tolerance for what may be perceived as weaknesses in resilience, so we need to adopt a proactive approach to AI technology adoption in which regulatory impact is considered upfront.</p></li></ol><h3><strong>Embracing the AI Era with Prudence</strong></h3><p>The insights offered by Microsoft and OpenAI illuminate the path forward for cybersecurity in the AI era. For senior professionals, especially within the sensitive confines of the banking sector, the message is clear: embracing AI's transformative potential is imperative, but so is guarding against its misuse. By fostering a culture of ethical AI use, prioritizing security in AI development, and championing collaborative efforts, we can harness the benefits of AI while mitigating its risks. 
The future of cybersecurity lies in striking this delicate balance, ensuring a safer digital world for all.</p><div><hr></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><a href="https://www.microsoft.com/en-us/security/business/security-insider/reports/cyber-signals/cyber-signals-issue-6-navigating-cyberthreats-and-strengthening-defenses/">https://www.microsoft.com/en-us/security/business/security-insider/reports/cyber-signals/cyber-signals-issue-6-navigating-cyberthreats-and-strengthening-defenses/</a> (Feb 14, 2024)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p><a href="https://openai.com/blog/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors">https://openai.com/blog/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors</a> (Feb 14, 2024)</p></div></div>]]></content:encoded></item><item><title><![CDATA[The Digital Identity 
Conundrum in the Multi-Cloud Era]]></title><description><![CDATA[Picture this: In a major corporation, a security incident sets off alarm bells. The cause? A seemingly routine personnel change. An employee with privileged access to sensitive data in Google Cloud Platform through their Google Workspace account was reassigned to an Azure project. Because the company's Azure environment trusted the same Active Directory credentials, the employee inadvertently retained similar privileged access rights across platforms. This oversight exposed a critical gap in cross-platform identity management&#8212;where access controls from one cloud environment failed to properly translate to another.]]></description><link>https://www.tekk-talk.com/p/the-digital-identity-conundrum-in</link><guid isPermaLink="false">https://www.tekk-talk.com/p/the-digital-identity-conundrum-in</guid><dc:creator><![CDATA[Dennis Lindwall]]></dc:creator><pubDate>Mon, 21 Aug 2023 15:05:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e17af89-a98b-4ad1-8920-2ca0861eccab_1152x640.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!DjJk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e17af89-a98b-4ad1-8920-2ca0861eccab_1152x640.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DjJk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e17af89-a98b-4ad1-8920-2ca0861eccab_1152x640.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!DjJk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e17af89-a98b-4ad1-8920-2ca0861eccab_1152x640.jpeg 848w, https://substackcdn.com/image/fetch/$s_!DjJk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e17af89-a98b-4ad1-8920-2ca0861eccab_1152x640.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!DjJk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e17af89-a98b-4ad1-8920-2ca0861eccab_1152x640.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DjJk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e17af89-a98b-4ad1-8920-2ca0861eccab_1152x640.jpeg" width="1152" height="640" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2e17af89-a98b-4ad1-8920-2ca0861eccab_1152x640.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:640,&quot;width&quot;:1152,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!DjJk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e17af89-a98b-4ad1-8920-2ca0861eccab_1152x640.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!DjJk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e17af89-a98b-4ad1-8920-2ca0861eccab_1152x640.jpeg 848w, https://substackcdn.com/image/fetch/$s_!DjJk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e17af89-a98b-4ad1-8920-2ca0861eccab_1152x640.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!DjJk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e17af89-a98b-4ad1-8920-2ca0861eccab_1152x640.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">digital identity</figcaption></figure></div><p>Picture this: In a major corporation, a security incident sets off alarm bells. The cause? A seemingly routine personnel change. An employee with privileged access to sensitive data in Google Cloud Platform through their Google Workspace account was reassigned to an Azure project. Because the company's Azure environment trusted the same Active Directory credentials, the employee inadvertently retained similar privileged access rights across platforms. This oversight exposed a critical gap in cross-platform identity management, where access controls from one cloud environment failed to translate properly to another.</p><p>This routine personnel transfer laid bare a weakness in the company's Access Management processes. When the employee moved teams, their system access should have been reviewed and adjusted, but wasn't - a classic "mover" scenario that fell through the cracks. As a result, they unknowingly gained access to sensitive Azure datasets unrelated to their new role. When the employee accessed this data, it triggered automated breach detection systems, launching a full security investigation. The incident revealed a subtle but dangerous vulnerability where two separate identity management systems - GCP and Azure - operated on different assumptions about user privileges. 
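</p><p>The "mover" gap is mechanically simple to catch: on every role change, compare the employee's remaining entitlements on each platform against the new role's baseline. The sketch below illustrates the idea; the platform names, roles, and entitlements are invented for the example, not details from the incident:</p>

```python
# Illustrative "mover" review: on a role change, flag any entitlement
# that exceeds the new role's per-platform baseline.
# All role and entitlement names below are made up for the sketch.

ROLE_BASELINES = {
    "azure-project-engineer": {
        "azure": {"reader", "contributor"},
        "gcp": set(),  # the new role should carry no GCP access at all
    },
}

def review_mover(new_role, current_entitlements):
    """Return entitlements that exceed the new role's baseline, per platform."""
    baseline = ROLE_BASELINES[new_role]
    excess = {}
    for platform, granted in current_entitlements.items():
        extra = granted - baseline.get(platform, set())
        if extra:
            excess[platform] = sorted(extra)
    return excess

# The shape of the incident: privileged GCP access survives the move to Azure.
print(review_mover(
    "azure-project-engineer",
    {"azure": {"reader"}, "gcp": {"bigquery.dataOwner"}},
))  # {'gcp': ['bigquery.dataOwner']}
```

<p>Run on every "mover" event, a check like this turns a silent gap into an explicit revocation list.</p><p>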
In retrospect, the risk seems obvious, but it had remained hidden in the complexity of cross-platform access controls.</p><h3><strong>Historical Homogeneity Meets Cloud Complexity</strong></h3><p>In the days of on-premises infrastructure, Identity and Access Management (IAM) followed more straightforward rules. Organizations could establish clear boundaries around roles, permissions, and access within their controlled environments. But as businesses expanded into multiple cloud platforms like Google Cloud, Azure, and AWS, access control became exponentially more complex. This complexity has led to a troubling gap: while permissions multiply across platforms, organizations' understanding of the potential consequences of excessive access rights has diminished. The interconnected nature of these environments creates security blind spots that simply didn't exist in traditional systems.</p><p>Consider how a single employee's digital identity fragments across cloud platforms. In Google Cloud Platform (GCP), they're recognized by their Gmail account, reflecting GCP's integration with Google Workspace. Switch to Azure, and their identity transforms, now tied to Active Directory credentials. Move to AWS, and yet another identity representation emerges. 
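</p><p>Unifying these fragmented identities is, at its core, a correlation problem: every platform-specific identifier must resolve to one canonical employee record. A minimal sketch follows; the identifier formats are invented examples modeled on how each platform typically represents a user:</p>

```python
# Illustrative sketch: correlate one employee's fragmented platform
# identities back to a single canonical record.
# All identifiers and employee IDs below are made up for the example.

PLATFORM_IDENTITIES = [
    ("gcp", "jane.doe@example.com"),                      # Google Workspace account
    ("azure", "jdoe@corp.example.com"),                   # Active Directory UPN
    ("aws", "arn:aws:iam::123456789012:user/jane.doe"),   # IAM user ARN
]

# A directory mapping every platform-specific identifier to one person.
CANONICAL_DIRECTORY = {
    "jane.doe@example.com": "emp-0042",
    "jdoe@corp.example.com": "emp-0042",
    "arn:aws:iam::123456789012:user/jane.doe": "emp-0042",
}

def unify(identities):
    """Group platform identities by canonical employee ID."""
    unified = {}
    for platform, identifier in identities:
        emp = CANONICAL_DIRECTORY.get(identifier, "unmatched")
        unified.setdefault(emp, []).append((platform, identifier))
    return unified

# All three platform identities resolve to the single record 'emp-0042';
# anything the directory cannot match is surfaced under 'unmatched'.
print(unify(PLATFORM_IDENTITIES))
```

<p>The hard part in practice is not the grouping but maintaining the directory itself, which is exactly where the questions below begin.</p><p>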
As organizations struggle to unify these disconnected identity systems into a cohesive framework, they face critical questions: How can identity be managed consistently across platforms? And what constitutes privileged access when permissions don't translate cleanly between environments?</p><h3><strong>Reflections on Identity in the Cloud</strong></h3><ol><li><p><strong>Managing Fragmented Identities:</strong> How can organizations maintain consistent identity controls when employees have different credentials across cloud platforms? What technical and policy solutions can bridge these disconnected identity systems?</p></li><li><p><strong>Harmonizing Policy Frameworks:</strong> Each cloud platform has its own approach to identity management - GCP with Google Workspace, Azure with Active Directory, and others with their unique systems. What strategies can create consistent security policies that work across all these environments?</p></li><li><p><strong>Standardizing Privileged Access:</strong> The definition of "privileged access" varies significantly between platforms, creating security blind spots. How can organizations develop a unified framework for identifying and controlling high-risk permissions across their entire cloud ecosystem?</p></li><li><p><strong>Simplifying Privilege Management:</strong> With the multiplication of identities comes a proliferation of access rights to monitor. What approaches can help security teams maintain comprehensive visibility without creating unsustainable complexity in their privileged access management (PAM) systems?</p></li></ol><h4><strong>Implications for Tomorrow's Enterprises</strong></h4><p>The challenges of identity management across multiple cloud platforms extend far beyond technical complexity; they represent serious business risks that can impact security, compliance, and reputation.</p><p>Consider the consequences of inconsistent IAM policies: When access controls don't align across platforms, security gaps emerge. 
These gaps can lead to data breaches that expose sensitive information, triggering regulatory penalties and eroding stakeholder confidence. The financial impact can be immediate and severe - from regulatory fines to declining stock values as market trust deteriorates.</p><p>Even well-designed identity and privilege management systems can become problematic in today's environment. Processes that functioned effectively in traditional on-premises environments often become inadequate when extended to multi-cloud architectures. What was once a robust implementation for a homogeneous IT landscape now creates operational friction in a hybrid world. This evolution-driven mismatch can prevent employees from accessing resources they legitimately need, causing project delays and hindering incident response during critical situations. Conversely, it may also create excessive permissions across platforms, increasing vulnerability to insider threats and providing attack paths for external threat actors who can exploit compromised credentials to move laterally through systems. </p><h3><strong>Towards a Secure Digital Future</strong></h3><p>The multi-cloud era has arrived, bringing both unprecedented opportunities and complex security challenges. As organizations expand their digital identity frameworks across diverse platforms, they must balance cloud agility with robust security controls.</p><p>Organizations need to conduct thorough assessments of their existing identity and privilege management systems. This evaluation should specifically identify disconnects between traditional on-premises identity approaches and cloud-native requirements. Understanding these gaps is the first step toward developing an integrated identity strategy that works consistently across all environments.</p><p>This isn't a one-time effort. As cloud technologies evolve and the threat landscape shifts, regular reviews of identity and access controls become essential. 
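</p><p>Such reviews need a shared definition of "privileged" before they can compare platforms. One approach is to normalize each cloud's high-risk role names into a single category; the sketch below is illustrative, and the role lists are examples rather than a complete inventory:</p>

```python
# Illustrative sketch: each cloud names its high-risk roles differently,
# so map them into one shared "privileged" category before reviewing access.
# The role names are examples of each platform's built-in roles, not a
# complete or authoritative list.

PRIVILEGED_ROLES = {
    "gcp":   {"roles/owner", "roles/iam.securityAdmin"},
    "azure": {"Owner", "User Access Administrator"},
    "aws":   {"AdministratorAccess", "IAMFullAccess"},
}

def privileged_grants(grants):
    """grants: iterable of (platform, identity, role) tuples.
    Returns identities holding a privileged role on any platform."""
    flagged = {}
    for platform, identity, role in grants:
        if role in PRIVILEGED_ROLES.get(platform, set()):
            flagged.setdefault(identity, []).append((platform, role))
    return flagged

grants = [
    ("gcp", "jane@example.com", "roles/owner"),
    ("azure", "jane@example.com", "Reader"),
    ("aws", "ops@example.com", "AdministratorAccess"),
]
print(privileged_grants(grants))
# {'jane@example.com': [('gcp', 'roles/owner')],
#  'ops@example.com': [('aws', 'AdministratorAccess')]}
```

<p>A mapping like this gives audits one answer to the question "who is privileged anywhere?", whatever each platform happens to call it.</p><p>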
Organizations should implement routine audits of their IAM and PAM implementations, focusing on cross-platform permission mapping. Alongside technical controls, comprehensive training programs help ensure employees understand the security implications of multi-cloud identity management and their role in maintaining secure access practices.</p><h3><strong>In Conclusion: A Gaze Towards Tomorrow</strong></h3><p>Cloud technologies offer transformative benefits in scalability, flexibility, and innovation. However, these advantages come with complex identity management challenges that organizations cannot afford to overlook. As businesses expand across multiple cloud platforms, a comprehensive identity strategy becomes not just beneficial but essential.</p><p>Success in this environment requires more than technical solutions. Organizations must develop a security-focused culture where identity management is understood as a critical business function rather than just an IT concern. This means establishing clear governance models, embracing automation where possible, and ensuring consistent enforcement of access policies across all environments.</p><p>Is your organization prepared to manage identity effectively across your increasingly complex cloud landscape? Those who approach this challenge strategically - with regular assessment, continuous improvement, and cross-platform visibility - will be best positioned to harness the cloud's benefits while maintaining robust security controls.</p>]]></content:encoded></item></channel></rss>