AI Regulation & Compliance: Mapping the Global Landscape
In the opening article of this series, I highlighted how AI is transforming cybersecurity at unprecedented speed. Beyond its impact on tools and tactics, a critical new battlefield has emerged: regulation and compliance. For financial services professionals, this represents yet more regulatory frameworks to monitor, while organizations in traditionally unregulated sectors face GDPR-like challenges. Security teams that once concentrated solely on technical defences must now navigate an evolving maze of global rules - spanning model risk, data protection, and product liability - regardless of whether AI powers customer-facing chatbots or backend decision support systems.
This second instalment examines the diverse regulatory approaches emerging across major global jurisdictions. From the EU's comprehensive framework to the US's sectoral patchwork, from China's state-directed control to the varied models across Asia, cybersecurity teams face a complex tapestry of rules that vary dramatically by region. Understanding these different regulatory philosophies is the first step toward developing effective compliance strategies for AI-powered security tools.
The stakes are clear: inadequate compliance risks penalties and reputational damage, while treating regulation as a mere checklist exercise prevents organizations from capturing AI's full benefits. By mapping the global regulatory landscape, we can better understand the challenges and opportunities that lie ahead for cybersecurity teams navigating this new frontier.
Global Regulatory Frameworks
EU AI Act in Action
The EU AI Act [1], perhaps the most comprehensive piece of legislation in this space, introduces a new level of scrutiny for cybersecurity AI tools, classifying many as “high-risk”. This means strict documentation, oversight, and human review requirements - a direct challenge to AI-driven, real-time security models. For instance, organizations must conduct detailed audits of training data, model performance, and real-world outcomes - an obligation that can clash with the “black box” nature of many sophisticated AI systems.
Specifically, the Act mandates human oversight at critical junctures of the model lifecycle. This requirement can be particularly challenging when dealing with anomaly detection or threat-hunting models that rely on autonomous, real-time responses, and becomes even more challenging when models are developed by third-parties or bought as SaaS solutions. As a result, security and risk teams must build in review processes, often slowing down workflows designed for speed.
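To make this concrete, here is a minimal sketch of how a security team might wrap a human review gate around an otherwise autonomous detection-and-response pipeline while keeping a record of every decision. It is illustrative only: the threshold, queue design, and action names are assumptions, not requirements taken from the Act or from any particular product.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class DetectionEvent:
    source: str                 # sensor or log pipeline that raised the alert
    indicator: str              # what was detected
    model_score: float          # anomaly score produced by the AI model (0.0 - 1.0)
    proposed_action: str        # hypothetical action name, e.g. "isolate_host"


@dataclass
class ReviewRecord:
    event: DetectionEvent
    decision: Optional[str] = None      # "approved" or "rejected", set by a human
    reviewer: Optional[str] = None
    decided_at: Optional[datetime] = None


class OversightGate:
    """Routes high-impact automated actions through a human review queue
    and records every decision so an audit trail exists for later review."""

    def __init__(self, review_threshold: float = 0.5):
        # Below the threshold the action runs automatically; above it,
        # a human must approve before anything is executed.
        self.review_threshold = review_threshold
        self.pending: list[ReviewRecord] = []
        self.audit_log: list[dict] = []

    def handle(self, event: DetectionEvent) -> str:
        if event.model_score < self.review_threshold:
            self._log(event, outcome="auto_executed", reviewer=None)
            return "executed"
        self.pending.append(ReviewRecord(event=event))
        self._log(event, outcome="queued_for_review", reviewer=None)
        return "pending_review"

    def decide(self, record: ReviewRecord, reviewer: str, approve: bool) -> None:
        record.decision = "approved" if approve else "rejected"
        record.reviewer = reviewer
        record.decided_at = datetime.now(timezone.utc)
        self._log(record.event, outcome=record.decision, reviewer=reviewer)

    def _log(self, event: DetectionEvent, outcome: str, reviewer: Optional[str]) -> None:
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "indicator": event.indicator,
            "score": event.model_score,
            "proposed_action": event.proposed_action,
            "outcome": outcome,
            "reviewer": reviewer,
        })
```

Even a gate this simple makes the trade-off explicit: raise the threshold and the system responds faster, but the human-oversight evidence available to an auditor gets thinner.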
The EU AI Act establishes a clear line in the sand with its "Unacceptable Risk" category - AI models/systems deemed so risky they're outright prohibited [2]. These systems are considered fundamentally incompatible with EU values and human rights protections.
This "prohibited" category includes AI systems designed for social scoring by governments, biometric identification systems used for real-time surveillance in public spaces (with limited exceptions for law enforcement), emotion recognition in workplaces or educational settings, and systems that exploit vulnerabilities of specific groups or use subliminal manipulation techniques.
Far from being theoretical concerns, I've observed several real-world examples of ethically dubious - and outright wrong - AI solutions in commercial test and development settings. In recent months, I encountered an early-stage pilot of a clearly "well-intentioned" AI solution that would land squarely in the EU's "unacceptable risk" category: a system that performed live facial recognition via security cameras and used AI to analyse customers' facial expressions as they entered the store, providing helpful alerts to staff about who might "need additional assistance."
While the developers framed this as a customer service enhancement, examples like this represent precisely the kind of technology the EU AI Act aims to restrict because of its potential for privacy violations and manipulation. Needless to say, the project was shut down by the AI oversight board, but the case highlights how easily ethical and legal boundaries are crossed and why robust AI governance structures are needed within the organisation. The regulations exist precisely to prevent the rules being interpreted the wrong way, especially by those who seek out the “grey areas” of ethics because that is where they have identified profitable markets.
For cybersecurity professionals, this establishes important boundaries. While offensive security tools often involve techniques that could potentially cross into prohibited territory - particularly those leveraging behavioural analysis, vulnerability exploitation, or manipulative social engineering - they must now be designed with these restrictions in mind. The prohibition on exploiting vulnerabilities of specific groups is particularly relevant for red teams and penetration testers who must ensure their AI-enhanced tools don't disproportionately target or exploit protected characteristics or vulnerable populations, even when simulating sophisticated threat actors who might do exactly that. This creates a unique challenge: cybersecurity tools must be sophisticated enough to counter AI-driven threats yet constrained by legal and ethical boundaries. Organizations developing next-generation security platforms must now incorporate these restrictions into their design philosophy from the ground up, rather than as compliance afterthoughts.
In financial services, the EU AI Act creates significant ambiguity around AI-driven credit decisioning. Where exactly is the boundary drawn between legitimate credit risk assessment - a core banking function - and prohibited "social scoring"? Consider an AI system that dynamically adjusts creditworthiness based on spending patterns, payment timing, and transaction locations. While these factors have long been part of traditional credit models, when an AI system makes real-time decisions incorporating behavioural data from multiple sources, it begins to resemble the kind of comprehensive behavioural scoring that the Act restricts. Financial institutions must now carefully examine whether their advanced credit AI systems remain within the "legitimate business purpose" exception or potentially cross into prohibited territory - particularly when these systems incorporate non-traditional data points or create feedback loops that might disproportionately impact certain customer segments.
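One way to make that examination systematic is to tag every model input by provenance and hold back deployment until behavioural or non-traditional features carry a documented justification. The sketch below is purely illustrative; the feature names and categories are hypothetical and not drawn from any regulatory text.

```python
# Hypothetical feature inventory for an AI credit-decisioning model.
# "traditional" inputs mirror long-established credit bureau data;
# "behavioural" inputs are the multi-source signals most likely to need
# closer legal review before deployment.
FEATURES = {
    "payment_history_24m":    {"category": "traditional", "justification": "standard bureau data"},
    "credit_utilisation":     {"category": "traditional", "justification": "standard bureau data"},
    "transaction_locations":  {"category": "behavioural", "justification": None},
    "spending_pattern_drift": {"category": "behavioural", "justification": None},
}


def review_required(features: dict) -> list[str]:
    """Return the behavioural features that still lack a documented justification."""
    return [
        name for name, meta in features.items()
        if meta["category"] == "behavioural" and not meta["justification"]
    ]


if __name__ == "__main__":
    flagged = review_required(FEATURES)
    if flagged:
        print("Hold deployment - compliance review needed for:", ", ".join(flagged))
```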
Fragmentation in the United States: A Sectoral Patchwork
Unlike the EU's comprehensive AI governance framework, the United States has pursued a fragmented, sector-specific regulatory strategy that lacks federal cohesion. This approach has created an increasingly complex compliance environment, particularly for organizations developing and deploying AI-powered cybersecurity solutions.
The regulatory landscape has become even more uncertain following the January 2025 rescission of the Biden administration's Executive Order 14110 ("Safe, Secure, and Trustworthy Artificial Intelligence"). That EO had laid out broad AI governance principles at the federal level but did not align with the objectives and priorities of the Trump administration, which prioritizes deregulation and market-led AI development. The reversal reflects a fundamental philosophical shift - a belief that regulation inherently stifles innovation and undermines U.S. technological leadership. While the new executive order mandates an Artificial Intelligence Action Plan to be developed in 2025, industry scepticism about meaningful regulatory guidance from this administration remains high. Some prominent AI researchers, including those from major tech companies, have expressed concern that regulatory uncertainty, rather than deregulation itself, may ultimately hinder U.S. AI advancement [3].
In the absence of a coherent federal strategy, states have begun establishing their own AI governance frameworks, creating a pattern reminiscent of how data privacy regulation evolved in the U.S. Just as CCPA and the NY SHIELD Act created de facto national privacy standards, emerging state AI regulations are establishing a patchwork of compliance requirements.
This suggests that states like California and New York are likely to lead the way toward a national AI roadmap. The legislation currently proposed and enacted in both states addresses AI safety assessments, public sector AI use, deepfake protections, and algorithmic discrimination.
It is also worth noting that these state-level initiatives vary significantly in scope and approach - from California's attempt at comprehensive AI model oversight to New York's targeted focus on government AI applications - so some form of regulatory cohesion between states is likely to emerge over time.
Finally, as more states develop their own frameworks, the compliance landscape may grow increasingly complex for companies with operations across multiple states - at least until federal regulations or national frameworks are developed to guide AI implementation and oversight, with some form of alignment between state legislation [4].
Even within the federal regulatory sphere, the lack of overarching AI guidance has resulted in inconsistent standards across industries. This sectoral fragmentation creates particular challenges for cybersecurity teams, as AI systems deemed compliant in one industry context may require substantial modification for deployment in another. Organizations operating across multiple sectors face the added burden of reconciling these inconsistent requirements.
Progress here could potentially be accelerated through organisations like NIST, which develops standards and guidance used by both government entities and the private sector, but in light of the current administration's mindset, this is yet to be firmed up.
The fragmented regulatory environment creates several operational challenges for security professionals deploying AI-powered tools - echoing the concerns of AI industry experts that some form of federal guidance, if not regulation, is needed to reduce the following (a minimal configuration sketch follows the list):
Strategic Uncertainty: Security architects must design systems flexible enough to adapt to rapidly evolving and unpredictable regulatory requirements
Compliance Overhead: Organizations must monitor and interpret regulations across multiple states and sectors, diverting resources from security improvements
Innovation Constraints: More stringent state regulations may create barriers to adopting advanced AI security capabilities in certain jurisdictions
Competitive Implications: Organizations operating primarily in less regulated states may gain security advantages through more agile AI deployment
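One practical response to these challenges is to treat jurisdictional requirements as data rather than hard-coding them into each tool, so the same deployment pipeline can be re-evaluated as state rules change. The sketch below shows the idea; the jurisdictions and control names are illustrative placeholders, not a statement of what any state actually requires.

```python
# Illustrative compliance matrix: jurisdiction -> controls the deployment
# pipeline must evidence before an AI security tool goes live there.
# The jurisdictions and control names are placeholders, not legal advice.
COMPLIANCE_MATRIX: dict[str, set[str]] = {
    "california": {"impact_assessment", "algorithmic_discrimination_review"},
    "new_york":   {"public_sector_use_disclosure"},
    "texas":      set(),   # assumed: no AI-specific controls yet
}


def missing_controls(jurisdiction: str, evidenced: set[str]) -> set[str]:
    """Return the controls still missing for a given deployment target."""
    return COMPLIANCE_MATRIX.get(jurisdiction, set()) - evidenced


# Example: a tool documented only with an impact assessment
print(missing_controls("california", {"impact_assessment"}))
# -> {'algorithmic_discrimination_review'}
```

Keeping the matrix as data means a new state law becomes an entry to update and a check to re-run, rather than a code change in every security tool.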
While the current trajectory points toward continued regulatory fragmentation, market pressure for consistency may eventually drive greater alignment - potentially even with international standards. In the meantime, organizations must develop adaptive compliance strategies that can navigate this complex landscape while maintaining effective security postures.
The fundamental question remains whether the U.S. approach of limited federal oversight will ultimately help or hinder AI adoption in cybersecurity, and the evolution of AI globally. While it may accelerate innovation in some contexts, it also creates risks of unregulated AI deployment - including the very issues of model bias, hallucination, and unethical AI applications that more comprehensive frameworks like the EU AI Act explicitly address.
China: AI as a Strategic Asset
China's regulatory framework for AI is deeply intertwined with its broader national security and geopolitical strategy. The country's “Generative AI Measures” [5], introduced in 2023, form a key part of its broader AI governance, reinforcing state priorities while fostering AI development. Unlike Western models, where AI governance is often framed in terms of ethics and risk mitigation, China's regulations emphasize state control, national security, and social stability.
The Chinese approach reflects a fundamentally different view of technology governance - one where AI is positioned as both an economic driver and a tool for social management. This perspective is evident in the 2021 "Ethical Norms for the New Generation Artificial Intelligence" (Ethical Norms) [6] and subsequent regulations, which balance technological advancement with political alignment. For cybersecurity professionals, this creates a distinct regulatory environment that differs markedly from Western frameworks. There are nonetheless similarities with the EU and US in areas such as data privacy: China's “Personal Information Protection Law” (PIPL) is a national data privacy law aimed at protecting personal information and addressing personal data leakage, and it has clear implications for automated decision-making technologies.
Some of the key regulatory aspects include mandatory algorithm registration, where companies developing AI models must register their algorithms with the Cyberspace Administration of China (CAC). This ensures that AI technologies align with state objectives and remain accessible for government oversight, which presents its own challenge to some western countries. Content security and model accountability are also critical components, as AI-driven cybersecurity tools must undergo rigorous content security assessments. Companies providing AI-based services are held accountable for the content their models generate and must take corrective actions if outputs violate state regulations. Additionally, AI security tools operating in China must comply with strict data localization laws, ensuring that sensitive data remains within national borders and is accessible for state review or investigations. Finally, China employs a tiered AI deployment and state influence approach, where AI applications in cybersecurity and other critical sectors receive state backing but are subject to heightened regulatory scrutiny, ensuring that they serve national security priorities.
The “Generative AI Measures” reflect China's dual mandate of promoting AI development while maintaining centralized oversight. These regulations require AI developers to not only prevent the generation of content that violates political, social, or moral guidelines but also to maintain transparency to ensure government oversight and to conduct security assessments for AI models that have public opinion attributes or social mobilization capabilities. This specifically impacts threat intelligence platforms and security monitoring tools that might analyse social media or public communications.
China's regulatory philosophy extends beyond individual laws to encompass its broader "New Generation Artificial Intelligence Development Plan," which aims to make China the global leader in AI by 2030. The emergence of DeepSeek as a rival to US GPT-class LLMs is one example of this strategy in action. This strategic initiative aligns AI development with national priorities through coordinated investment, talent development, and regulatory frameworks. For cybersecurity applications, this translates to preferential treatment for tools that enhance critical infrastructure protection, support state security objectives, and integrate with China's national cybersecurity strategy.
For multinational cybersecurity companies, the regulatory landscape creates significant operational challenges. Foreign firms must establish separate Chinese entities with localized data storage, undergo security reviews for cross-border data transfers, and potentially modify core algorithms to comply with registration requirements. These hurdles have led many international cybersecurity vendors to partner with Chinese firms rather than operate independently within the market.
While these regulations create challenges for foreign firms, they provide Chinese companies with a structured and predictable regulatory environment. Domestic AI leaders such as Baidu, Alibaba, and Tencent benefit from state guidance that shapes investment and research priorities, reinforcing China's AI leadership. In the cybersecurity domain specifically, companies like Qi An Xin, 360 Security, and Sangfor have flourished by developing AI-powered security solutions that align with both market demands and regulatory expectations.
Singapore: Pragmatic AI Governance within a Regulatory Framework
Singapore has established itself as a leader in AI governance through a sophisticated balance of innovation support and regulatory oversight. Unlike other global approaches, Singapore's model represents a distinctive third way that merits closer examination, particularly for its implications in the cybersecurity sector.
Singapore employs a pragmatic, business-friendly approach to AI regulation that differs significantly from frameworks like the EU AI Act. Rather than implementing standalone AI legislation, Singapore embeds AI governance within existing legal frameworks, creating a comprehensive but flexible regulatory environment. This distinctive approach allows the city-state to maintain regulatory oversight while fostering AI innovation.
At the core of Singapore's regulatory architecture lies the AI Governance Framework developed by the Infocomm Media Development Authority (IMDA) [7]. This framework establishes foundational principles for transparency, accountability, and risk management without imposing rigid compliance requirements. Instead, AI governance is effectively implemented through existing legislation like the Personal Data Protection Act, which regulates AI systems processing personal data, and the Cybersecurity Act of 2018, which mandates security standards for critical infrastructure including AI systems.
This regulatory foundation is strengthened by sector-specific guidelines that address unique concerns in high-risk domains. In financial services, the Monetary Authority of Singapore has introduced the FEAT principles and the Veritas framework, establishing clear standards for AI fairness and transparency in financial decision-making [8]. Similarly, the Ministry of Health's AI in Healthcare Guidelines ensure the safe deployment of medical AI applications, balancing innovation with patient safety [9].
What truly distinguishes Singapore's approach, however, is its emphasis on industry-led initiatives that complement formal regulation. Programs like AI Verify, a national testing and certification program, establish de facto standards for AI security applications without requiring legislative mandates. The AI Verify Foundation further enhances this ecosystem by developing open-source testing frameworks and ethical guidelines through collaborative industry participation. These initiatives are aligned with Singapore's National AI Strategy 2.0, which articulates a long-term vision for responsible AI integration across public and private sectors.
For cybersecurity applications specifically, Singapore's regulatory approach creates several distinct advantages. First, by leveraging existing regulatory frameworks rather than introducing entirely new compliance regimes, cybersecurity companies benefit from operational clarity and reduced regulatory uncertainty. Second, the absence of prescriptive AI-specific legislation allows for more adaptive cybersecurity solutions that can evolve with emerging threats. Third, certification programs like AI Verify create market differentiation opportunities for secure, ethical AI tools, helping companies build trust with customers. Finally, this balanced approach positions Singapore as an attractive hub for cybersecurity AI development, enhancing its international competitiveness.
Singapore's model represents a thoughtful middle path between the EU's comprehensive regulation and the US's fragmented approach. By focusing on sectoral compliance through existing legal structures, promoting voluntary best practices backed by certification programs, and maintaining high standards in critical domains while preserving flexibility elsewhere, Singapore has created an environment where AI-powered cybersecurity can thrive while maintaining public trust and ethical standards.
This nuanced approach demonstrates that effective AI governance need not come at the expense of innovation, although one should not be lulled into a false sense that Singapore offers a “laissez-faire” AI landscape: firm regulatory frameworks surround the seemingly voluntary AI governance framework. My reflection here is that, as other jurisdictions continue developing their AI regulatory frameworks, Singapore's model offers valuable lessons in achieving balance between oversight and growth, particularly for sensitive applications like cybersecurity where both innovation and trust are essential.
Japan's Human-Centric Approach to AI Governance
Japan has emerged as a distinctive voice in the global AI regulatory landscape, advocating for what it terms a "human-centric" approach to artificial intelligence. Unlike the comprehensive legislative frameworks emerging in the EU or the sector-specific regulations in the United States, Japan has deliberately chosen a path that emphasizes ethical principles and industry self-regulation while avoiding overly prescriptive legal mandates - i.e. a comprehensive and holistic approach. This reflects a consistent pattern in Japan's regulatory philosophy, echoing its response to the financial governance challenges that followed the Sarbanes-Oxley Act in the United States. Where the U.S. implemented detailed procedural financial reporting requirements through SOX, Japan developed Naibutosei (内部統制), an internal control framework that addresses the broader operational processes that ultimately contribute to and support financial reporting, rather than simply prescribing specific financial control processes. This same holistic, principles-based approach now distinguishes Japan's AI governance framework.
At the core of Japan's approach are the "Social Principles of Human-Centric AI," developed by the Cabinet Office's Council for Social Principles of Human-Centric AI. These principles establish an ethical foundation centred on human dignity, diversity and inclusion, and sustainability. Rather than creating binding legislation, Japan has focused on developing these principles as a framework that can guide both public and private sector AI development without stifling innovation through rigid compliance requirements.
This preference for flexible governance is further reflected in Japan's AI Strategy 2022 [10], which prioritizes "AI for humanity" while simultaneously positioning Japan as a global leader in AI innovation. The strategy emphasizes three core pillars: human resource development, industrial competitiveness, and a sustainable society enabled by AI. Notably, the strategy contains minimal references to restrictive regulations, instead focusing on enablement and responsible development.
Japan's regulatory approach relies heavily on industry self-regulation and co-regulation models. The Japan Business Federation (Keidanren) has developed its own AI ethics guidelines that align with the government's social principles but provide industry-specific implementations [11]. This collaborative approach between government and industry creates a dynamic regulatory environment where best practices can evolve alongside technological advancements without awaiting legislative changes.
Another distinctive feature of Japan's framework is its holistic, cross-sectoral approach to AI governance. Unlike the United States, where AI regulation tends to fragment along existing agency jurisdictions, Japan applies consistent principles across different industries while allowing for contextual adaptation. This approach reduces regulatory complexity for companies developing AI solutions that span multiple sectors - a particular advantage for cybersecurity applications that often need to operate across domain boundaries.
For cybersecurity specifically, Japan's model creates several beneficial conditions. AI-powered security tools operate under broader information security laws rather than AI-specific constraints, allowing for greater adaptability in responding to emerging threats. The Ministry of Economy, Trade and Industry (METI) has issued guidelines for AI security that emphasize risk management and transparency without mandating specific technical approaches. Meanwhile, critical infrastructure protection incorporates AI security considerations through Japan's cybersecurity strategy rather than through separate AI legislation.
The implications for international security cooperation are significant as well. Japan has actively engaged in international AI governance forums, including the Global Partnership on AI and OECD AI initiatives, advocating for interoperable standards that facilitate cross-border security collaboration. This approach aligns with Japan's broader diplomatic strategy of promoting "Data Free Flow with Trust" (DFFT), which seeks to balance data protection with the free flow of information necessary for effective global cybersecurity.
While Japan's approach shares some similarities with Singapore's pragmatic model, it places even greater emphasis on ethical principles and less on formal regulatory structures. This distinction reflects Japan's cultural preference for consensus-building and social harmony over strict legal enforcement. However, both approaches contrast sharply with China's highly interventionist AI regulations, which impose significant state oversight and control over AI applications, particularly those related to national security.
As Japan continues to refine its AI governance framework, it maintains a careful balance between encouraging innovation in critical areas like cybersecurity while ensuring ethical considerations remain central to AI development. This human-centric approach positions Japan as an important counterpoint in global AI governance discussions, demonstrating that effective oversight need not rely primarily on prescriptive regulation. For cybersecurity applications especially, this flexible, principles-based approach may prove particularly valuable in addressing rapidly evolving threats without regulatory constraints that could impede responsive innovation.
South Korea's Balanced Approach to AI Governance: Innovation with Oversight
South Korea has established a distinctive approach to AI governance through its recently enacted 'Act on the Development of AI and Establishment of Trust' (AI Basic Act), passed in December 2024 [12]. South Korea's approach reveals a nuanced framework that balances regulatory oversight with strong support for innovation - a model that differs significantly from both the EU's restrictive stance and Singapore's light(er)-touch approach.
The AI Basic Act consolidates 19 different regulatory proposals into a cohesive framework that prioritizes national competitiveness alongside responsible AI development. Unlike the EU AI Act, which focuses primarily on risk mitigation through extensive pre-market obligations, South Korea's law integrates regulatory governance with industrial growth strategy and emphasizes post-market oversight. This represents a fundamental difference in philosophy: where the EU sees regulation primarily as a means to control risks, South Korea views it as one component of a broader strategy to promote AI advancement.
A defining characteristic of South Korea's approach is its scope and application. The AI Basic Act applies to developers and entities offering AI products and services, but unlike the EU's framework, it does not extend to users of AI systems. This distinction significantly reduces the regulatory burden across the AI ecosystem and reflects a targeted approach to oversight. The law also avoids the controversial aspects of regulating general-purpose AI systems that have caused considerable debate in other jurisdictions.
South Korea's framework introduces the concept of "high-impact AI" - systems that may affect human life, physical safety, and fundamental rights in specific sectors such as energy, healthcare, transportation, and education - not unlike the human-centric considerations that underpin Japan's model. In contrast to the EU's mandatory conformity assessments for high-risk systems, the Korean law states that providers of high-impact AI should "endeavour to obtain inspection and certification in advance." This creates a more flexible framework that encourages rather than mandates specific certification processes.
The institutional architecture supporting the AI Basic Act further demonstrates South Korea's balanced approach. The law establishes several new bodies which are tasked not only with regulatory oversight but also with developing R&D strategies, investment frameworks, and international cooperation initiatives. The National AI Committee, which is one of the oversight bodies, explicitly includes competitiveness enhancement among its core responsibilities, highlighting how economic considerations are integrated directly into the governance structure.
For cybersecurity applications specifically, this regulatory environment creates opportunities rather than constraints. AI-powered security tools are subject to the general provisions of the AI Basic Act, but the emphasis on post-market oversight allows for greater flexibility in development and deployment compared to more prescriptive regulatory regimes. The law encourages transparency and risk management without imposing rigid technical requirements that might impede innovation in this rapidly evolving domain.
Perhaps most tellingly, the AI Basic Act mandates a regular review of its provisions and continuous benchmarking against international standards. This built-in adaptability reflects South Korea's recognition of the rapidly evolving nature of AI technology and governance practices. Unlike more rigid regulatory frameworks, the Korean approach allows for ongoing refinement as the technology matures and its implications become clearer.
South Korea has created a governance model that addresses legitimate concerns about AI risks while maintaining the flexibility needed for continued technological advancement. For the global AI governance landscape, South Korea's approach offers an important middle path - one that recognizes the need for oversight without presuming that extensive pre-market regulation is the only way to ensure responsible AI development.
A Global Tapestry of Approaches
The journey through AI security regulations across Asia presents unique challenges for global cybersecurity teams. Unlike the EU's uniform regulations or the US's fragmented state-level approach, Asia's regulatory landscape requires tailored compliance strategies for each jurisdiction.
What are some of the key take-aways from this?
Data Sovereignty Conflicts: AI cybersecurity solutions that comply with Singaporean or Japanese standards may require fundamental redesigns for deployment in mainland China due to strict data localization laws and algorithm registration requirements.
Ambiguous Chinese Regulations: Unlike the EU's clearly defined prohibited AI categories, China's regulatory framework includes deliberately vague provisions that allow authorities flexibility in enforcement, creating uncertainty for foreign firms.
Diverging Security Architectures: Security AI systems may need separate regional models to comply with local regulations, increasing operational complexity and technical debt while potentially fragmenting threat intelligence capabilities (a minimal routing sketch follows this list).
Compliance vs. Security Trade-Offs: Strict compliance in South Korea or China may limit the deployment of certain AI-powered cybersecurity tools, potentially leaving gaps in regional security postures compared to operations in less restrictive markets.
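To illustrate that architectural cost, the sketch below routes security telemetry to a region-specific model endpoint and falls back to in-region processing wherever localization rules block cross-border transfer. The region names, endpoints, and policy flags are assumptions for illustration only, not a description of any jurisdiction's actual requirements.

```python
# Hypothetical routing policy: which model endpoint serves each region and
# whether security telemetry from that region may leave its borders.
REGION_POLICY = {
    "eu":        {"endpoint": "https://models.example-eu.internal", "export_allowed": False},
    "us":        {"endpoint": "https://models.example-us.internal", "export_allowed": True},
    "china":     {"endpoint": "https://models.example-cn.internal", "export_allowed": False},
    "singapore": {"endpoint": "https://models.example-sg.internal", "export_allowed": True},
}


def route_telemetry(source_region: str, preferred_region: str) -> str:
    """Pick a model endpoint, falling back to in-region processing where
    the source region's localization rules block cross-border transfer."""
    policy = REGION_POLICY[source_region]
    if preferred_region != source_region and not policy["export_allowed"]:
        return policy["endpoint"]
    return REGION_POLICY[preferred_region]["endpoint"]


print(route_telemetry("china", "us"))  # stays on the in-region endpoint
print(route_telemetry("us", "eu"))     # cross-border routing permitted by source policy
```

Every forced in-region fallback is another model to train, evaluate, and keep in sync with the global threat picture - exactly the fragmentation cost described above.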
The global AI regulatory landscape presents a complex mosaic of approaches reflecting diverse national priorities and governance philosophies. From the EU's comprehensive risk-based framework to the US's fragmented sectoral model, from China's state-directed control to the varied strategies across Singapore, Japan, and South Korea, organizations face a challenging compliance environment that varies dramatically by region.
These regulatory differences aren't merely administrative hurdles - they fundamentally shape how AI-powered cybersecurity tools can be developed, deployed, and operated across borders. The tension between innovation and control, between security imperatives and compliance requirements, creates profound strategic considerations for organizations operating globally. Practically, for multinational companies this means careful consideration when developing, using, or hosting AI solutions in different countries or regions. Over time these frameworks will likely converge, but for the immediate future, technology strategies must be devised that factor in these regulatory differences.
In the next article, we'll examine some of the practical implications of these diverse regulatory frameworks for cybersecurity teams. How do these varying approaches affect security architectures and risk management? What trade-offs must security leaders make when balancing compliance with effective threat detection? And perhaps most importantly, how can organizations transform compliance challenges into strategic advantages?
I welcome your thoughts on how these emerging regulatory frameworks are affecting your organization's approach to AI in cybersecurity. What challenges are you facing, and which regulatory model seems most conducive to effective security operations?
References
[1] EU AI Act, European Commission - https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
[2] https://artificialintelligenceact.eu/high-level-summary/
[3] https://www.businessinsider.com/yann-lecun-meta-trump-academia-witch-hunt-musk-ai-2025
[4] https://www.multistate.ai/updates/vol-52
[6] Translated document by Georgetown University CSET - https://cset.georgetown.edu/wp-content/uploads/t0400_AI_ethical_norms_EN.pdf
[7] https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf
[8] https://www.mas.gov.sg/~/media/MAS/News%20and%20Publications/Monographs%20and%20Information%20Papers/FEAT%20Principles%20Final.pdf
[9] https://isomer-user-content.by.gov.sg/3/9c0db09d-104c-48af-87c9-17e01695c67c/1-0-artificial-in-healthcare-guidelines-(aihgle)_publishedoct21.pdf
[10] https://www8.cao.go.jp/cstp/ai/aistrategy2022_honbun.pdf - in English: https://www8.cao.go.jp/cstp/ai/aistratagy2022en.pdf
[11] https://www.keidanren.or.jp/en/policy/2023/041.html
[12] https://ecipe.org/blog/koreas-new-ai-law-not-brussels-progeny