Securing Artificial Intelligence in the Quantum Computing Era
Artificial intelligence (AI) systems are rapidly transforming mission-critical applications in healthcare, finance, defense, and infrastructure. Most of these applications rely on classical cryptography, the fundamental building block of communication security. However, with the advancement of quantum computing, these classical-cryptography-based systems face severe risk from a range of attacks. In this blog post, we examine the intersection of AI security and post-quantum cryptography, highlighting how post-quantum cryptographic techniques can protect AI systems against emerging quantum-enabled threats [1].
Shor’s algorithm [2] can efficiently factor large integers, a problem whose hardness underpins many contemporary cryptographic algorithms, threatening to break widely adopted public-key systems such as RSA, ECC, and DH. Grover’s algorithm [3] reduces the complexity of brute-force attacks on symmetric cryptographic schemes [4]. Together, these results give clear evidence of the quantum threat to the confidentiality, integrity, and authentication of the data and models that AI systems depend on. In this light, Post-Quantum Cryptography (PQC) has emerged as a crucial area of research, designing cryptographic algorithms that remain secure against quantum computing threats.
This quantum threat directly affects current AI systems, which depend heavily on classical cryptography to:
- Safeguard the integrity of AI models and intellectual property
- Maintain secure communication pathways between distributed components (e.g., in federated learning)
- Preserve the confidentiality of both training datasets and inference outputs
- Verify the legitimacy of users and devices interacting with AI systems
If these security mechanisms are compromised, AI systems become direct targets for data breaches, model theft, manipulation of outputs, and systemic failures in critical infrastructure. It is therefore essential to transition to modern, post-quantum-safe cryptographic techniques to counter quantum adversaries.
Role of Contemporary Cryptographic Algorithms in Securing AI
Contemporary cryptographic systems are the backbone of secure AI workflows, supporting essential functions such as data confidentiality, secure communication, digital signature verification, and system integrity. Widely used algorithms such as RSA, Elliptic Curve Cryptography (ECC), and Diffie-Hellman (DH) protect sensitive model assets and communications. Symmetric-key schemes such as AES are used to secure training data, model parameters, and inference outputs in distributed or cloud environments. Decentralised machine learning approaches such as federated learning also depend heavily on cryptographic protocols for the privacy and authenticity of model updates. Because classical cryptography rests on hard mathematical problems, such as integer factorization and discrete logarithms, it is vulnerable to quantum algorithms, putting modern AI systems at significant risk as quantum computing capabilities grow.
The Quantum Threat to Contemporary Cryptographic Systems
Quantum computers leverage qubits that can exist in superposition and entangled states, enabling them to perform certain computations far more efficiently than classical machines. This capability poses a fundamental threat to modern cryptography through two well-known quantum algorithms. Shor’s Algorithm, introduced in 1994, can factor large integers and solve discrete logarithm problems in polynomial time, effectively breaking widely deployed public-key schemes such as RSA, Diffie-Hellman, and elliptic-curve cryptography. Grover’s Algorithm, proposed in 1996, offers a quadratic speed-up for brute-force search, reducing the effective security of symmetric-key algorithms like AES and cryptographic hash functions such as SHA-2. As quantum hardware continues to advance from theory toward practical realization, the cryptographic foundations that secure today’s AI systems face increasing risk, highlighting the urgency of transitioning to quantum-resistant solutions.
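The impact of Grover's quadratic speed-up on symmetric cryptography is easy to quantify: searching an n-bit keyspace takes roughly 2^(n/2) quantum steps, halving the effective bit security of an ideal cipher. A minimal sketch of this arithmetic (the function name is ours, for illustration):

```python
def grover_effective_bits(key_bits: int) -> int:
    """Effective security (in bits) of an ideal symmetric cipher under
    Grover's search: a 2^n keyspace is searched in ~2^(n/2) steps."""
    return key_bits // 2

# Shor's algorithm, by contrast, breaks RSA/ECC outright, so there is
# no analogous "reduced" security level for those public-key schemes.
for name, bits in [("AES-128", 128), ("AES-192", 192), ("AES-256", 256)]:
    print(f"{name}: classical {bits}-bit -> ~{grover_effective_bits(bits)}-bit quantum")
```

This is why guidance for the quantum era typically recommends AES-256, which retains roughly 128-bit effective security even under Grover's attack.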
Post-Quantum Cryptography (PQC) Families
Post-quantum cryptographic algorithms are designed to resist adversaries with quantum computing capabilities as well as conventional classical attackers. The major PQC families are as follows:
- Lattice-Based Cryptography: Constructs security on the worst-case hardness of lattice problems such as Learning With Errors (LWE), Ring-LWE, and Module-LWE, providing strong reductions from average-case to worst-case instances [5]. These schemes support efficient key exchange, encryption, and digital signatures, and currently form the basis of most standardized post-quantum algorithms.
- Code-Based Cryptography: Derives security from the computational intractability of decoding general linear error-correcting codes, a problem believed to remain hard even for quantum adversaries [6]. Code-based schemes are well-studied and offer long-term security confidence, but typically incur large public key sizes.
- Multivariate Polynomial Cryptography: Bases security on the NP-hard problem of solving systems of multivariate quadratic equations over finite fields. These schemes enable efficient signature generation and verification but require careful parameter selection due to past structural cryptanalysis [7].
- Hash-Based Signatures: Achieve security solely from the collision and preimage resistance of cryptographic hash functions. Hash-based schemes provide provable security and minimal assumptions, making them highly robust, though often at the cost of larger signatures or state management requirements.
- Isogeny-Based Cryptography: Utilizes the presumed hardness of computing isogenies between supersingular elliptic curves [8]. These schemes offer comparatively small key sizes and strong mathematical foundations but are computationally expensive and have faced recent cryptanalytic challenges.
Modern Quantum Threats to AI System Security
As quantum computing capabilities advance, AI systems face security risks that extend well beyond traditional concerns of data confidentiality. Many core AI workflows rely on cryptographic protections for integrity, authenticity, and privacy, all of which may be weakened by quantum-enabled attacks. In this context, several AI-specific threat vectors become particularly critical:
- Model Extraction and Integrity Violations: The compromise of public-key cryptography could enable adversaries to steal proprietary machine learning models or inject malicious modifications by bypassing digital signature and integrity verification mechanisms.
- Quantum-Accelerated Adversarial Machine Learning: Quantum-enhanced optimization and search techniques may increase the efficiency of generating adversarial examples, model inversion attacks, or surrogate models, thereby lowering the cost and time required to mount effective attacks against AI systems.
- Compromise of Secure Inference and Federated Learning: Privacy-preserving AI paradigms such as secure multiparty computation, homomorphic encryption, and federated learning depend on strong cryptographic assumptions. These guarantees may degrade under quantum attacks, exposing sensitive model updates or inference inputs.
- Data Privacy and Pipeline Manipulation: AI pipelines involve large-scale data exchange across distributed components. If encryption and authentication mechanisms are broken, adversaries may intercept, manipulate, or replay data, undermining both privacy and model correctness.
Attack Surfaces in AI Systems in the Presence of Quantum Computing
The advent of quantum computing is expected not only to weaken conventional cryptographic primitives but also to significantly enhance the effectiveness of existing attack strategies targeting AI systems. By reducing the computational cost of search, optimization, and cryptanalysis, quantum-enabled adversaries can amplify several well-known AI attack vectors:
- Quantum-Enhanced Adversarial Machine Learning: Quantum-assisted optimization and search techniques may accelerate the generation of adversarial inputs and the training of surrogate models, enabling faster and more efficient evasion or manipulation of AI decision-making processes.
- Data Inference and Privacy Leakage Attacks: Once cryptographic protections are compromised, attackers can exploit AI model outputs to infer sensitive information about training data, posing severe privacy risks [9] in data-intensive domains such as healthcare, finance, smart grid [10] and critical infrastructure.
- Model Inversion and Extraction Attacks: The exposure of encrypted communication channels allows adversaries to issue carefully crafted queries to deployed models, facilitating the reconstruction of internal model parameters, architectures, or proprietary decision logic [11].
- Backdoor and Poisoning Attacks in Collaborative AI: In distributed learning environments, including federated and edge-based AI systems, the breakdown of secure communication and update authentication can enable attackers to inject poisoned model updates or embed hidden backdoors into collaboratively trained models [12].
NIST has already initiated the standardisation of post-quantum cryptographic algorithms [13], while many governments and global organizations are taking steps to adopt PQC. Mainstream applications such as Chrome [14] and Signal [15] have already deployed hybrid post-quantum protocols to avoid “store now, decrypt later” attacks, where adversaries collect encrypted data today with the goal of decrypting it once sufficiently powerful quantum computers become available [16]. Because existing conventional cryptographic algorithms are being deprecated and will eventually be disallowed as part of the transition to PQC, it is crucial to introduce PQC replacements. In domains such as autonomous transportation and national defense, where data confidentiality and model integrity must be preserved for years or decades, the delayed impact of quantum decryption represents a critical vulnerability. As a result, transitioning to quantum-resistant cryptography is not merely a future consideration but an immediate requirement for protecting AI systems.
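The hybrid protocols mentioned above combine a classical and a post-quantum shared secret so that the session key stays safe as long as either component remains unbroken. A minimal sketch of the concatenate-then-KDF combiner idea, using an HKDF-style construction; the placeholder secrets are our assumption, whereas real deployments derive them from, e.g., an X25519 exchange plus an ML-KEM encapsulation:

```python
import hashlib
import hmac
import secrets

def hybrid_kdf(classical_ss: bytes, pq_ss: bytes, context: bytes) -> bytes:
    """Combine two shared secrets: the output is secure as long as at
    least one input secret is unknown to the attacker."""
    ikm = classical_ss + pq_ss
    # HKDF-Extract with a fixed illustrative salt.
    prk = hmac.new(b"hybrid-demo-salt", ikm, hashlib.sha256).digest()
    # HKDF-Expand, single block: T(1) = HMAC(prk, info || 0x01).
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()

# Placeholders standing in for real key-exchange outputs.
ecdh_secret = secrets.token_bytes(32)    # e.g., from X25519
mlkem_secret = secrets.token_bytes(32)   # e.g., from ML-KEM-768
session_key = hybrid_kdf(ecdh_secret, mlkem_secret, b"ai-model-sync-v1")
print(len(session_key))  # 32
```

The design choice matters: concatenating both secrets before the KDF means a quantum break of the classical component (or an unforeseen flaw in the PQC component) still leaves the derived key unpredictable.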
Several real-world deployments illustrate the tangible risks of unprepared AI infrastructure in a post-quantum landscape. In smart healthcare environments, quantum-enabled attacks could expose patient data, compromise diagnostic models, or manipulate telemedicine workflows. Autonomous vehicles depend heavily on AI-driven perception and decision-making; weakened authentication or compromised encryption of sensor data could allow attackers to spoof inputs or interfere with control systems. Similarly, AI systems used in national security contexts, such as surveillance, threat analysis, and strategic planning, present high-value targets where intercepted or altered information could have severe consequences. These emerging scenarios highlight the urgent need to integrate post-quantum security measures into AI ecosystems before quantum threats become operational realities.
Implementation Challenges in Deploying PQC for AI Systems
Despite its strong security guarantees, the integration of post-quantum cryptography into AI ecosystems presents several practical and engineering challenges. AI systems often operate under strict performance, latency, and resource constraints, and the introduction of PQC can affect system efficiency and interoperability. Moreover, many AI deployments rely on mature cryptographic stacks and hardware platforms that were not designed with post-quantum algorithms in mind, making seamless adoption non-trivial. Addressing these challenges requires careful system-level design and, in many cases, transitional deployment strategies.
- Computational and Performance Overhead: Many PQC schemes require larger key sizes, signatures, and ciphertexts, which can increase computation time, communication overhead, and memory usage. These factors are particularly impactful for latency-sensitive AI workloads and real-time inference.
- Interoperability with Existing Systems: Current AI frameworks and security infrastructures are largely built around classical cryptography. Integrating PQC often necessitates hybrid cryptographic approaches to maintain backward compatibility during the transition period.
- Resource Limitations on Edge and Embedded Devices: AI applications deployed on resource-constrained hardware (IoT sensors, mobile devices, and embedded controllers) can face difficulties supporting the computational and energy demands of certain post-quantum algorithms.
- Immature Tooling and Standardized Interfaces: The lack of mature, standardized APIs and middleware for PQC complicates integration with widely used AI frameworks such as TensorFlow and PyTorch. Abstracting cryptographic complexity while preserving security and performance remains an open engineering challenge.
As quantum computing moves closer to practical deployment, it introduces fundamental security challenges for modern AI systems. The cryptographic mechanisms that currently protect AI data, models, and communications are increasingly at risk from quantum-enabled attacks, threatening long-term confidentiality, integrity, and trust. Post-quantum cryptography therefore emerges not as a theoretical safeguard, but as a necessary foundation for future-proof AI security. It is thus crucial to understand the importance of integrating PQC into AI workflows and how quantum-resistant primitives can be effectively deployed to protect sensitive data and critical AI assets.
At the same time, transitioning to quantum-safe AI systems presents non-trivial technical and organizational challenges, including performance overheads, deployment complexity, and evolving standardization efforts. Addressing these challenges requires coordinated advances in lightweight cryptographic design, hybrid migration strategies, system-aware architectures, and rigorous threat modeling, supported by strong collaboration across cryptography, AI engineering, and policy communities. Looking forward, sustained research, standardization, and education will be essential to ensure a smooth and secure transition. Ultimately, proactive adoption of PQC will be crucial in enabling resilient, trustworthy, and privacy-preserving AI systems capable of addressing the security demands of the quantum era.
Edited By: Windhya Rankothge, PhD, Canadian Institute for Cybersecurity
References
[1] S. Darzi and A. A. Yavuz, “Pqc meets ml or ai: Exploring the synergy of machine learning and post-quantum cryptography,” Authorea Preprints, 2024. (https://salehdarzi.com/PQCMeetsML.pdf)
[2] P. W. Shor, “Algorithms for quantum computation: discrete logarithms and factoring,” in Proceedings of the 35th Annual Symposium on Foundations of Computer Science. IEEE, 1994, pp. 124–134. (https://ieeexplore.ieee.org/document/365700)
[3] L. K. Grover, “A fast quantum mechanical algorithm for database search,” in Proceedings of the twenty-eighth annual ACM symposium on Theory of computing, 1996, pp. 212–219. (https://dl.acm.org/doi/pdf/10.1145/237814.237866)
[4] A. A. H. Elnour, “Integrating post quantum cryptography (pqc) for end-to-end security in edge and iot environments,” IEEE Transactions on Consumer Electronics, pp. 1–1, 2026. (https://ieeexplore.ieee.org/document/11348939)
[5] C. Peikert, “A decade of lattice cryptography,” Foundations and Trends in Theoretical Computer Science, vol. 10, no. 4, pp. 283–424, 2016. (https://eprint.iacr.org/2015/939.pdf)
[6] R. Overbeck and N. Sendrier, “Code-based cryptography,” in Post-quantum cryptography. Springer, 2009, pp. 95–145. (https://link.springer.com/chapter/10.1007/978-3-540-88702-7_4)
[7] J. Ding and D. Schmidt, “Rainbow, a new multivariable polynomial signature scheme,” in International conference on applied cryptography and network security. Springer, 2005, pp. 164–175. (https://link.springer.com/chapter/10.1007/11496137_12)
[8] L. De Feo, “Mathematics of isogeny based cryptography,” arXiv preprint arXiv:1711.04062, 2017. (https://arxiv.org/abs/1711.04062)
[9] R. Shokri, M. Stronati, C. Song, and V. Shmatikov, “Membership inference attacks against machine learning models,” in 2017 IEEE symposium on security and privacy (SP). IEEE, 2017, pp. 3–18. (https://ieeexplore.ieee.org/document/7958568)
[10] H. M. S. Badar, S. Ahmed, N. I. Kajla, G. Fan, and C. Zhang, “Q-BLAISE: Quantum-resilient blockchain and AI-enhanced security protocol for smart grid IoT,” IEEE Transactions on Consumer Electronics, vol. 71, no. 2, pp. 4959–4971, 2025. (https://ieeexplore.ieee.org/document/11006113)
[11] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, “Stealing machine learning models via prediction APIs,” in 25th USENIX Security Symposium (USENIX Security 16), 2016, pp. 601–618. (https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/tramer)
[12] E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov, “How to backdoor federated learning,” in International Conference on Artificial Intelligence and Statistics. PMLR, 2020, pp. 2938–2948. (https://proceedings.mlr.press/v108/bagdasaryan20a.html)
[13] D. Moody, R. Perlner, A. Regenscheid, A. Robinson, and D. Cooper, “Transition to post-quantum cryptography standards,” National Institute of Standards and Technology, Tech. Rep., 2024. (https://nvlpubs.nist.gov/nistpubs/ir/2024/NIST.IR.8547.pdf)
[14] D. O’Brien, “Protecting Chrome traffic with hybrid Kyber KEM,” Chromium Blog, Aug. 2023, accessed: Aug. 17, 2025. (https://blog.chromium.org/2023/08/protecting-chrome-traffic-with-hybrid.html)
[15] K. Bhargavan, C. Jacomme, F. Kiefer, and R. Schmidt, “Formal verification of the PQXDH post-quantum key agreement protocol for end-to-end secure messaging,” in 33rd USENIX Security Symposium (USENIX Security 24), 2024, pp. 469–486. (https://www.usenix.org/conference/usenixsecurity24/presentation/bhargavan)
[16] G. Alagic, F. Bajaj, and A. Kocoglu, “The best of both kems: Securely combining kems in post-quantum hybrid schemes,” Cryptology ePrint Archive, 2025. (https://eprint.iacr.org/2025/010)