News

GUEST ESSAY: The AI illusion: Don’t be fooled, innovation without guardrails is just risk–at scale

  • None--securityboulevard.com
  • published date: 2025-06-16 00:00:00 UTC

None

<div class="single-post post-35588 post type-post status-publish format-standard has-post-thumbnail hentry category-guest-blog-post category-top-stories" id="post-featured" morss_own_score="4.635294117647059" morss_score="9.442067026013593"> <h1>GUEST ESSAY: The AI illusion: Don’t be fooled, innovation without guardrails is just risk–at scale</h1> <div class="entry" morss_own_score="4.613545816733067" morss_score="70.8473120504993"> <img decoding="async" src="https://www.lastwatchdog.com/wp/wp-content/uploads/250613_AWS-privacy-beast-brsh-960x611.jpg"> <h5>By Naómi L Oosthuizen</h5> <p>Artificial intelligence is changing everything – from how we search for answers to how we decide who gets hired, flagged, diagnosed, or denied.</p> <p><em><strong>Related: </strong><a href="https://www.staysafeonline.org/articles/does-ai-take-your-data-ai-and-data-privacy">Does AI take your data?</a></em></p><div class="code-block code-block-12 ai-track" data-ai="WzEyLCIiLCJCbG9jayAxMiIsIiIsMV0=" style="margin: 8px 0; clear: both;"> <style> .ai-rotate {position: relative;} .ai-rotate-hidden {visibility: hidden;} .ai-rotate-hidden-2 {position: absolute; top: 0; left: 0; width: 100%; height: 100%;} .ai-list-data, .ai-ip-data, .ai-filter-check, .ai-fallback, .ai-list-block, .ai-list-block-ip, .ai-list-block-filter {visibility: hidden; position: absolute; width: 50%; height: 1px; top: -1000px; z-index: -9999; margin: 0px!important;} .ai-list-data, .ai-ip-data, .ai-filter-check, .ai-fallback {min-width: 1px;} </style> <div class="ai-rotate ai-unprocessed ai-timed-rotation ai-12-1" data-info="WyIxMi0xIiwyXQ==" style="position: relative;"> <div class="ai-rotate-option" style="visibility: hidden;" data-index="1" data-name="VGVjaHN0cm9uZyBHYW5nIFlvdXR1YmU=" data-time="MTA="> <div class="custom-ad"> <div style="margin: auto; text-align: center;"><a href="https://youtu.be/Fojn5NFwaw8" target="_blank"><img src="https://securityboulevard.com/wp-content/uploads/2024/12/Techstrong-Gang-Youtube-PodcastV2-770.png" alt="Techstrong Gang Youtube"></a></div> <div class="clear-custom-ad"></div> </div></div> <div class="ai-rotate-option" style="visibility: hidden; position: absolute; top: 0; left: 0; width: 100%; height: 100%;" data-index="1" data-name="QVdTIEh1Yg==" data-time="MTA="> <div class="custom-ad"> <div style="margin: auto; text-align: center;"><a href="https://devops.com/builder-community-hub/?ref=in-article-ad-1&amp;utm_source=do&amp;utm_medium=referral&amp;utm_campaign=in-article-ad-1" target="_blank"><img src="https://devops.com/wp-content/uploads/2024/10/Gradient-1.png" alt="AWS Hub"></a></div> <div class="clear-custom-ad"></div> </div></div> </div> </div> <p><a href="https://www.lastwatchdog.com/wp/wp-content/uploads/Google-Octopus-squr.jpg"><img decoding="async" src="https://www.lastwatchdog.com/wp/wp-content/uploads/Google-Octopus-squr-100x100.jpg"></a>It offers speed and precision at unprecedented scale. But without intention, progress often leaves behind a trail of invisible harm. We are moving fast. Too fast. And in our excitement, we’ve stopped asking the most important question of all: at what cost?</p> <p>AI’s influence is already everywhere, even when we don’t see it. It hides behind dashboards, recommendation engines, productivity scores, and predictive analytics. It tells us what’s trending, what’s risky, and what to do next. But just because it’s quiet doesn’t mean it’s safe. These tools are shaping human lives in deeply personal ways, and too often, they’re doing it invisibly and without accountability.</p><div class="code-block code-block-15" style="margin: 8px 0; clear: both;"> <script async src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js?client=ca-pub-2091799172090865" crossorigin="anonymous" type="f60fcfd1ab2682eb10389815-text/javascript"></script> <!-- SB In Article Ad 1 --> <ins class="adsbygoogle" style="display:block" data-ad-client="ca-pub-2091799172090865" data-ad-slot="8723094367" data-ad-format="auto" data-full-width-responsive="true"></ins> <script type="f60fcfd1ab2682eb10389815-text/javascript"> (adsbygoogle = window.adsbygoogle || []).push({}); </script></div> <p><strong>Is speed necessarily always good?</strong></p> <p>We’ve convinced ourselves that because AI seems to work so well, it must be safe. That speed is inherently good.That precision means wisdom. But that’s the illusion. AI doesn’t actually understand anything. It doesn’t think. It doesn’t care. It predicts patterns because we trained it to, and then it repeats those patterns – without context, without ethics, and without pausing to ask, “Is this right?”</p> <p>Professor Guillaume Thierry put it bluntly when he said that AI doesn’t “know” anything. Yet we continue to treat these systems like they’re colleagues we can trust with real decisions – decisions we might ordinarily hesitate to give a junior team member.</p> <p>And that’s how risk becomes institutionalized – not because someone made a dramatic mistake, but because no one stopped to question the subtle drift.</p> <p>Even the architects of AI are raising their hands and saying, “Slow down.” Demis Hassabis, Geoffrey Hinton, Yann LeCun, and Jürgen Schmidhuber have all contributed groundbreaking work in this space. But many of them are now urging us to think more deeply about the moral frameworks guiding this technology. Hinton, often called the “godfather of AI,” has expressed concern that we are building systems whose inner workings we can no longer fully explain. LeCun is calling for safeguards that go beyond technical brilliance. Even they know that power without ethics can turn on us.</p> <p><strong>Purposeful performance</strong></p> <p>Nigel Toon, CEO of Graphcore, summed it up in a way that really stuck with me: “Performance must serve purpose.” If we’re not designing AI to align with human values, then it doesn’t matter how efficient it is. It will scale harm just as quickly as it scales help.</p> <p>We’ve already seen this play out. Amazon once tested an AI recruiting tool that learned – based on biased historical data – that male candidates were more “preferable” than women. The tool wasn’t malicious, but it absorbed past inequities in historical data and amplified them. It was scrapped, but not before teaching us a critical lesson: when you train a machine on inequality, it automates injustice.</p> <div><a href="https://www.lastwatchdog.com/wp/wp-content/uploads/Naomi-Oosthuizen-hdsht.jpg"><img decoding="async" src="https://www.lastwatchdog.com/wp/wp-content/uploads/Naomi-Oosthuizen-hdsht-100x112.jpg"></a> Oosthuizen</div> <p>And this is the real problem with AI – it doesn’t just act. It scales. What would have been a poor judgment call by a single person becomes a system-wide bias once you embed it into an algorithm and apply it globally. Bias replicates itself. Mistakes become policy. The tools we built to optimize start to quietly oppress.</p> <p>To make things worse, AI systems aren’t static. They learn. They adapt. They drift. What they were yesterday is not what they are today. And yet, most of the systems we’ve designed to monitor risk—audits, firewalls, quarterly controls—were built for static environments. We’re actively trying to govern a living system with tools that belong to a dead era.</p> <p><strong>Explainable designs</strong></p> <p>So what do we need instead?</p> <p>We need to embed explainability into the design, not add it as an afterthought. We need oversight that’s ongoing, not occasional. But above all, we need wisdom. Not cleverness. Not speed. Wisdom. The kind that asks hard questions even when there’s pressure to deliver answers fast. The kind that resists convenience in favor of integrity.</p> <p>Because innovation without responsibility is not progress. It’s recklessness.</p> <p>So yes, let’s continue to build. But let’s build with our eyes open. Let’s not confuse what AI can do with what it should do. Let’s lead, not react. And let’s be the generation that didn’t just keep pace with technology but had the courage to set the moral pace for how it’s used.</p> <p>The time to lead with foresight is now.</p> <p>And that leadership begins not with code, but with conscience.</p> <p morss_own_score="4.285714285714286" morss_score="7.9523809523809526"><strong><em>About the essayist:</em></strong><em> <a href="https://www.linkedin.com/in/na%C3%B3mi-oosthuizen-nl/">Naómi L Oosthuizen</a> is Senior Global IT Area Lead at ING. She’s an expert in AI, NLG and risk and compliance.</em></p> <p><em><strong><a href="https://www.lastwatchdog.com/wp/wp-content/uploads/Digital-trust-tree-brsh-3.jpg"><img decoding="async" src="https://www.lastwatchdog.com/wp/wp-content/uploads/Digital-trust-tree-brsh-3-520x247.jpg"></a>References:</strong></em></p> <p>•Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. <a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G">https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G</a></p> <p>•Hassabis, D. (2023, July 15). The promise and peril of artificial general intelligence. Financial Times. <a href="https://www.ft.com/content/11a19bbf-94d8-44b5-8c64-ec2eb8e7f3a3">https://www.ft.com/content/11a19bbf-94d8-44b5-8c64-ec2eb8e7f3a3</a></p> <p>•LeCun, Y. (2023). Path towards machine intelligence [Conference presentation]. OpenReview. <a href="https://openreview.net/pdf?id=BZ5a1r-kVsf">https://openreview.net/pdf?id=BZ5a1r-kVsf</a></p> <p>•Metz, C. (2023, May 1). ‘Godfather of A.I.’ leaves Google and warns of danger ahead. The New York Times. <a href="https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-hinton.html">https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-hinton.html</a></p> <p>•MIT Sloan Management Review. (2023). A framework for assessing AI risk. <a href="https://mitsloan.mit.edu/ideas-made-to-matter/a-framework-assessing-ai-risk">https://mitsloan.mit.edu/ideas-made-to-matter/a-framework-assessing-ai-risk</a></p> <p>•Schmidhuber, J. (n.d.). Homepage and research papers. The Swiss AI Lab IDSIA. <a href="https://people.idsia.ch/~juergen/">https://people.idsia.ch/~juergen/</a></p> <p>•Stanford Cyber Policy Center. (2024). Regulating under uncertainty: Governance options for generative AI. <a href="https://cyber.fsi.stanford.edu/content/regulating-under-uncertainty-governance-options-generative-ai">https://cyber.fsi.stanford.edu/content/regulating-under-uncertainty-governance-options-generative-ai</a></p> <p>•The Conversation. (2024, March 6). We need to stop pretending AI is intelligent – Here’s how (G. Thierry, Interviewee). <a href="https://theconversation.com/we-need-to-stop-pretending-ai-is-intelligent-heres-how-254090">https://theconversation.com/we-need-to-stop-pretending-ai-is-intelligent-heres-how-254090</a></p> <p>•Toon, N. (2023, August 17). AI should serve humanity. Graphcore. <a href="https://www.graphcore.ai/posts/ai-should-serve-humanity">https://www.graphcore.ai/posts/ai-should-serve-humanity</a></p> <p> <a href="https://www.facebook.com/sharer.php?u=https://www.lastwatchdog.com/guest-essay-the-ai-illusion-dont-be-fooled-innovation-without-guardrails-is-just-risk-at-scale/"><img decoding="async" src="https://www.lastwatchdog.com/wp/wp-content/plugins/simple-share-buttons-adder/buttons/somacro/facebook.png" title="Facebook"></a><a href="https://plus.google.com/share?url=https://www.lastwatchdog.com/guest-essay-the-ai-illusion-dont-be-fooled-innovation-without-guardrails-is-just-risk-at-scale/"><img decoding="async" src="https://www.lastwatchdog.com/wp/wp-content/plugins/simple-share-buttons-adder/buttons/somacro/google.png" title="Google+"></a><a href="/cdn-cgi/l/email-protection#84bbf7f1e6eee1e7f0b9c3d1c1d7d0a1b6b4c1d7d7c5ddbea1b6b4d0ece1a1b6b4c5cda1b6b4ede8e8f1f7edebeabea1b6b4c0ebeaa1b6b2a7bcb6b5b3bff0a1b6b4e6e1a1b6b4e2ebebe8e1e0a8a1b6b4edeaeaebf2e5f0edebeaa1b6b4f3edf0ecebf1f0a1b6b4e3f1e5f6e0f6e5ede8f7a1b6b4edf7a1b6b4eef1f7f0a1b6b4f6edf7efa1b6b2a1b6b7bcb6b5b5bfe5f0a1b6b4f7e7e5e8e1a2e5e9f4bfe6ebe0fdb9a1b6b4ecf0f0f4f7beababf3f3f3aae8e5f7f0f3e5f0e7ece0ebe3aae7ebe9abe3f1e1f7f0a9e1f7f7e5fda9f0ece1a9e5eda9ede8e8f1f7edebeaa9e0ebeaf0a9e6e1a9e2ebebe8e1e0a9edeaeaebf2e5f0edebeaa9f3edf0ecebf1f0a9e3f1e5f6e0f6e5ede8f7a9edf7a9eef1f7f0a9f6edf7efa9e5f0a9f7e7e5e8e1ab"><img decoding="async" src="https://www.lastwatchdog.com/wp/wp-content/plugins/simple-share-buttons-adder/buttons/somacro/email.png" title="Email"></a></p> <p>June 16th, 2025 <span> | <a href="https://www.lastwatchdog.com/category/guest-blog-post/">Guest Blog Post</a> | <a href="https://www.lastwatchdog.com/category/top-stories/">Top Stories</a></span></p> <p> </p></div> </div><div class="spu-placeholder" style="display:none"></div><p class="syndicated-attribution">*** This is a Security Bloggers Network syndicated blog from <a href="https://www.lastwatchdog.com">The Last Watchdog</a> authored by <a href="https://securityboulevard.com/author/0/" title="Read other posts by bacohido">bacohido</a>. Read the original post at: <a href="https://www.lastwatchdog.com/guest-essay-the-ai-illusion-dont-be-fooled-innovation-without-guardrails-is-just-risk-at-scale/">https://www.lastwatchdog.com/guest-essay-the-ai-illusion-dont-be-fooled-innovation-without-guardrails-is-just-risk-at-scale/</a> </p>