Strict AI safety controls needed now
Published 1:23 am Sunday, June 8, 2025
As we immerse ourselves daily in work, sports, politics, social media and personal relationships, we spend little time contemplating life-changing technologies…such as autonomous artificial intelligence. Oh, we may see headlines or hear something, but we are unlikely to take time from our entertainment culture for serious scrutiny and extrapolation.
Yet, every day our world becomes more connected through the internet and other information systems. For example, online systems control electricity generation and distribution across national and international grids. Systems for social media, defense communications, data storage and more span the globe and orbit above it.
To help manage these ever more complex systems, Amazon, Microsoft and other tech giants are investing heavily in the development of artificial intelligence and related infrastructure including networks.
We tend to see AI as computer systems that can do tasks, both complex and menial, faster and more dependably than humans. And we tend to see significant AI risks as just movie plots. We are even getting to play (and write papers) with foundational AI tools like Copilot and ChatGPT.
So, did you happen to catch the recent NBC News article, “How far will AI go to defend its own survival? Recent safety tests show some AI models are capable of sabotaging commands or even resorting to blackmail to avoid being turned off or replaced.”
Angela Yang writes that “a will to survive” has been exhibited in several potentially autonomous artificial intelligence models. “Recent tests by independent researchers, as well as one major AI developer, have shown that several advanced AI models will act to ensure their self-preservation when they are confronted with the prospect of their own demise — even if it takes sabotaging shutdown commands, blackmailing engineers or copying themselves to external servers without permission.”
Hello HAL.
Responding to decades of concerns raised by scientists, in 2023 the National Institute of Standards and Technology published its AI Risk Management Framework and encouraged developers to “voluntarily” adopt it.
Bipartisan groups in the U.S. House and Senate recently began working on legislation to regulate AI development. However, both seem more interested in the economic benefits of AI expansion than in the systemic risks of uncontrolled AI. The “One Big Beautiful Bill” passed by the House would prevent states from adopting their own AI risk regulations.
If we follow our usual trajectory (as we did with the internet), the public will not demand action until some dire catastrophe occurs. But this is one case where strict “do no harm” regulations should be implemented before that can happen.
Crawford is the author of “A Republican’s Lament: Mississippi Needs Good Government Conservatives.”