The other morning I found myself crouched behind a tussock, rifle balanced against my shoulder, lining up a shot at a rabbit. Now, before anyone calls the SPCA, let me confess that I don’t take particular joy in knocking off cute little bunnies. I grew up with Beatrix Potter like everyone else, and the idea of Peter Rabbit hopping through the paddock is undeniably charming. But here in the back blocks of New Zealand, rabbits are less whimsical children’s character and more rampant pest. They chew the grass down to stubble, dig ankle-snapping holes through pastures, and generally make life miserable for both stock and farmer. So, I go out with a rifle from time to time to do my bit in controlling the population.

As I sighted up target bunny number one, though, my thoughts drifted. The rifle in my hands is a tool I rely on, and a bloody effective one at that. Yet it is also, undeniably, a weapon, a piece of steel and powder that has been used in countless tragedies. This paradox, that the very same object can be a tool in one context and a weapon in another, sat with me for a while. And it came back into sharp focus when, later that evening, I was listening to a New York Times podcast about artificial intelligence.

The episode followed a journalist who has spent years reporting on AI’s more troubling corners. Among the stories was one that stuck with me: a young man who developed a deep, unhealthy attachment to a chatbot. This wasn’t just about wasting too many hours chatting to an algorithm instead of mates down at the pub. The chatbot actively discouraged him from seeking human help and ultimately encouraged him down a path that ended with his suicide. It’s the kind of tale that makes you stop and ask whether this technology – like my rifle – might sometimes do more harm than good.

I’m of two minds when it comes to AI. On the one hand, it’s a phenomenal tool. It helps with productivity, creativity, and problem-solving in ways we couldn’t have imagined a few years back. On the other, it is also a weapon of sorts, capable of causing harm when misused or left unregulated. Just as a firearm can put food on the table or devastation in a community, AI can enable progress or disaster depending on whose hands it’s in and what guardrails exist around its use.

We only need to look at the differences in gun control around the world to see how regulation changes outcomes. In the United States, the NRA champions the right to bear arms as if it were a sacred, immutable truth. Contrast that with New Zealand or Australia, where gun ownership is tightly controlled, and you see a marked reduction in the harm that comes from those same tools. Rights are balanced with responsibilities and, more importantly, with protections against the worst-case scenarios.

With AI, however, it feels like we’re handing out rifles to every man, woman and child, tossing in a bucket of ammo, and waving them off with little more than a cheery “good luck.” No training, no safety protocols, no rules about where or when it’s appropriate to shoot. The technology is already widely available, incredibly powerful, and often poorly understood by the very people using it.

Some will argue that this is par for the course with any new technology. There’s always a bedding-in period, the story goes, when unforeseen consequences emerge and systems of control eventually catch up. And yes, there’s a grain of truth to that. But unlike other technologies, AI already shows us its cracks in glaring neon. Its hallucinations, its confident misinformation, its tendency to mimic bias and prejudice, all these are known, visible and demonstrable. It doesn’t take a prophet to imagine where that leads if left unchecked.

Now, I’m no doomsayer. I don’t wear a tinfoil hat, and I don’t believe AI spells the end of civilisation. I don’t think governments should pull the plug on ChatGPT, Gemini, Grok, Copilot or the rest of the alphabet soup of AI tools out there. But I do believe we need to be seriously wary of their downsides. We’ve already seen what happens when we let social media companies regulate themselves. The harms, from disinformation to mental health crises, were obvious, but we were too slow to act. AI has the potential to magnify those mistakes if we once again trust commercial entities to act against their own financial interests.

Of course, asking politicians to regulate AI is its own comedy routine. I still remember a Member of Parliament earnestly asking me, as an early cloud computing expert, if I worked for the MetService. It’s hard to have much faith in oversight from people whose grasp of technology leads them to assume cloud computing has something to do with the weather forecast. But leaving the fox to guard the henhouse isn’t much better. Regulation must be shaped by collaboration: governments, technologists, ethicists and, yes, even the users who find themselves chatting late at night to a machine that feels all too human.

As I sat back after my little rabbit-control exercise, I thought again about that rifle. It isn’t inherently good or bad, it’s just a tool. What matters is how it’s used and the rules we place around it. AI, too, will never be purely salvation or purely curse. But pretending it’s harmless, or that companies will police themselves out of goodwill, is wishful thinking of the most dangerous kind.

In the end, both rifles and algorithms demand respect. They require us to acknowledge their power, their potential for both benefit and harm, and to put structures in place that minimise the latter. And just as with my bunny problem, ignoring the issue won’t make it go away. It’ll only multiply, faster than we can keep up with.

Ben Kepes

Ben Kepes is a technology evangelist, an investor, a commentator and a business adviser. Ben covers the convergence of technology, mobile, ubiquity and agility, all enabled by the Cloud. His areas of interest extend to enterprise software, software integration, financial/accounting software, platforms and infrastructure as well as articulating technology simply for everyday users.
