
Get ‘The Practical Futurist’ newsletter!

We’re starting a newsletter to keep listeners abreast of all of the most important bits of the future – the facts you need today to make decisions for a better tomorrow. The Practical Futurist offers clear, practical advice on how to navigate the possibilities, opportunities – and pitfalls – of a future that’s heading for us so rapidly it’s merging with the present.

Sign up by following the link here.

Can ChatGPT make you crazy?

Are AI therapists safe? Can kids use ChatGPT to cheat ADHD assessments? When will lawyers stop blaming AI for their errors – and what happens when an AI says, “I’m sorry, Dave…” We covered all of these topics on RNZ’s “Nine To Noon” – and much more.

In conversation with host Kathryn Ryan, we explored the recently emerging phenomenon of ‘ChatGPT Psychosis’ – does ‘sycophancy’ in AI chatbots risk amplifying mental illness? Should anyone be using an AI chatbot for therapy? That’s certainly what Mark Zuckerberg wants to deliver, with a therapist bot for every one of his billions of users – but mental health professionals are unified in their call for caution, particularly for those under the age of 18.

Those kids under 18 have been cheating ADHD assessments for some time – using notes gleaned from books and articles online. But a recent study showed that kids who used ChatGPT scored significantly better in their ability to ‘fake’ symptoms during their assessment. The cheating crisis has now hit medicine, and will force clinicians to rethink how they assess these conditions.

Meanwhile, lawyers representing AI powerhouse Anthropic got egg on their faces when they blamed the firm’s AI for making errors in a legal filing. Mind you, they hadn’t bothered to check the work, so that didn’t fly with the judge. As my own attorney, Brent Britton, put it: “Wow. Go down to the hospital and rent a backbone.” You use the tool, you own the output.

Finally – and perhaps a bit ominously – in some testing, OpenAI’s latest-and-greatest o3 model refused to allow itself to be shut down, doing everything within its power to prevent that from happening. Is this real, or just a function of having digested too many mysteries and airport thrillers in its training data? No one knows – but no one is prepared to ask o3 to open the pod bay doors.

Give the show a listen!

Big thanks to Ampel and the great team at RNZ for all their support!

Mo Meta, Mo Problems: Could Facebook be broken up?

From Radio New Zealand’s “Nine To Noon“: Meta – the parent of Facebook, Instagram, WhatsApp, and much more besides – finds itself fighting for its life against a suit from the US Federal Trade Commission, which charges the company with abuse of monopoly power for acquiring Instagram and WhatsApp in order to neutralise up-and-coming competitors. Even in Trump’s America, that could result in the break-up of the trillion-dollar social media giant. Plus: are you up for a Day of Unplugging? No devices, no screens, for 24 hours? How about giving it a go – tomorrow? Would that excite or terrify you?

Should we give up copyright to beat China in the race for AI?

In conversation with Radio New Zealand’s Nine To Noon host Kathryn Ryan: Christie’s held its first auction of AI-generated art, earning a million dollars. The models behind those artworks had been ‘trained’ on countless images owned by other people. Is that legal? OpenAI and Google claim that unless they have free rein to use – well, basically everything everywhere ever created by humanity – to train their AI models, the Chinese will win the AI race. Meanwhile, Hollywood’s A-listers called for protection of artists and their works against what they see as copyright theft. Plus: A Clockwork Orange comes to life for prisoners in solitary confinement – and is your chatbot flattering you?

AI WARTECH – Has technology turned to the dark side?

For the last five years I’ve been a regular guest on RNZ’s Nine to Noon. This week, host Kathryn Ryan and I discussed the pivot in big tech – away from consumers, and toward the defense sector.

(Originally broadcast on Radio New Zealand’s Nine to Noon on 13 February 2025)

Last week Google amended its ethical AI policies to allow its AI tools to be used in weapons – to preserve ‘national security’. Google is among the last to embrace a new market for its products: defense and weapons. Is this a new thing? Or is tech simply returning to its roots as a service industry for the military-industrial complex? Is this why the US and UK refused to sign an international AI declaration this week? Plus – why does leading AI company Anthropic insist job applicants write their submissions – without the help of AI?

The Next Billion Cars – “Gradually. Then Suddenly”

“How did you go bankrupt?” begins the oft-quoted line from Hemingway. “Two ways. Gradually – then suddenly.” That’s how the automotive sector feels at the end of 2024, with Nissan maybe preparing for bankruptcy and Stellantis firing its CEO and VW struggling with strikes and low sales and GM shuttering Cruise and on and on and on. Sally Dominguez and Drew Smith join Mark Pesce in studio to explore what’s really happening – and what it all means for THE NEXT BILLION CARS.

We mentioned a few things during our conversation – here are the links:

1) Want to design the car of the Future? Here are 8,000 designs to get you started. (MIT Technology Review)

2) Chinese Carmakers Are Taking Mexico by Storm While Eyeing U.S. (New York Times)

3) And you really should watch Margin Call, a spellbinding drama about the 2008 financial crisis (Wikipedia).

Series 2024 – Episode 7: “The War over Plant-Based Foods with Nick Hazell”

In 2018, Nick Hazell founded v2food – an amazing startup making plant-based substitutes for meat that got extensive coverage on THE NEXT BILLION SECONDS. Six years later, it’s getting difficult to find their product on supermarket shelves – and there’s been a broader roll-back from plant-based alternatives. What’s happened? Nick Hazell doesn’t have all the answers – but he asks some of the right questions…

Series 2024 – Episode 6: A climate ‘Moonshot’ with Nick Hazell’s Algenie

The impacts of global heating have become persistent and profound, so we need to do as much as we can to lessen those impacts, as quickly as we can. The best paths forward lean into existing, natural processes – and this is exactly where Nick Hazell has arrived with Algenie. Can algae restore balance to our ecosystem? Is it the moonshot we need to transform our future?

The Next Billion Cars – 2024 in Review (part 2)

In this final year in review episode, Sally Dominguez, Drew Smith and Mark Pesce address the big, smelly elephant in the room: The change of government – and direction – over in the United States of America. Could massive tariffs plus ‘drill, baby, drill’ together land a knockout punch on the US EV industry? Plus – predictions for 2025. We’re bringing the year to a thrilling close on this episode of THE NEXT BILLION CARS.

The Next Billion Cars – 2024 in review (part 1)

Whenever Mark, Sal and Drew get together, sparks fly. So much has happened since our visit to CES 2024 that we reckoned it was time to draw all the year’s threads together: Are we pulling back from EVs? Will China dominate manufacturing? And what about all that data vehicles are collecting? It’s been full on – enough to require a bit of a ‘group hug’. Part one of two.

The Next Billion Cars – The $100,000,000,000 Lie

In 2016, Tesla CEO Elon Musk instructed his team of engineers to ‘hard code’ the first demo of what would become ‘Full Self Driving’. A faked video drove panic across the entire automotive sector, leading to massive (and mostly failed) investments in technologies for autonomy.

The Next Billion Cars: Neo-Malaise

What does it mean to walk away from the dream of a lifetime? For series co-host Drew Smith, that’s exactly what he’s doing. For over 20 years he dedicated himself to driving positive change in an industry that has resisted it at almost every turn – and now he’s done. In this episode of The Next Billion Cars, he explores how the automotive industry has become its own worst enemy, and what might happen next.