Can ChatGPT make you crazy?

Are AI therapists safe? Can kids use ChatGPT to cheat ADHD assessments? When will lawyers stop blaming AI for their errors – and what happens when an AI says, “I’m sorry, Dave…”? We covered all of these topics on RNZ’s “Nine To Noon” – and much more.

In conversation with host Kathryn Ryan, we explored the recently emerging phenomenon of ‘ChatGPT Psychosis’ – could ‘sycophancy’ in AI chatbots amplify mental illness? Should anyone be using an AI chatbot for therapy? That’s certainly what Mark Zuckerberg wants to deliver – a therapist bot for every one of his billions of users – but mental health professionals are unified in their call for caution, particularly for those under the age of 18.

Those kids under 18 have been cheating ADHD assessments for some time – using notes gleaned from books and articles online. But a recent study showed that kids who used ChatGPT scored significantly better in their ability to ‘fake’ symptoms during their assessment. The cheating crisis has now hit medicine, and will force a rethink of how clinicians assess medical conditions.

Meanwhile, lawyers representing AI powerhouse Anthropic got some egg on their faces when they blamed the firm’s AI for errors in a legal filing. Mind you, they hadn’t bothered to check the work, so that didn’t fly with the judge. As my own attorney, Brent Britton, put it: “Wow. Go down to the hospital and rent a backbone.” You use the tool, you own the output.

Finally – and perhaps a bit ominously – in some testing, OpenAI’s latest-and-greatest o3 model refused to allow itself to be shut down, doing everything within its power to prevent that from happening. Is this real, or just a function of having digested too many mysteries and airport thrillers in its training data? No one knows – but no one is prepared to ask o3 to open the pod bay doors.

Give the show a listen!

Big thanks to Ampel and the great team at RNZ for all their support!