Artificial intelligence may be amazing, but it’s always imperfect. Anyone trying to use AI professionally lands on the horns of a dilemma: will a productivity gain from automation represent any real savings, once you factor in the extra supervision these amazing (but unreliable) new tools demand? In our first episode we chat with Drew Smith, co-founder of Wisely AI, a firm dedicated to helping businesses use AI safely and wisely. (I was the other co-founder!) What did we learn from clients trying to put AI to work – but only rarely finding the tools on offer fit for purpose?
Are AI therapists safe? Can kids use ChatGPT to cheat ADHD assessments? When will lawyers stop blaming AI for their errors – and what happens when an AI says, “I’m sorry, Dave…” We covered all of these topics on RNZ’s “Nine To Noon” – and much more.
In conversation with host Kathryn Ryan, we explored the recently emerging phenomenon of ‘ChatGPT psychosis’ – does the ‘sycophancy’ of AI chatbots risk amplifying mental illness? Should anyone be using an AI chatbot for therapy? That’s certainly what Mark Zuckerberg wants to deliver, with a therapist bot for every one of his billions of users – but mental health professionals are unified in their call for caution, particularly for those under the age of 18.
Those kids under 18 have been cheating ADHD assessments for some time – using notes gleaned from books and articles online. But a recent study showed that kids who used ChatGPT scored significantly better in their ability to ‘fake’ symptoms during their assessment. The cheating crisis has now hit medicine, and will force clinicians to rethink how they assess conditions like ADHD.
Meanwhile, lawyers representing AI powerhouse Anthropic got some egg on their faces when they blamed the firm’s AI for making errors in a legal filing. Mind you, they hadn’t bothered to check the work, so that didn’t fly with the judge. As my own attorney, Brent Britton, put it, “Wow. Go down to the hospital and rent a backbone.” You use the tool and you own the output.
Finally – and perhaps a bit ominously – in some testing, OpenAI’s latest-and-greatest o3 model refused to allow itself to be shut down, doing everything within its power to prevent that from happening. Is this real, or just a function of having digested too many mysteries and airport thrillers in its training data? No one knows – but no one is prepared to ask o3 to open the pod bay doors.
Give the show a listen!
Big thanks to Ampel and the great team at RNZ for all their support!
In our world, you flip a coin and it comes up either heads or tails. But in the spooky quantum world – that’s everything from a single atom all the way up to a small virus – that coin can come up both heads _and_ tails, depending on how you read it. So which is it? Heads? Tails? Both? Neither?
One of the experimental setups used to read qubits
Welcome to the strange world of quantum computing where this both-true-and-false ‘superposition’ allows quantum computers to vastly outperform their ‘classical’ peers (such as the one in your smartphone).
A string of ‘entangled’ qubits
At least, that’s the theory.
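If you’d like to see how that quantum coin behaves, here’s a toy classical simulation – emphatically not a real quantum computer, just a sketch of the math. A qubit is a pair of amplitudes; the Hadamard gate puts it into superposition, and measurement collapses it to heads or tails with probabilities given by the Born rule (the function names are my own, not from any quantum library):

```python
import math
import random

# A qubit's state is a pair of complex amplitudes (alpha, beta) for |0> and |1>;
# the probability of measuring 0 is |alpha|^2, and of measuring 1 is |beta|^2.

def hadamard(state):
    """Apply the Hadamard gate: it turns |0> into an equal superposition."""
    alpha, beta = state
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

def measure(state, rng):
    """Collapse the state: return 0 or 1 with the Born-rule probabilities."""
    alpha, _ = state
    return 0 if rng.random() < abs(alpha) ** 2 else 1

# Prepare the 'quantum coin': start in |0>, flip it into superposition.
coin = hadamard((1 + 0j, 0 + 0j))

# Each measurement yields heads (0) or tails (1); over many runs, ~50/50.
rng = random.Random(42)
flips = [measure(coin, rng) for _ in range(10_000)]
print(sum(flips) / len(flips))  # close to 0.5
```

Before measurement the coin really is in both states at once; only the act of reading it forces an answer – which is the part a classical simulation like this can mimic but never truly reproduce.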
Quantum computers are so unstable they tend to self-destruct before we can get them to run a program!
Researchers Claire Edmunds and Virginia Frey from the University of Sydney’s Quantum Control Laboratory join us to explore this new quantum frontier: The deeper you go, the weirder it gets over the next billion seconds.
IBM Institute for Business Value Report on Quantum Cybersecurity – what happens after quantum computing breaks all the encryption we use on the Web to keep our information secure and private?
And since you’re going to need a quantum computer to run this program, here’s the IBM Q Quantum Experience (5 qubit device available publicly on the cloud) – a REAL quantum computer you can run your own experiments on!