
To give a very personal example, I did something I’ve never done before after receiving your email. I gave two well-known LLMs the basic parameters for writing this column – and then asked them to respond to a typical (but not real) Work Therapy question. I wish I could say what they came up with was robotic nonsense – two years ago it might have been. In October 2025, however, they both produced something genuinely interesting and potentially very helpful, replete with a few lovely turns of phrase. That’s not to say the results were flawless; it’s simply a fact that LLMs still produce wildly inaccurate, sometimes farcically silly, responses to questions and task requests. One of the two had a couple of sentences that seemed a touch off – I’d go so far as to say grammatically wrong, as the AI tried to force a word I’d used in the prompt into the response – but the other was harder to fault. Still, no, LLM-based chatbots are not so advanced that they have become modern-day Delphic Oracles: sources of god-like wisdom and unimpeachable advice.