chatGPT has attracted considerable attention since its launch in November 2022, prompting concerns about the quality of its output as well as the potential consequences of widespread use and misuse of this and similar tools.
Virginia Dignum has discovered that it has a fundamental misunderstanding of basic propositional logic. In answer to her question, chatGPT claims that the statement “if the moon is made of cheese then the sun is made of milk” is false, and goes on to argue that “if the premise is false then any implication or conclusion drawn from that premise is also false”. In classical propositional calculus, however, a conditional with a false antecedent is vacuously true, whatever its consequent says. In her test, the algorithm persists in what she calls “wrong reasoning”.
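For readers who have not met the material conditional, the classical truth conditions can be sketched in a few lines of Python (my illustration, not part of Dignum’s test): `p → q` is defined as `(not p) or q`, so any conditional with a false antecedent comes out true.

```python
# Classical material implication: p -> q is defined as (not p) or q.
def implies(p: bool, q: bool) -> bool:
    """Truth-functional conditional of classical propositional logic."""
    return (not p) or q

# Print the full truth table.
for p in (True, False):
    for q in (True, False):
        print(f"p={p!s:5} q={q!s:5}  p->q = {implies(p, q)}")

# "The moon is made of cheese" is false, so the conditional is
# vacuously true regardless of the consequent -- contrary to
# chatGPT's claim that a false premise makes the implication false.
assert implies(False, True) and implies(False, False)
```

The only row on which the conditional is false is the one with a true antecedent and a false consequent, which is exactly the point chatGPT gets wrong.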
I can’t exactly recall at what point in my education I was introduced to propositional calculus, but I suspect that most people are unfamiliar with it. If Professor Dignum were to ask a hundred people the same question, it is possible that the majority would agree with chatGPT.
In which case, chatGPT counts as what A.A. Milne once classified as a third-rate mind – “thinking with the majority”. I have previously placed Google and other Internet services into this category.
Other researchers have tested chatGPT against known logical paradoxes. In one experiment (reported via LinkedIn) it recognizes the Liar Paradox when Epimenides is explicitly mentioned in the question, but apparently not otherwise. No doubt someone will be asking it about the baldness of the present King of France.
One of the concerns expressed about AI-generated text is that it might be used by students to generate coursework assignments. At the present state of the art, AI-generated text may look plausible but typically lacks coherence; it would be unlikely to be awarded a high grade, but it could easily earn a pass mark. In any case, I suspect many students produce their essays by following a similar process, grabbing random ideas from the Internet and assembling them into a semi-coherent narrative without doing much real thinking.
There are two issues here for universities and business schools. Firstly, whether the use of these services counts as academic dishonesty, similar to using an essay mill, and how this might be detected, given that standard plagiarism detection software won’t help much. Secondly, whether the possibility of passing a course without demonstrating correct and joined-up reasoning (aka “thinking”) represents a systemic failure in the way students are taught and evaluated.
Andrew Jack, AI chatbot’s MBA exam pass poses test for business schools (FT, 21 January 2023) HT @mireillemoret
Gary Marcus, AI’s Jurassic Park Moment (CACM, 12 December 2022)
Christian Terwiesch, Would Chat GPT3 Get a Wharton MBA? (Wharton White Paper, 17 January 2023)