Don’t forget all the things that a core team performs to a tee, but that you never see

The third ‘fragmentation wave’ of the IT revolution is upon us, it seems. Fragmentation is a repeated pattern in the IT revolution, one that has given us object-oriented programming and agile/DevOps as solutions for managing complexity. Now, it is the organ…

Ain’t No Lie — The unsolvable(?) prejudice problem in ChatGPT and friends

Thanks to Gary Marcus, I found out about this research paper. And boy, is this both a clear illustration of a fundamental flaw at the heart of Generative AI and the uncovering of a doubly problematic and potentially unsolvable problem: fine-tuning …

Memorisation: the deep problem of Midjourney, ChatGPT, and friends

If we ask GPT to get us “that poem that compares the loved one to a summer’s day” we want it to produce the actual Shakespeare Sonnet 18, not some confabulation. And it does. It has memorised this part of the training data. This is both sought-after an…

What makes Ilya Sutskever believe that superhuman AI is a natural extension of Large Language Models?

I came across a 2-minute video in which Ilya Sutskever — OpenAI’s chief scientist — explains why he thinks current ‘token-prediction’ large language models will be able to become superhuman intelligences. How? Just ask them to act like one.

Artificial General Intelligence is Nigh! Rejoice! Be very afraid!

Should we be hopeful or scared about imminent machines that are as intelligent as, or more intelligent than, humans? Surprisingly, this debate is even older than computers, and from the mathematician Ada Lovelace comes an interesting observation that is as valid now as…

The hidden meaning of the errors of ChatGPT (and friends)

We should stop labelling the wrong results of ChatGPT and friends (the ‘hallucinations’) as ‘errors’. Even Sam Altman — CEO of OpenAI — agrees: they are more ‘features’ than ‘bugs’, he has said. But why is that? And why should we not call them errors?