LM: #357: LLMs are not human
I do not in the least feel that ChatGPT, or any other LLM, is human.
Sure, I tell it “please,” “thank you,” and “what the f*ck was that?”, but I don’t think of it as conscious.
I instead see LLMs as magical LEGO-brick organizers: I ask a question, it goes through almost the entire corpus of human knowledge to mix and match “bricks” – that is, words – in the most-average way appropriate for that question.
And as far as I can understand the technical explanations of it, that seems to be exactly what it does.
I also find this an effective mental model for judging which questions I can and can’t expect good answers to. The more that’s been written about a subject, the more bricks there are, and the better the chances the answer will make sense and be accurate. (If it matters, I always ask for sources.)
So if I’m looking for advice on how to formulate my next batch of homemade soap, there’s a lot already written about that. If I want it to write one of these emails, forget about it.
This is probably why the most valuable use case I’ve come across is understanding historical information and stress-testing my own explanations. It’s been an invaluable conversation partner for writing about Leonardo and Raphael; otherwise, I would probably have had to get a degree to learn all I have.
An LLM is no more human than a book (if anything, less so).
Aphorism: “Become the best in the world at what you do. Keep redefining what you do until this is true.” —Naval Ravikant
Book: Innovators (Amazon) is David Galenson’s follow-up to Old Masters, Young Geniuses.
Best,
David
