AI as Long-Term Memory
The current state-of-the-art Gemini model can fit roughly 1.5 million words in its context. That’s enough for me to upload the full text of all fourteen of my books, plus every article, blog post, or interview I’ve ever published—and the entirety of my collection of research notes that I’ve compiled over the years. The Gemini team has announced plans for a model that could hold more than 7 million words in its short-term memory. That’s enough to fit everything I’ve ever written, plus the hundred books and articles that most profoundly shaped my thinking. An advanced model capable of holding all that information in focus would have a profound familiarity with all the words and ideas that have shaped my personal mindset. Certainly its ability to provide accurate and properly cited answers to questions about my worldview (or my intellectual worldview, at least) would exceed that of any other human. In some ways it would exceed my own knowledge, thanks to its ability to instantly recall facts from books I read twenty years ago, or make new associations between ideas that I have long since forgotten. It would lack any information about my personal or emotional history—though I suppose if I had maintained a private journal over the past decades it would be able to approximate that part of my mindset as well. But as a reconstruction of my intellectual grounding, it would be unrivaled. If that is not considered material progress in AI, there is something wrong with our metrics.
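The arithmetic behind those capacity claims is easy to sketch. Here is a rough back-of-envelope calculation, assuming about 1.3 tokens per English word (a common heuristic, not exact tokenization); the corpus sizes are illustrative assumptions, not actual word counts of any real books:

```python
# Back-of-envelope: does a personal corpus fit in a long-context model?
# Assumes ~1.3 tokens per English word -- a rough heuristic, not real tokenization.

TOKENS_PER_WORD = 1.3  # assumed average for English prose

def words_that_fit(context_tokens: int) -> int:
    """Approximate word capacity of a given context window."""
    return int(context_tokens / TOKENS_PER_WORD)

def corpus_fits(word_counts: list[int], context_tokens: int) -> bool:
    """True if the combined corpus fits in the window, by this estimate."""
    return sum(word_counts) <= words_that_fit(context_tokens)

# Hypothetical corpus: fourteen books at ~80,000 words each,
# plus ~400,000 words of articles, posts, and research notes.
corpus = [80_000] * 14 + [400_000]

print(words_that_fit(2_000_000))       # a ~2M-token window holds ~1.5M words
print(corpus_fits(corpus, 2_000_000))  # True: ~1.52M words squeezes in
```

By the same estimate, the announced larger model (roughly 7 million words) would leave room for the hundred most formative books on top of a corpus like this.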
Having a “second brain” like this—even with a few million words of context—is enormously useful for me personally. When I’m on book tour, I often tell people that publishing a book is a kind of intellectual optical illusion: when you read a book, it seems as though the author has command of an enormous number of facts and ideas—but in reality, the book is a condensation of all the facts and ideas that were in his or her mind at some point over the three years it took to write it. At any given moment, my own knowledge and recall of the full text of a book I’ve written is much more like a blurry JPEG than an exact reproduction. And my available knowledge of books that I wrote ten or twenty years ago is even blurrier. Now that I have so much of my writing and reading history stored in a single notebook—which I have come to call my “Everything” notebook—my first instinct whenever I stumble across a new idea or intriguing story is to go back to the Everything notebook and see if there are any fertile connections lurking in that archive. That is, in fact, how I got to the story of Henry Molaison that I began with; I was mulling over the themes of short- and long-term memory in the context of AI, and asked the Everything notebook if it had anything to contribute, and the model reminded me of the tragic tale of patient H. M. that I had first read about in the 1990s. Who, exactly, made that connection? Was it me or the machine? I think the answer has to be that it was both of us, via some newly entangled form of human-machine collaboration that we are just beginning to understand.
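The "Everything notebook" workflow can be sketched in a few lines: gather a labeled personal archive into one long prompt, then hand the whole thing to a long-context model. This is a minimal illustration under stated assumptions—the prompt wording, the in-memory archive structure, and the sample text are all hypothetical, and the actual model call is left out:

```python
def build_archive_prompt(archive: dict[str, str], question: str) -> str:
    """Assemble a personal archive into a single long-context prompt.

    `archive` maps a source label (a book, article, or notes file) to its
    full text. The model is instructed to answer only from that material,
    citing the source label for each claim.
    """
    sections = "\n\n".join(
        f"=== {name} ===\n{text}" for name, text in sorted(archive.items())
    )
    return (
        "You are my long-term memory. Answer only from the archive below, "
        "citing the source label for each claim.\n\n"
        f"{sections}\n\nQuestion: {question}"
    )

# Hypothetical usage: a single notes file and a question about its themes.
prompt = build_archive_prompt(
    {"notes-1990s.txt": "Patient H. M. lost the ability to form new memories..."},
    "What do my notes say about short- and long-term memory?",
)
```

The resulting string would then be sent to a long-context model; everything else—the recall, the associations—happens on the model’s side of the collaboration.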