22 NOV 2024 by ideonexus

 AI as Long-Term Memory

The current state-of-the-art Gemini model can fit roughly 1.5 million words in its context. That’s enough for me to upload the full text of all fourteen of my books, plus every article, blog post, or interview I’ve ever published—and the entirety of my collection of research notes that I’ve compiled over the years. The Gemini team has announced plans for a model that could hold more than 7 million words in its short-term memory. That’s enough to fit everything I’ve ever written, p...
  1  notes
 
29 APR 2024 by ideonexus

 Dead Internet Theory is True

Of the platforms where we know flesh-and-blood humans do spend enough time, 92% of all content is created by 10% of all users, and engagement with different posts can vary from 0.03% to 0.1% of all viewers. In other words, of the 52.6% of internet traffic that is human-driven, about 9 in 10 users stick to messaging friends and family and passively consume content mostly meant to sell them things, or to test whether they would be interested in propaganda campaigns run by governments, both theirs ...
  1  notes
 
22 DEC 2023 by ideonexus

 What Will be Left After the AI Bubble Pops?

Every bubble pops eventually. When this one goes, what will be left behind? Well, there will be little models – Hugging Face, Llama, etc – that run on commodity hardware. The people who are learning to “prompt engineer” these “toy models” have gotten far more out of them than even their makers imagined possible. They will continue to eke out new marginal gains from these little models, possibly enough to satisfy most of those low-stakes, low-dollar ap...
Folksonomies: technology ai
  1  notes
 
27 MAR 2023 by ideonexus

 LLMs are Lossy Compression for the Entire WWW

To grasp the proposed relationship between compression and understanding, imagine that you have a text file containing a million examples of addition, subtraction, multiplication, and division. Although any compression algorithm could reduce the size of this file, the way to achieve the greatest compression ratio would probably be to derive the principles of arithmetic and then write the code for a calculator program. Using a calculator, you could perfectly reconstruct not just the million ex...
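The thought experiment above can be sketched in a few lines of Python. Everything here is illustrative: the example file is generated synthetically, and the "calculator program" is simply the generator script itself, which reproduces every line exactly and therefore serves as the most compact representation of the file.

```python
import random
import zlib

# Hypothetical illustration: a large text file of arithmetic examples.
random.seed(0)
lines = []
for _ in range(100_000):
    a, b = random.randrange(1000), random.randrange(1, 1000)
    lines.append(f"{a} + {b} = {a + b}")
data = "\n".join(lines).encode()

# A generic compressor shrinks the file but still stores every example.
compressed = zlib.compress(data, level=9)

# "Deriving the principles of arithmetic" instead: the generator script
# below reproduces every line exactly, so it *is* the compressed form.
generator = (
    "import random\n"
    "random.seed(0)\n"
    "for _ in range(100_000):\n"
    "    a, b = random.randrange(1000), random.randrange(1, 1000)\n"
    "    print(f'{a} + {b} = {a + b}')\n"
).encode()

print(len(data), len(compressed), len(generator))
```

The generic compressor gets a modest ratio; the few-hundred-byte program gets an enormous one, because it captures the rule rather than the instances.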
Folksonomies: ai llm large language model
  1  notes
 
05 JAN 2023 by ideonexus

 Identifying AI Online

Before you continue, pause and consider: How would you prove you're not a language model generating predictive text? What special human tricks can you do that a language model can't? 1. Triangulate objective reality [...] This leaves us with some low-hanging fruit for humanness. We can tell richly detailed stories grounded in our specific contexts and cultures: place names, sensual descriptions, local knowledge, and, well, the je ne sais quoi of being alive. Language models can decently mim...
Folksonomies: ai auto-generated content
  1  notes

There are additional tactics for differentiating ourselves from AIs, but the first two were the most interesting to me.

24 SEP 2021 by ideonexus

 The Ultraintelligent Machine will be Civilization's Last ...

The survival of man depends on the early construction of an ultraintelligent machine. Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind; see for example refs...
 1  1  notes
 
24 SEP 2021 by ideonexus

 Recursive Self-Improvement in Human Civilization

Let’s consider Arabic numerals as compared with Roman numerals. With a positional notation system, such as the one created by Arabic numerals, it’s easier to perform multiplication and division; if you’re competing in a multiplication contest, Arabic numerals provide you with an advantage. But I wouldn’t say that someone using Arabic numerals is smarter than someone using Roman numerals. By analogy, if you’re trying to tighten a bolt and use a wrench, you’ll do better than someone...
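A rough sketch of the numeral-system point, assuming schoolbook long multiplication as the positional algorithm and repeated addition as the Roman-numeral fallback (neither is meant as the historical method, just an illustration of why notation changes the cost of the task):

```python
# With place value, multiplication decomposes into digit products and
# shifts; Roman numerals offer no such decomposition, so the obvious
# route is repeated addition.

def positional_multiply(x: int, y: int) -> int:
    """Schoolbook long multiplication over base-10 digits."""
    x_digits = [int(d) for d in str(x)][::-1]  # least-significant first
    total = 0
    for place, d in enumerate(x_digits):
        total += d * y * 10 ** place           # one partial product per digit
    return total

ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"), (100, "C"),
         (90, "XC"), (50, "L"), (40, "XL"), (10, "X"), (9, "IX"),
         (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n: int) -> str:
    out = []
    for value, numeral in ROMAN:
        while n >= value:
            out.append(numeral)
            n -= value
    return "".join(out)

def roman_multiply(x: int, y: int) -> str:
    total = 0
    for _ in range(y):   # y-fold repeated addition: no shortcuts from notation
        total += x
    return to_roman(total)

print(positional_multiply(47, 23))   # 1081, via 2 partial products
print(roman_multiply(47, 23))        # MLXXXI, via 23 additions
```

The same person, armed with the positional tool, simply does less work per multiplication, which is the wrench analogy in miniature.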
 1  1  notes
 
10 MAR 2019 by ideonexus

 Chess Concept: Running Out of Book

One of the problems with playing against computers is how quickly and how often they change. Grandmasters are used to preparing very deeply for our opponents, researching all of their latest games and looking for weaknesses. Mostly this preparation focuses on openings, the established sequences of moves that start the game and have exotic names like the Sicilian Dragon and the Queen's Indian Defense. We prepare new ideas in these openings, and look for strong new moves ("novelties") with whic...
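The "book" here can be sketched as a simple lookup table keyed by the sequence of moves so far; the lines and prepared replies below are illustrative, not anyone's actual preparation. A player is in book while the table has an answer, and is "out of book" the moment it does not.

```python
# A minimal sketch of an opening book: prepared responses keyed by the
# move sequence played so far (moves in standard algebraic notation).
OPENING_BOOK = {
    (): "e4",
    ("e4", "c5"): "Nf3",                             # Sicilian Defense
    ("e4", "c5", "Nf3", "d6"): "d4",
    ("d4", "Nf6"): "c4",
    ("d4", "Nf6", "c4", "e6", "Nf3", "b6"): "g3",    # Queen's Indian Defense
}

def book_move(moves_so_far):
    """Return the prepared reply, or None once we've run out of book."""
    return OPENING_BOOK.get(tuple(moves_so_far))

print(book_move(["e4", "c5"]))        # Nf3: still in preparation
print(book_move(["e4", "g6", "d4"]))  # None: out of book, think for yourself
```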
  1  notes
 
02 MAR 2019 by ideonexus

 New Kind of Memory for AI

AI researchers have typically tried to get around the issues posed by Montezuma’s Revenge and Pitfall! by instructing reinforcement-learning algorithms to explore randomly at times, while adding rewards for exploration—what’s known as “intrinsic motivation.” But the Uber researchers believe this fails to capture an important aspect of human curiosity. “We hypothesize that a major weakness of current intrinsic motivation algorithms is detachment,” they write. “Wherein the a...
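For context, here is a minimal sketch of the count-based style of intrinsic motivation the passage critiques; the state encoding and decay schedule are assumptions for illustration, not the Uber team's method.

```python
from collections import defaultdict
import math

visit_counts = defaultdict(int)

def intrinsic_reward(state, scale=1.0):
    """Exploration bonus that decays as a state is revisited."""
    visit_counts[state] += 1
    return scale / math.sqrt(visit_counts[state])

# A novel state pays well; a familiar one barely registers. The
# "detachment" problem: nothing here remembers how to get *back* to a
# promising state once its bonus has decayed and the agent wanders off.
print(intrinsic_reward("room_1"))  # 1.0 on first visit
print(intrinsic_reward("room_1"))  # ~0.707 on the second
```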
  1  notes
 
22 NOV 2017 by ideonexus

 Top-Down Engineering of AI

The philosophers’ fascination with propositions was mirrored in good old-fashioned AI, the AI of John McCarthy, early Marvin Minsky, and Allen Newell, Herbert Simon, and Cliff Shaw. It was the idea that the way to make an intelligent agent was from the top down. You have a set of propositions in some proprietary formulation. It’s not going to be English—well, maybe LISP or something like that, where you define all the predicates and the operators. Then, you have this huge database that ...
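The top-down picture, a database of propositions plus inference over them, can be sketched as toy forward chaining. The predicates below are invented for illustration, and the contradiction it derives hints at the brittleness that dogged this approach: plain monotonic inference has no way to retract a conclusion for an exception.

```python
# A toy proposition database: (predicate, argument) pairs.
facts = {("bird", "tweety"), ("penguin", "opus")}

# Single-premise rules: if (premise, x) holds, derive (conclusion, x).
rules = [
    ("penguin", "bird"),
    ("bird", "can_fly"),
    ("penguin", "cannot_fly"),
]

def forward_chain(facts, rules):
    """Apply rules until no new propositions can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, arg in list(derived):
                if pred == premise and (conclusion, arg) not in derived:
                    derived.add((conclusion, arg))
                    changed = True
    return derived

kb = forward_chain(facts, rules)
print(("can_fly", "tweety") in kb)  # True
print(("can_fly", "opus") in kb)    # True, alongside cannot_fly: the
                                    # exception problem in miniature
```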
  1  notes