27 JUL 2018 by ideonexus

 Shannon and Thorp Hack the Roulette Wheel

It was in this tinkerer’s laboratory that they set out to understand how roulette could be gamed, ordering “a regulation roulette wheel from Reno for $1,500,” a strobe light, and a clock whose hand revolved once per second. Thorp was given inside access to Shannon in all his tinkering glory: Gadgets . . . were everywhere. He had a mechanical coin tosser which could be set to flip the coin through a set number of revolutions, producing a head or tail according to the setting. As a joke...
Folksonomies: play hacking gambling
  1  notes
 
27 JUL 2018 by ideonexus

 Shannon's Learning Mouse Theseus

Theseus was propelled by a pair of magnets, one embedded in its hollow core, and one moving freely beneath the maze. The mouse would begin its course, bump into a wall, sense that it had hit an obstacle with its “whiskers,” activate the right relay to attempt a new path, and then repeat the process until it hit its goal, a metallic piece of cheese. The relays stored the directions of the right path in “memory”: once the mouse had successfully navigated the maze by trial and error, it ...
  1  notes
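A minimal Python sketch of the trial-and-error scheme the passage describes, offered purely as an illustration: a simulated "mouse" explores a small grid maze, backs out of dead ends, and records the winning direction for each cell in a relay-like memory, so a second run can follow the learned path without searching. The maze layout, function names, and search order are my own assumptions, not Shannon's actual relay design.

# Toy simulation of Theseus-style maze learning (illustrative only; the
# maze, names, and logic here are assumptions, not Shannon's relay circuitry).

MAZE = [            # '#' = wall, '.' = open, 'S' = start, 'C' = the "cheese"
    "#####",
    "#S..#",
    "##.##",
    "#.C.#",
    "#####",
]
DIRS = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

def learn_path(start):
    """First run: trial and error. Store the direction that worked for each
    cell, the way the relays stored the directions of the right path."""
    memory = {}                      # cell -> direction that led to the goal
    visited = set()

    def explore(cell):
        if MAZE[cell[0]][cell[1]] == "C":
            return True
        visited.add(cell)
        for name, (dr, dc) in DIRS.items():
            nxt = (cell[0] + dr, cell[1] + dc)
            if MAZE[nxt[0]][nxt[1]] != "#" and nxt not in visited:
                if explore(nxt):     # this direction eventually reaches the cheese
                    memory[cell] = name
                    return True
        return False                 # dead end: back up and attempt a new path

    explore(start)
    return memory

def replay(start, memory):
    """Second run: no searching, just follow the stored relay settings."""
    cell, path = start, []
    while MAZE[cell[0]][cell[1]] != "C":
        direction = memory[cell]
        path.append(direction)
        dr, dc = DIRS[direction]
        cell = (cell[0] + dr, cell[1] + dc)
    return path

if __name__ == "__main__":
    start = (1, 1)                   # position of 'S' in the maze above
    print("Learned path:", replay(start, learn_path(start)))

Running it prints the remembered sequence of moves, mirroring how Theseus could re-run a solved maze directly from the state of its relays rather than by bumping into walls again.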
 
27 JUL 2018 by ideonexus

 Redundancy of English language is a Goldilocks zone for C...

In Shannon’s terms, the feature of messages that makes code-cracking possible is redundancy. A historian of cryptography, David Kahn, explained it like this: “Roughly, redundancy means that more symbols are transmitted in a message than are actually needed to bear the information.” Information resolves our uncertainty; redundancy is every part of a message that tells us nothing new. Whenever we can guess what comes next, we’re in the presence of redundancy. Letters can be redundant: be...
  1  notes
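As a rough illustration of the idea, here is a Python sketch that estimates redundancy from a text sample. It uses only single-letter frequencies, so it understates the redundancy of real English, which also hides in spelling patterns, grammar, and word choice; the sample string, function names, and the relative-redundancy formula 1 - H/Hmax (with Hmax = log2 26) are assumptions made for the sketch, not taken from the excerpt.

# Crude estimate of redundancy from single-letter frequencies only.
# A sketch, not a faithful reproduction of Shannon's method, which also
# drew on digram, trigram, and word statistics.

import math
from collections import Counter

def letter_entropy(text):
    """Shannon entropy in bits per letter of the single-letter distribution."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def redundancy(text, alphabet_size=26):
    """Fraction of each symbol that tells us nothing new:
    1 - H(observed) / H(max), where H(max) = log2(alphabet size)."""
    return 1 - letter_entropy(text) / math.log2(alphabet_size)

if __name__ == "__main__":
    sample = (
        "information resolves our uncertainty and redundancy is every "
        "part of a message that tells us nothing new"
    )
    print(f"entropy: {letter_entropy(sample):.2f} bits per letter")
    print(f"redundancy from letter frequencies alone: {redundancy(sample):.0%}")

Even this crude measure shows that a noticeable fraction of each symbol "tells us nothing new"; Shannon's own estimates, which used longer-range statistics, put the redundancy of ordinary English at roughly half or more.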
 
22 NOV 2017 by ideonexus

 Removing Propositions in Defining Thought

Having turned my back on propositions, I thought, what am I going to do about this? The area where it really comes up is when you start looking at the contents of consciousness, which is my number one topic. I like to quote Maynard Keynes on this. He was once asked, “Do you think in words or pictures?” to which he responded, “I think in thoughts.” It was a wonderful answer, but also wonderfully uninformative. What the hell’s a thought then? How does it carry information? Is it like ...
  1  notes
 
01 JAN 2010 by ideonexus

 Fundamental Names in Computer Science

Consider some fundamental names: Turing (computation theory and programmable automata), von Neumann (computer architecture), Shannon (information theory), Knuth, Hoare, Dijkstra, and Wirth (programming theory and algorithmics), Feigenbaum and McCarthy (artificial intelligence), Codd (relational model of databases), Chen (entity-relationship model), Lamport (distributed systems), Zadeh (fuzzy logic), Meyer (object-oriented programming), Gamma (design patterns), Cerf (Internet), Berners-Lee (WW...
  1  notes

The author uses this list as proof that computer science can be an inductive discipline, but a list of successes is useless for that argument. All of these "fundamental names" count as fundamental precisely because their theories were proven in the real world, which makes the list selective. We would need a list of all theorists, successful and not, to gauge how well induction works versus empiricism.

It does make a good list of big names and their contributions.