Future Psychohistory – News

News & Ruminations

Here are some newer happenings and considerations since the article was written in 2011. I also manage a small LinkedIn group that discusses Asimov and Psychohistory.

Neural Networks, Really?
In 2011, this was the technology most on my mind, so it’s not surprising that I named it specifically as the probable base of a model. Since then, I have increasingly leaned towards agent-based models. The problem is that even if I were to write the article anew, I would probably stick with neural networks, if only because the very idea of billions of intelligent, learning, interacting agents is still too far “out there”. I don’t like being laughed at, especially by software engineers I respect and admire, so computational psychohistory is fanciful enough for now. However, IBM recently announced a “neurosynaptic” chip with 5.4 billion transistors arranged as 1 million neurons and 256 million synapses, so who knows?
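For the record, here is the sort of thing I mean by an agent-based model, boiled down to a toy sketch in Haskell (every name and number below is invented for illustration):

```haskell
-- A toy agent-based step. Each agent's state is a single "opinion" in
-- [0,1]; each step it drifts toward the population mean, a crude
-- stand-in for observing its neighbours. The interesting behaviour
-- emerges from the updates, not from any formula.

type Agent = Double

-- One synchronous update across the whole population.
step :: Double -> [Agent] -> [Agent]
step rate agents = map nudge agents
  where
    mean    = sum agents / fromIntegral (length agents)
    nudge a = a + rate * (mean - a)

-- Run n steps and watch consensus emerge.
simulate :: Int -> Double -> [Agent] -> [Agent]
simulate n rate pop = iterate (step rate) pop !! n

main :: IO ()
main = print (simulate 50 0.1 [0.0, 0.2, 0.9, 1.0])
```

Now imagine the agents numbering in the billions, each with real internal state, and you have the scalability problem described below.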

Haskell? Really?
I’m aware of the criticism that it’s contradictory to say that the formula (mathematics) is being supplanted by the algorithm (computation) on the one hand, and yet recommend a very formal, mathematical computer language (Haskell) on the other. I will stick with Haskell, however, for three main reasons.

First, the theme of the article was that Asimov’s original mathematical psychohistory is perhaps more realizable as computational psychohistory, and therefore any computational language, even a formal one such as Haskell, is the essential big step. The rest is semantics.

Second, a project such as this redefines the concept of scalability in computer programs. The system being considered here would begin with billions of intelligent agents and quickly increase exponentially with learning. This enforces formality, because only machines could tackle such a chore. More human-friendly, pragmatic languages (scripting languages, for example) would quickly be left in the dust.

Third, the concept of ‘laziness’ (evaluating nothing until it is actually demanded) is crucial, and not often stressed in computer science.
“Laziness in doing stupid things can be a great virtue”
– Chang in Lost Horizon
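To make the point concrete, here is laziness in one bite, a minimal Haskell sketch:

```haskell
-- Laziness in Haskell: nothing is computed until it is demanded.
-- 'allSquares' is conceptually infinite, yet taking the first five
-- elements costs only five multiplications. The "stupid thing"
-- (computing squares nobody asked for) simply never happens.

allSquares :: [Integer]
allSquares = map (^ 2) [1 ..]

main :: IO ()
main = print (take 5 allSquares)   -- [1,4,9,16,25]
```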

“… as soon as it could have…”
The early appearance of life on Earth is well established. Recently, the Kepler mission discovered a planetary system more than twice the age of our solar system.

Big Data is not nearly big enough
Much has been written about big-data analysis for prediction. However, sifting and massaging massive databases is how one might find nuggets, not direction. What I proposed in the article was a simulation approaching nature itself in detail and complexity – perhaps a good name would be “reality data”.

GPU Programming
In the article, I used GPU as a euphemism for “F—ing big pile of transistors”. Programming GPUs is really a very restrictive, synchronous process. However, the first general-purpose GPU programming languages are beginning to emerge. Not surprisingly, they seem to use the functional programming paradigm, just as I predicted 🙂

One of the first I’ve seen is called Harlan (I hope that’s for Harlan Ellison).
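I can’t show Harlan itself here, but Haskell’s Accelerate library gives the flavour of functional GPU programming: you write an ordinary-looking map over an array and the backend decides how to spread it across thousands of cores. A minimal sketch (the Interpreter backend runs on the CPU for testing; swapping in a GPU backend leaves the code unchanged):

```haskell
import qualified Data.Array.Accelerate             as A
import           Data.Array.Accelerate             ( Z(..), (:.)(..) )
import qualified Data.Array.Accelerate.Interpreter as I

-- A data-parallel computation written as an ordinary map. The library,
-- not the programmer, decides how to schedule it across the cores.
squares :: A.Acc (A.Vector Float) -> A.Acc (A.Vector Float)
squares = A.map (\x -> x * x)

main :: IO ()
main = do
  let xs = A.fromList (Z :. 10) [1 .. 10] :: A.Vector Float
  print (I.run (squares (A.use xs)))
```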

Dawkins, Dennett, Hofstadter, Asimov, and me
(Yes, the name list is intended to be humorous. I am not worthy of washing the socks of any of the four names listed before mine.) Hofstadter and Dennett have been criticized for reifying Dawkins’ meme metaphor (that ideas are like genes). I too have been rebuked (well, laughed at actually) for doing the same thing with Asimov’s psychohistory. I’m glad I put the non-advocacy line in the introduction, and I take every opportunity to remind people that Asimov died long ago, I never met him, and he wouldn’t have had any reason to even ask me for the time of day.

Stan: A Language for Bayesian Inference
Obviously I’m a big fan of Bayesian methods. A new language promises to deliver such capability into the hands of ordinary developers.
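Stan deserves a proper write-up of its own, but the idea it automates fits in a few lines of Haskell: weigh every candidate parameter by prior times likelihood, then normalize. Stan replaces this brute-force grid with sophisticated samplers; the toy below just estimates a coin’s bias from 7 heads in 10 flips:

```haskell
import Data.List (maximumBy)
import Data.Ord  (comparing)

-- Toy Bayesian inference by exhaustive grid. Under a uniform prior,
-- the posterior is just the normalized likelihood over the grid.
posterior :: Int -> Int -> [(Double, Double)]
posterior heads flips = [ (p, weight p / total) | p <- grid ]
  where
    grid     = [0.01, 0.02 .. 0.99]                   -- candidate biases
    weight p = p ^ heads * (1 - p) ^ (flips - heads)  -- likelihood
    total    = sum (map weight grid)                  -- normalizing constant

main :: IO ()
main = print (maximumBy (comparing snd) (posterior 7 10))
-- prints the maximum a posteriori bias, ~0.7, with its grid probability
```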

Common Sense: A Tough Nut to Crack
I made the point that common sense is the crux of the matter. There’s just so much tacit and temporal knowledge involved in anything like human intelligence that machines have a long, long way to go. An example is this recent, very powerful AI that can beat many classic arcade games, but not one of the earliest and simplest:
DeepMind and Pac-Man

January 24, 2016: Marvin Minsky dies
“If you just have a single problem to solve, then fine, go ahead and use a neural network.”
– Marvin Minsky

Machine Learning Chips Begin to Arrive
Application Specific Integrated Circuits (ASICs) and Field Programmable Gate Arrays (FPGAs) may offer major improvements in machine learning speed and power. Also, the easing of the demand for near-infinite accuracy/reliability harkens back to that Lost Horizon line again. Remember that Kirk could beat Spock, even at chess.
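Here is the easing-of-accuracy point in miniature, a toy Haskell sketch (the scale factor and weights are invented): store values as 8-bit integers plus one shared scale, and the round-trip error is negligible next to the signal.

```haskell
-- 8-bit quantization: weights become small integers plus one shared
-- scale factor, so the hardware (ASIC/FPGA) can get away with far
-- fewer transistors per multiply-accumulate.

quantize :: Double -> Double -> Int
quantize scale x = max (-128) (min 127 (round (x / scale)))

dequantize :: Double -> Int -> Double
dequantize scale q = fromIntegral q * scale

main :: IO ()
main = do
  let scale     = 0.05                        -- invented for this example
      weights   = [0.12, -0.73, 1.05, 0.0, -2.4]
      roundTrip = map (dequantize scale . quantize scale) weights
  print roundTrip   -- close to the originals, at a fraction of the bits
```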

Expanding the Psychohistory Model Beyond Humanity
In reading Asimov’s original description of the Prime Radiant, perhaps the most compelling aspect is its zoom capability. Increasing or decreasing the resolution of events and timelines is, of course, done at the familiar human scale.

This is no longer a technical limitation. The speed of light is a large number, but it is not infinite. Space-time can be resolved to almost arbitrary precision using new technology (subject to Planck, Heisenberg, and sensors), and to fully arbitrary precision using computation. We learn a lot by scaling time (e.g., high-speed photography, or a single book that summarizes the Peloponnesian War). If psychohistory is expanded to model nature in general, not just humanity…
Just sayin’

The Hubris of Neuroscience
I contend that computation >> neuroscience. How far would our modern, much-vaunted methodology get in understanding a simple microprocessor? Not very far…
Could a neuroscientist understand a microprocessor?

DIY Psychohistory
Technology is outpacing the Ivory Tower. AI research is becoming democratized. Several important recent pulsar discoveries have been made by citizen scientists. Could computational psychohistory be far behind?

TPU Programming
I seem to have underestimated the rate of computational progress. Google has just made a 1,000-TPU cluster available on the cloud.

Computational Humanities
I have long struggled to assert the distinction between Digital Humanities and Computational Humanities. This is the first supercomputer review I’ve ever seen that explicitly mentions humanities research. Now we’re talking turkey. Or maybe Canada Goose.
The Cedar Supercomputer

“manipulate life at the genetic level”
This is the greatest technological/scientific achievement of our species so far. I even ranked it a bit higher than space travel, which is near and dear to my heart as a fan of Asimov’s fiction. If immune systems can be engineered at the individual level, it means the end of disease. It might even lead to immortality.

“computational cost of inverting large matrices”
Interestingly, even this task may soon be felled by the mighty GPU.
Cutlass for CUDA
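For a sense of scale: inverting an n-by-n matrix takes on the order of n³ operations, exactly the kind of dense linear algebra these GPU libraries attack. In Haskell the whole affair hides behind one call to the hmatrix library (a binding to LAPACK); a minimal sketch:

```haskell
import Prelude hiding ((<>))   -- hmatrix's (<>) is matrix multiplication
import Numeric.LinearAlgebra   -- from the hmatrix package

main :: IO ()
main = do
  let a = (3 >< 3) [ 2, 0, 0
                   , 0, 4, 0
                   , 1, 0, 8 ] :: Matrix Double
  print (inv a)        -- one call, O(n^3) work underneath
  print (a <> inv a)   -- recovers the identity, up to rounding
```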
I hurriedly ended the article with a thought about “the computational cost of inverting large matrices” (I had more urgent things to get to). Again, I thought I’d have at least a decade before having to revise it. I might have miscalculated.
job one for quantum computers: boost artificial intelligence
Thanks to Vincent Boucher at https://www.linkedin.com/in/montrealai/ for the link.

Quantum Computing for Social Innovation
Solving vast social problems will require vast computers.
Asimov’s own favourite among his stories was “The Last Question”, about the ultimate computer.

“humanity itself … may even be computable”
The indignation and even laughter this line has evoked have been considerable. It’s nice to see others having similar notions:
Art History through a Machine’s Eyes

Forget almost free transistors — how about almost free computers?
I made a big deal in 2011 about how transistors had become almost free. We should start thinking about the coming era of almost free computation in general.
IBM’s 10 cent computer

Immune Systems
My mention of immune systems on p.13 was not just a throwaway line. Complex, long-lived life forms like large mammals require an unbelievably advanced and intelligent immune system. Such systems are suitable for AI study and analysis all on their own. Machine Learning is a good start.

Gaussian Processes
I finished the article in a bit of a rush, with only a quick mention of Gaussian processes. This has more recently become a serious area of research, connecting modern AI methods like deep neural networks with Bayesian inference.
Thanks to Vincent Boucher at https://www.linkedin.com/in/montrealai/ for the link.
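The heart of a Gaussian process is disarmingly small: a kernel function that says how correlated any two inputs should be, evaluated over all pairs to form a covariance matrix. A minimal Haskell sketch (the length scale is a free hyperparameter, chosen arbitrarily here):

```haskell
-- Squared-exponential (RBF) kernel: nearby inputs are highly
-- correlated, distant ones nearly independent.
rbf :: Double -> Double -> Double -> Double
rbf lengthScale x x' =
  exp (negate ((x - x') ^ 2) / (2 * lengthScale ^ 2))

-- The covariance matrix over a set of inputs defines the GP prior.
covariance :: Double -> [Double] -> [[Double]]
covariance lengthScale xs =
  [ [ rbf lengthScale x x' | x' <- xs ] | x <- xs ]

main :: IO ()
main = mapM_ print (covariance 1.0 [0, 0.5, 1.0, 2.0])
```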

Nanosheet Transistors
Obviously I’m into transistor theory & history. I have one word for you: 5nm

Memetics
Of the many new ideas in sociology that have come into prominence since Foundation, perhaps ‘memetics’ is the most important. Richard Dawkins’ concept that memes (tokenized ideas) can take hold, spread, and influence analogously to genes is compelling. This phenomenon is even seen in nature.
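The spread itself is easy to caricature: treat adoption as contagion and out pops the classic S-curve of idea diffusion. A back-of-envelope Haskell sketch (the rate and population are invented):

```haskell
-- Meme spread as logistic contagion: each step, carriers transmit the
-- idea, and adoption saturates as the susceptible pool drains.

spread :: Double -> Double -> Double
spread total adopters = adopters + rate * adopters * (1 - adopters / total)
  where rate = 0.8   -- transmissions per carrier per step (invented)

main :: IO ()
main = mapM_ print (take 12 (iterate (spread 1000000) 10))
-- slow start, explosive middle, saturation: the S-curve
```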

Chazelle(s)
I mentioned Bernard Chazelle in the article, mainly as an authoritative voice for the algorithms-over-formulae heuristic. Deduction and induction are in a dance. Eliminating either one would stop the music. This is the secret sauce that’s missing from most psychohistory models, from overly formal mathematical ones to facile Big Data ones. Doctrine is the mind-killer. Sometimes, flying by ‘seat-of-the-pants’ is called for, as in the first moon landing, which relied heavily on Neil Armstrong’s quick thinking. This was beautifully portrayed in the movie “First Man” (2018). The director of that movie was award-winning Damien Chazelle, Bernard’s son. You can’t make this stuff up.

Kemeny
John Kemeny, the co-inventor of BASIC, was the closest thing to an actual Hari Seldon I’ve ever known about. He worked on the Manhattan Project under Feynman and von Neumann, and studied at Princeton under Einstein and Alonzo Church.

“The only reason psychology students don’t have to do more and harder mathematics than physics students is because the mathematicians haven’t yet discovered ways of dealing with problems as hard as those in psychology.”
– John Kemeny

A truly brilliant mathematician, who understood that computation could be a game changer. That was really my hope when I wrote the article. Asimov was on the right track. Oh, what might have been.

Schelling
“One thing a person cannot do, no matter how rigorous his analysis or heroic his imagination, is to draw up a list of things that would never occur to him.” – Thomas Schelling

In a nutshell, this is my computational psychohistory argument. Doctrine and deductive reasoning are quite useless when face to face with Asimov’s beast. Even a Hari Seldon can only catch brief glimpses.

Shor
30 minutes in Peter Shor’s mind gives one an inkling of the potential for quantum computing of probabilities.
Shor’s Algorithm
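For the curious, here is the classical skeleton of Shor’s algorithm in Haskell, with the quantum part (period finding) replaced by brute force. The quantum computer’s only job is to find the period r of a^x mod n exponentially faster; the rest is ordinary number theory. A minimal sketch, assuming a and n are coprime:

```haskell
-- Modular exponentiation by repeated squaring.
powMod :: Integer -> Integer -> Integer -> Integer
powMod _ 0 _ = 1
powMod b e m
  | even e    = h2
  | otherwise = b * h2 `mod` m
  where
    h  = powMod b (e `div` 2) m
    h2 = h * h `mod` m

-- Brute-force period finding: the step a quantum computer would do.
period :: Integer -> Integer -> Integer
period a n = head [ r | r <- [1 ..], powMod a r n == 1 ]

-- Turn an even period into a nontrivial factor, when luck allows.
factorWith :: Integer -> Integer -> Maybe Integer
factorWith a n
  | odd r            = Nothing
  | f == 1 || f == n = Nothing
  | otherwise        = Just f
  where
    r = period a n
    f = gcd (powMod a (r `div` 2) n - 1) n

main :: IO ()
main = print (factorWith 7 15)   -- Just 3: the period of 7 mod 15 is 4
```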

The SELDON I
After many years of rumination about computational psychohistory, I’ve finally started work on my own CPU architecture, because anything COTS (commercial off-the-shelf) is way too ungainly and power hungry. Right now it’s mainly emulation and one hardware ‘chip’. The goal is to make each core just powerful enough to run its own FORTH instance (one core per agent), but simple enough (low transistor count) to put 1+ million cores on a physical chip (probably silicon). That would allow 1 billion agents (a realistic minimum for a tiny psychohistorical model) in a modest machine about the size of the old DEC VAX.
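To give a crumb of the flavour: a FORTH-style core is little more than a data stack and a word list, which is why the per-core transistor count can be kept so low. A toy Haskell sketch of the emulation idea (purely illustrative; the real emulator is more involved):

```haskell
-- A FORTH-like core in miniature: an agent's program is a word list,
-- its state a data stack. Invented for this note.

data Op = Lit Int | Add | Mul | Dup

-- Execute one word against the data stack.
exec :: [Int] -> Op -> [Int]
exec s       (Lit n) = n : s
exec (a:b:s) Add     = a + b : s
exec (a:b:s) Mul     = a * b : s
exec (a:s)   Dup     = a : a : s
exec s       _       = s           -- toy semantics: ignore underflow

-- Running an agent is a fold of its word list over an empty stack.
run :: [Op] -> [Int]
run = foldl exec []

main :: IO ()
main = print (run [Lit 2, Lit 3, Add, Dup, Mul])   -- [25]
```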