The Slide Rule

Skala slide rule

A slide rule is a simple, mechanical, analog computer. It has one essential moving part – the central slider – plus, usually, a movable cursor for reading results. Etched or printed scales on one or both sides cover logarithms, exponents, roots, trigonometry, and sometimes more, and the slide rule uses these scales to perform calculations readily. The basic principle is to use logarithms to replace multiplication with addition, advancing calculation from simple counting to exploiting the relationships between numbers.
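The logarithm trick is easy to sketch in code. This Python function is purely illustrative (the name is mine); it does numerically what the C and D scales do mechanically:

```python
import math

def slide_rule_multiply(a: float, b: float) -> float:
    """Multiply the way a slide rule does: add the logarithms of the
    two factors, then take the antilog of the sum."""
    return 10 ** (math.log10(a) + math.log10(b))

# Sliding one scale by a distance proportional to log10(2) and reading
# under 3 on the other scale adds the two logarithms mechanically.
print(slide_rule_multiply(2, 3))
```

On a physical rule the addition happens as a movement: the slider offsets one scale by a length proportional to log a, and the answer is read where log a + log b falls.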

Gilson Atlas Circular Slide Rule

Most slide rules are linear ‘sticks’ that resemble everyday rulers. There are also circular and even cylindrical slide rules that effectively extend the linear scales, fitting equivalent calculating power into a more compact device. The movable bar in the middle (the slider) carries scales of its own. Scales are slid into position to set up a particular calculation, and the result is read off with a precise cursor. The resolution of the scales (and the user’s eyesight) determines the precision of calculations. It is reasonable to get at least several significant digits.

Vintage Texas Instruments Slide Rule Electronic Pocket Calculator, Model SR-50A, Red LED Display, Rechargeable Battery Pack, Made In USA, Circa 1975

Today, electronic calculators have almost completely replaced slide rules. In fact, early pocket calculators were often called ‘electronic slide rules’.

Even modern AI had to start with small beginnings: The picoXpert Story

The Abacus

Huge Abacus at Guohua Abacus Museum

Counting is one of the most powerful human capabilities. In ancient times, common objects such as pebbles were used as abstract symbols to represent possessions to be tallied and traded. Simple arithmetic soon followed, greatly augmenting the ability of merchants and planners to manipulate large inventories and operations. Manipulating pebbles on lined or grooved ‘counting tables’ was improved upon by a more robust and easy-to-use device – the abacus. The name ‘abacus’ comes from Latin, which in turn borrowed the Greek word for ‘table’ or ‘tablet’. The abacus also had the advantage of being portable and usable cradled in one arm, a harbinger of the pocket calculator.

Variants included Roman, Chinese, Japanese, Russian, etc. Most commonly, they worked in base 10, with upper beads representing fives and lower beads representing ones. Vertical rods represented powers of 10, increasing right to left. Manual operation proceeded from left to right, with knowing the complement of a number being the only tricky part (e.g. the complement of 7 is 10 − 7 = 3). The four basic operations (+ − × ÷) were fairly easy-to-learn mechanical procedures. One did not have to be formally educated to learn to use an abacus, as opposed to pencil-and-paper systems. This was computation for the masses.
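The complement move can be sketched for a single column. The function names are mine, and a real abacus of course does this with bead movements, not code:

```python
def complement(d: int, base: int = 10) -> int:
    """Ten's complement used in abacus addition, e.g. complement(7) == 3."""
    return base - d

def abacus_add_digit(column: int, d: int) -> tuple[int, int]:
    """Add digit d to one abacus column (holding 0-9).
    Returns (new_column_value, carry). When the column would overflow,
    the abacus move is: subtract the complement of d, carry one rod left."""
    if column + d < 10:
        return column + d, 0
    return column - complement(d), 1  # same as (column + d) - 10

# 8 + 7: subtract complement(7) = 3 from the column, carry 1 left.
print(abacus_add_digit(8, 7))
```

The carry then propagates to the next rod to the left, which is why operation proceeds digit by digit across the frame.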

The abacus is composed of beads and rods, grouped together into several ‘stacks’. The stack is the central object in concatenative programming languages, such as Forth. These are very well-suited for teaching and learning computational thinking.

The abacus was one of the most successful inventions in history. In fact, it’s still in use today, mostly in small Asian shops. It is a universal and aesthetic symbol of our ancient love of counting.


Building a solid foundation in the early years of a child’s life will not only help him or her reach their full potential but will also result in better societies as a whole.
– Novak Djokovic

I’m not a starry-eyed Isaac Asimov fanboy. He had his warts. But his life mattered in the big scheme of things. I like Asimovians. To me, an Asimovian is a skeptical optimist with deep scientific, historical, and sociological erudition. Some are novices, some are students, some are teachers, and some are leaders. Others are Forrest Gump types who just stumbled into a few of the right rooms, or who read “Foundation” because it was only $1 in their rural school’s bookmobile 🙂

I clearly remember the day. It was blue-sky late spring in rural Ontario, just before the end of the school year. I stood near the front of the bookmobile, on its teetering floor, with “Foundation” in one hand, and “Foundation and Empire”, “Second Foundation”, and $2 (birthday money) in the other. Each had a sticker price of $1. “Foundation” was the thinnest, which seemed unfair, as I had no choice on #1. The arithmetic was heart-breaking. As I struggled to choose between #2 and #3, their covers alternately calling to me, the book lady said, “Those are buy two get one free.” I quickly plunked down my $2 (tax was either included or exempt, I don’t remember which) and ran from the bookmobile with the trilogy like a thief in the night.

I first read those books sitting in a tree on our farm. Tales of a course for mankind stretching into the almost unimaginably distant future. A future where science, rationalism, and humanism hold dominion. The galaxy in decline, yet enlightenment rekindled. A tiny spark of hope that grows into a vast, new, near utopia.

I read them several times over my youth and young adulthood. Hidden behind textbooks, in waiting rooms, while camping, wherever. Like Psychohistory itself, it wasn’t just a story, it was a guide, a ‘Plan’. Several attempts have been made at bringing the Foundation Trilogy to the screen, both large and small. They failed simply because it’s too big a story to be captured on film. It only truly lives in the imagination of the reader. Perhaps someone with enough time, money, and vision will succeed some day. I hope they don’t damage it.

Asimov died in 1992, the year I lived in Vancouver. He was on my mind as I had my first inkling of Geopense. Around 2000, still inspired by Asimov, I created an AI company with one main product. I avoided non-Asimov sequels to the story, and was slightly disappointed even with those penned by Asimov himself. They seemed a bit rushed and contrived. The only later book I liked and would recommend was the final one in the series, Foundation’s Triumph by David Brin. It had Hari Seldon as the central character amidst an epic search for that elusive utopia.

Over the years, I’ve often wondered if the book lady had lied about the price. I like to think she had. Asimovians are a resourceful lot. That’s one of the many reasons why they’ll win in the end, and a brighter future for humanity will dawn.

Adam Smith Loves Gridcoin

I’m a fan of Paul Krugman. This is not due to his economic or political views (although I always enjoyed it whenever he and George Will would square off on Sunday morning TV), or even to his 2008 Nobel Prize in economics. Rather, it is because Krugman owes, like I do, much of his life outlook to one book: Isaac Asimov’s Foundation. I’ve written about this influence; The Asimovian Epoch is a good starting point.

Five years ago, Krugman wrote an opinion piece in the New York Times with the title, Adam Smith Hates Bitcoin. Krugman argued that Smith’s “dead stock” (like gold and silver) was raising its ugly head again in modern times in the form of “virtual currency”.

Bitcoin, like other virtual currencies, is built on blockchain. This architecture enables a secure, distributed ledger to be stored as a linked list (‘chain’) of groups of records (‘blocks’) and used simultaneously on many computers, which is where most of the security comes from. It’s much more difficult to ‘hack’ thousands of computers in order to falsify a record than it is to do so on a single, centralized computer.
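A minimal sketch of the ‘chain’ idea, assuming nothing about any real blockchain’s format (real chains add timestamps, proof-of-work nonces, Merkle trees, and a peer-to-peer network on top of this):

```python
import hashlib
import json

def make_block(records: list, prev_hash: str) -> dict:
    """A toy block: a group of records plus the hash of the previous block."""
    body = {"records": records, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps({"records": records, "prev_hash": prev_hash},
                   sort_keys=True).encode()).hexdigest()
    return body

def verify_chain(chain: list) -> bool:
    """Recompute each block's hash and check the links; any tampering
    with an earlier block breaks every link after it."""
    for i, block in enumerate(chain):
        h = hashlib.sha256(
            json.dumps({"records": block["records"],
                        "prev_hash": block["prev_hash"]},
                       sort_keys=True).encode()).hexdigest()
        if h != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
chain = [genesis, make_block(["bob pays carol 2"], prev_hash=genesis["hash"])]
print(verify_chain(chain))                    # the intact chain verifies
chain[0]["records"][0] = "alice pays bob 500"
print(verify_chain(chain))                    # tampering is detected
```

Because every participant can re-run this verification independently, falsifying a record means falsifying it on most of the network’s computers at once, which is where the security comes from.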

Modifications to the blockchain are implemented as transactions. In the case of currencies, these transactions perform tasks such as sending and receiving ‘coins’. The low-level implementation of these transactions is usually done with a scripting language, and this code often looks an awful lot like Forth, one of the oldest and coolest languages in the history of computing. I’ve written a lot about Forth; Forth: A Syntonic Language is a good starting point.
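The Forth-like flavour comes from the stack: numbers are pushed, and words consume and produce stack values. Here is a toy evaluator in that style (the word names are illustrative, not actual opcodes, though Bitcoin Script’s OP_ADD and OP_EQUAL behave much like ADD and EQUAL below):

```python
def run(script: str, stack=None) -> list:
    """Evaluate a tiny Forth-style program: numeric tokens are pushed
    onto the stack, known words operate on the stack."""
    stack = stack if stack is not None else []
    words = {
        "ADD":   lambda s: s.append(s.pop() + s.pop()),
        "DUP":   lambda s: s.append(s[-1]),
        "EQUAL": lambda s: s.append(s.pop() == s.pop()),
    }
    for token in script.split():
        if token in words:
            words[token](stack)
        else:
            stack.append(int(token))
    return stack

# Push 2 and 3, add them, push 5, compare: the script 'succeeds'
# by leaving a truthy value on top of the stack.
print(run("2 3 ADD 5 EQUAL"))
```

This postfix, stack-threaded style is exactly how Forth reads, which is why transaction scripts feel so familiar to Forth programmers.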

The big problem with most virtual currencies is the tremendous amount of computational resources they consume, both to maintain their blockchains and to mint (‘mine’) new currency. This is a modern form of “dead stock”. Nothing truly productive comes from all this computation, only more currency. It would be better if money were left to serve its symbolic function, and the production of people, computers, and energy went into moving the world forward.

Enter Gridcoin.

Gridcoin is an open source cryptocurrency that aims to harness this production to assist in scientific research. It was officially launched a few months after Krugman’s opinion piece was printed in the NY Times. Instead of Bitcoin’s “proof of work” mechanism (the basis of security and ‘mining’), Gridcoin implements two newer concepts, “proof of stake” and “proof of research”.

Proof of stake (POS) is an extremely efficient mechanism for securing the network and generating new ‘coins’ based on a simple interest rate dividend. Proof of research (POR) turns all that computing power toward scientific calculations via projects hosted on the Berkeley Open Infrastructure for Network Computing (BOINC) framework. These projects range from biology to cosmology, and many interesting fields in between. This effort is closely related to computational Citizen Science, yet another subject near and dear to my heart. Using POS and POR, participants can contribute to scientific research while earning some ‘coins’ to offset at least some of their expenses for equipment and electricity. They can also learn about blockchain and virtual currencies without a scary dive into the deep end of cryptocurrencies.

Paul Krugman, Isaac Asimov, and yes, Adam Smith, would approve.

David Brin, author of Foundation’s Triumph (1999), had some interesting thoughts on Adam Smith in the context of psychohistory here.

There are other similar projects.

One final thought. Adam Smith was a key personage of the Scottish Enlightenment. Another giant of that ‘clan’ was the American founding father Benjamin Franklin (who was a friend of Smith’s). Franklin was almost as great a polymath as Asimov. When I founded the Geopense computational citizen science team years ago, and ever since, this quote has been on the main site page:

Tell me and I forget.
Teach me and I remember.
Involve me and I learn.
– Benjamin Franklin

The picoXpert Story

picoXpert was one of the first (if not THE first) handheld artificial intelligence (AI) tools ever. It provided for the capture of human expert knowledge and later access to that knowledge by general users. It was a simplistic, yet portable implementation of an Expert System Shell. Here is the brief story of how it came to be.

When I was about 10, my grandfather (an accomplished machinist in his day) gave me his slide rule. It was a professional grade, handheld device that quickly performed basic calculations using several etched numeric scales with a central slider. I was immediately captivated by its near-magical power.

In high school, I received an early 4-function pocket calculator as a gift. Such devices were often called ‘electronic slide rules’. It was heavy, slow, and sucked battery power voraciously. I spent many long hours mesmerized by its operation. I scraped my pennies together to try to keep up with ever newer and more capable calculators, finally obtaining an early programmable model in 1976. Handheld machines that ‘think’ were now my obsession.

I read and watched many science fiction stories, and the ones that most fired my imagination were those that involved some sort of portable computation device.

By 1980, I was building and programming personal computers. These were assembled on an open board, using either soldering or wire wrap to surround an 8-bit microprocessor with support components. I always sought those chips with orthogonality in memory and register architecture. They offered the most promise for the unfettered fields on which contemporary AI languages roamed. I liked the COSMAC 1802 for this reason. It had 5,000 transistors; modern processors have several billion. The biggest, baddest, orthogonal processor was the 16- or 32-bit Motorola 68000, but it was too new and expensive, so I used its little brother, the 6809, which was an 8-bit chip that looked similar to a 68000 to the programmer.

I spent much of the 1980s canoeing in Muskoka and Northern Ontario, with a Tandy Model 100 notebook, a primitive solar charger, and paperback editions of Asimov’s “Foundation” trilogy onboard (I read them five times). Foundations.

Here are the original brochure, an Expert Systems Primer, and a few slides.

It met with initial enthusiasm from a few, such as this review:

Handheld Computing Mobility
Jan/Feb 2003 p. 51
picoXpert Problem-solving in the palm of your hand
by David Haskin

However, the time for handheld AI had not yet come. After a couple of years of trying to penetrate the market, I moved on to other endeavours. These included more advanced AI such as Neural Networks and Agent-Based Models. In 2011, I wrote Future Psychohistory to explore Asimov’s greatest idea in the context of modern computation.

Picodoc Corporation still exists, although it has been dormant for many years. It’s encouraging to see the current explosion of interest in AI, especially the burgeoning Canadian AI scene. For those like me, who have been working away in near anonymity for decades, it’s a time of great excitement and hope. Today, I’m mainly into computational citizen science, and advanced technologies, such as blockchain, that might be applied to it.

The Asimovian Epoch

Centuries past are like the long night, a mysterious dream world. The modern, networked Earth is like daybreak, a world-wide awakening.

The fifty years from 1945 to 1995 could be considered a sort of ‘twilight zone’ between the two. The bulk of thinking and research done during this period is too new to be of any great historical or even deep nostalgic interest. Yet it is too old to be treated as current knowledge in the Internet age, where freshness and novelty are valued. In addition, we are still inside several of the epic transformations that began in this time. It’s difficult to appreciate and comprehend a revolution when you live within it. Our blind spot for this period is a great tragedy.

This period could be called the ‘Asimovian Epoch’. Here’s why.

Isaac Asimov (1920-1992) was perhaps the last great polymath. He was a biochemistry professor and author, publishing (writing or editing) over 500 books (covering 90% of the Dewey Decimal Classification system) and many thousands of shorter works. That’s an average of more than one book per month over his long career – one of the greatest outbursts of creativity in history. His favorite topics included history, physics, comedy, and English literature. He was arguably the greatest science fiction writer of all time. He, Robert Heinlein, and Arthur C. Clarke are known as ‘The Big Three’ of the Golden Age of science fiction.

Asimov was a nexus of 20th century thought.

He was strongly influenced by Greek and Roman classics, Shakespeare, Gilbert & Sullivan, and also by contemporaries such as Kurt Vonnegut, Heinlein, Clarke, Harlan Ellison, Carl Sagan, and Marvin Minsky. He in turn influenced the likes of Paul Krugman (Nobel Prize in Economics), Elon Musk (SpaceX, Tesla), most science popularizers (happily now becoming too numerous to list), virtually all science fiction writers, and of course millions of readers.

Optimism came early to Asimov, perhaps because he spent so much time as a child in his father’s candy store. Many were initially drawn to science as teenagers by his optimistic and compelling vision of science and technology. Given the advances that 20th century science brought, such as space exploration, computation, automation, microbiology, etc., our debt to him on this account alone is inestimable. His masterpiece, Foundation, has deeply influenced many others, including me: Future Psychohistory

It has become fashionable for some to be suspicious of science and even liberal education. This was a trend that Asimov fought tirelessly against all his life. He warned against the false notion that democracy means that ‘my ignorance is just as good as your knowledge’.

He went even further, exploring the frontiers of individuality and self-education:

Self-education is, I firmly believe, the only kind of education there is.

Some even credit Asimov with predicting the Internet, especially its use for personalized education. He envisioned a world in which people could pursue their own interests, unfettered by the standard classroom, open to new ideas and serendipity.

Ironically, the very technology that he hoped would spur individuality and diversity has the opposite effect at times. One example is digitization and archiving of published material. There was a time when books were expensive, highly personal possessions that stayed with the owner for a lifetime, and were even sometimes handed down across generations. People would store letters, pressed flowers, and other snippets of life between their pages, and write personal margin notes. Books were not just repositories of language, they were time capsules and valuable historical accounts. However, the best book to scan is a pristine, un-personalized volume. Once it is scanned, future researchers increasingly rely on that one version exclusively. Individuality and diversity are squelched.

A pervasive basic understanding of science was Asimov’s constant goal. He often took the opportunity to describe science not as a storehouse of absolute truth, but rather as a mechanism for exploring nature. Science is not a dogmatic belief system. Learning happens when one asks questions of the universe and then listens attentively and objectively with the ears of science. He wrote eloquently on scientific models, logic, and inference. An example is this essay.

Of course Asimov is best known for his robot stories, and his ‘Three Laws of Robotics’. In 2010, the U.S. Congress established National Robotics Week in his honor. It’s possible that he saw artificial intelligence as a way for intellect to exceed the bounds of human bias, subjectivity, and mortality.

I never met Isaac Asimov, but I have spoken to several people who did have that pleasure. Each one related the profound impression he made on them. He died in 1992, but his books continued to be published posthumously. By the end of 1994, the World Wide Web Consortium had been founded at MIT. The dramatic rise of the Web marks the end of the Asimovian epoch.

Academia can sometimes degenerate into a vicious paper chase, with Ego in the front seat, Science in the back seat, and Humanity locked in the trunk. Asimov famously said,

The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny…’

Sadly, with hyper-specialization and the loss of interdisciplinary thinkers like Isaac Asimov, the catch-phrase of science has increasingly become: “Oooh! Shiny!”

Ancient Metaphors

New nations often use comparisons and metaphors to build a more familiar and solid framework or ‘story’ using the past, especially to evoke visions of ‘glory days’. One example is the symbolism of early America harkening back to ancient Greece and Rome. The founding fathers often learned Greek, Latin, and much of the history of those cultures in their formative years. We can still see that influence today in the architecture of government buildings in capital cities.

It should come as no surprise then, that the same thing happens with computers and the insatiable need for new descriptors and naming conventions. Much of this framework has sprouted up within the last few decades. That doesn’t leave enough time for growing familiarity organically. Past language and literature provide a rich crop ready for immediate harvesting.

Here are two examples:


This epic poem, “Beowulf”, is one of the earliest known pieces of vernacular English literature. The story is about a brave hero who slays monsters in the north of Europe sometime around the 6th century. It is a tale of great strength, difficulties overcome, and distances traveled. The fame of this poem (especially the Old English version) has grown steadily over the last two centuries. It has even been reproduced in literature, popular media, and the cinema. This provides a ready-made metaphor that can be used by something new.

In 1994, Thomas Sterling and Donald Becker built a computer at a NASA contractor. It was the prototype for what would become the “Beowulf Cluster” architecture. This type of machine uses modest, commodity, even oddly matched PCs connected together to form a single parallel computer. It has acquired a rather stoic and renegade reputation in academia, where it has been used to gain entrance to the exclusive supercomputer club ‘on-the-cheap’. The name is said to have been inspired by the line in the epic poem: “thirty men’s heft of grasp in the gripe of his hand”.


Another old epic poem is Dante Alighieri’s “Divine Comedy”. It is an allegory for the soul’s travels and a map of 14th century spirituality. It is as well a multi-dimensional exploration of sensitive/rational human nature and psychology. Again, this work is one of the most studied pieces of literature in history. It is another ready-made treasure trove of meaning and metaphor.

The first of its three parts is “Inferno” (hell). It is graphic and horrifying. It is also strangely open-minded and thoughtful in that figures from classical literature are woven into a Christian narrative. These include Socrates, Plato, and indeed Virgil, Dante’s guide. Perhaps the most apt adjective for “Inferno” is: deep.

“Inferno” was recently used as a template for a “Software Inferno” describing “sinners against software” and their torments (1). By using this template, the author begins with an extensive toolbox of metaphors. Of course, neither “Software Inferno” nor this short blog post should be used as an authoritative representation of Dante. There are plenty of good academic sources for that.

Instead of Virgil, Bell uses the Countess of Lovelace as our guide. More commonly known as Ada Lovelace, she was a mathematician and the daughter of Lord Byron, and is widely considered the world’s first computer programmer for her work with Charles Babbage. She leads our hapless traveler down through the Inferno, towards the Earth’s core, along a roughly cone-shaped set of circular slices:

  • ANTE-INFERNO developers who couldn’t decide whether to use their skills for good or evil, denied entry to both Heaven and Hell, doomed to eternal obscurity
  • LIMBO developers born too early in history to know of proper software engineering
  • LUST developers who chose power and fame over commitment and responsibility, they gave their time to their computers instead of their family, they sought the spotlight, now they are blinded by its glare for eternity
  • GLUTTONY developers with human heads and pig bodies, IPO and stock bubbles had led them into wasteful, gross over-consumption
  • GREED IT types who gathered every crumb of profit for themselves, neglecting to share anything with the people who got them there, condemned to a cacophony of falling pennies forever
  • ANGER realm of vitriol and negativity, with inhabitants broiled by burning paper, the only water available had been poisoned just like their own workplaces
  • HERESY developers who knew of good software engineering, yet refused to use it, now chaotically and aimlessly going their own way for all time
  • VIOLENCE developers who had inflicted malware, viruses, and phishing on the public, this pestilence now torments them in turn, and even sleep offers no escape
  • FRAUD techno-hucksters, advocates, evangelists who had foisted crappy software, standards, and models upon desperate users who longed for guidance, they now dwelt in sewage for all time
  • TREACHERY those who had enabled and encouraged this fraud, doomed to keep shoveling this detritus forever

Although intended for an audience of computer developers, the macabre humour is widely understood thanks to the template provided by Dante.

Here are a few other names for computer technology borrowed from the past:
Apollo, Athena, Delphi, Hercules, Hermes, Homer, Hydra, Jason, Janus, Merlin, Midas, Oracle, Odyssey, Phoenix, Phoebe, Pegasus, Sisyphus, Tantalus, Troy, Ulysses, Valhalla, Valkyrie, Zeus
… and of course a whole host of planets, moons, stars, and constellations.

(1) Bell, A.E. (2014). The Software Inferno. Communications of the ACM, 57(1), 48-53.

Scientific Models

I get nervous when people enthusiastically advocate models. This happens mostly, though not exclusively, in sociopolitical debates such as climate change. A scientific model is not a belief system. All those who claim that science is on their side might better consider whether they are on science’s side. Most scientific models, especially statistical ones, are ways to measure and predict the behaviour of a bounded system (a subset of nature). Such a model does not claim to be the final, fundamental, and ultimate description of nature itself. It’s important to not confuse the map with the territory, since rabbit holes rarely appear on maps.

In fact, even a major shift in the underlying paradigm may only correspond to a small change in a model’s accuracy. A very good essay on this was penned by Isaac Asimov. In short, he argued that models may be updated with more subtlety and scope, but they are not summarily discarded as simply and absolutely wrong. Long after we replaced force with space-time geometry in our gravitational models, we still design elevators with Isaac Newton looking over our shoulder. We are all very much still ‘Flat Earthers’ when we plan a walk (unless that walk will take us across a time zone boundary or we sneak a peek at our GPS location along the way).

In this view, science is models and observations only. A theory is only a specific model, not a deep insight into the true nature of reality. Nature simply is. That’s it. Our beliefs about its fundamental structure are illusions. The meaning of observations is model dependent, but the observations themselves are not. Antares is in the same position in the sky, whether it is a red supergiant or “the heart of the scorpion”.

It should be noted that not everyone agrees with this view. For example, Thomas Kuhn argued that newer models are not just ‘better approximations’ (straw man alert) and that it is a serious mistake to teach that they are. He argued that consensus of the scientific community is also a component of scientific truth. I don’t happen to share that view, but then as I consider consensus to be irrelevant in science, my opinion means naught anyway. Of course paradigm shifts and revolutions do happen, but they are artifacts of history and psychology, not objective science. It’s a fallacy to define science as being only the set of all scientists, past and present. Again, there’s that map and territory confusion.

Raw predictive power is not the only metric for judging a model. Probably the most universal heuristic is agreement with observation. Many a compelling model has been ruined by the facts. Mathematical beauty, symmetry, and ‘feel’ are not important. Elegance and simplicity are helpful, perhaps even necessary, but not sufficient conditions for a good model. Enthusiasm for a model is a strong cause for suspicion. Humility before nature is a much better heuristic. Darwin and Planck had to be dragged along reluctantly with their own theories. Models can incorporate ‘tricks’ that are obviously not part of reality. For example, adding an equal ‘negative mass’ allows one to easily calculate the centre of mass of a plate with a hole in it. Perhaps the best thing a model can do is not just to answer questions, but to ask even better ones. Heliocentricity and the germ theory of disease launched tremendously fruitful new science.
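The negative-mass trick is easy to check numerically. Here is a sketch for a uniform square plate with a circular hole (the dimensions are mine, chosen purely for illustration):

```python
import math

# Uniform square plate, side 4, centred at the origin (area 16),
# with a circular hole of radius 1 centred at (1, 0).
# Trick: treat the hole as a superimposed piece of *negative* mass,
# so the composite is just a weighted average of two centroids.
pieces = [
    (16.0, (0.0, 0.0)),        # full plate: (area, centroid)
    (-math.pi, (1.0, 0.0)),    # hole: negative area, centroid
]

total = sum(a for a, _ in pieces)
x_cm = sum(a * c[0] for a, c in pieces) / total
y_cm = sum(a * c[1] for a, c in pieces) / total
print(x_cm, y_cm)  # the centre of mass shifts away from the hole
```

Nothing in nature has negative mass, yet the bookkeeping gives the right answer: the model is a calculating device, not a claim about reality, which is exactly the point.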

Nature is a realm of computation and evolution on an unimaginable scale (quite literally). Local agency leads to emergent complexity. Mathematics is one of the tools that can allow a vastly simplified model of nature to be stored in a three-pound hominid brain. However, daisies and snails know nothing about mathematics. Anthropomorphization of nature is not science. Consensus and comfort are not science. Clinging to teddy bears is really cute in youngsters, not so much in adults. The good news is that nature is far, far, far more wonderful than anything any individual or advocacy group can come up with (including well-crafted teddy bears). You just have to let objectivity take the wheel.