Forth ‘n Chips

Toronto Skyline Wide 2014

In the frenetic, almost frantic world of modern software development, it’s easy to run right past good, solid practices. Newer is better, almost by definition. Often the worst place to be, at least career-wise, is defender of the past. A.I. is a good example. Symbolic systems are seen as stone-age, and the headlong rush into generative systems and Large Language Models is a siren song. However, a moment’s pause may be in order.

Modern methods may indeed be the correct course, but we didn’t get here by chance or luck. Something came before. It takes much more than labeling things to fully grok them. Consideration from first principles can re-calibrate one’s perspective. This was a touchstone of scientists like Richard Feynman and Michael Faraday. Feynman advised that being able to create something, or at least explain it in simple terms, showed real understanding well beyond just knowing names and definitions. Faraday once said to students:

Do not refer to your toy-books, and say that you have seen that before.
Answer me rather, if I ask you, have you understood it before?

The best way to learn how to bake a cake is to actually bake a cake. Perhaps even several of them. Unfortunately, a lot of modern programming languages and frameworks lean heavily on upfront doctrine and formality. We’re asked to ‘trust the experts’ and spend months or years learning from ‘toy-books’ (sometimes written and promoted by those very same experts). That’s a very big risk, with a very long-deferred payoff. Why not first spend a small fraction of that time investing in some first principles thinking?

This brings me to a brief discussion of the Forth language. It’s been many years since I wrote any commercial code in Forth. But it’s been only a few minutes since I thought about a problem from first principles using Forth. What makes it unique, beyond all its quirkiness, Reverse Polish Notation (RPN), stack architecture, concatenative programming, extreme simplicity, etc., etc., is its syntonic nature. Forth enables one to think fully computationally while remaining fully human. That’s one heck of a neat trick. An hour of interactive thinking/exploration/coding in Forth can often produce the kernel of a solution, even to complex problems. Or even serendipitous treasures. No cutting & pasting, mysterious black boxes, or hyper-abstraction is needed. No team is needed. In fact, Isaac Asimov once argued, in a 1959 essay, that isolation is crucial to deep thinking. The team will still be there in an hour, ready to work and interested in anything you can contribute.
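The stack-and-words model behind that style of thinking is easy to sketch. Here is a minimal illustration, in Python rather than real Forth, of how an RPN interpreter evaluates tokens against a stack; the word set and names are invented for the example:

```python
# Minimal sketch of RPN (postfix) evaluation over a stack, in the
# spirit of Forth: each word pops its operands and pushes its result.
def rpn_eval(source):
    stack = []
    words = {
        "+":    lambda: stack.append(stack.pop() + stack.pop()),
        "*":    lambda: stack.append(stack.pop() * stack.pop()),
        "dup":  lambda: stack.append(stack[-1]),                    # like Forth's DUP
        "swap": lambda: stack.extend([stack.pop(), stack.pop()]),   # like Forth's SWAP
    }
    for token in source.split():
        if token in words:
            words[token]()          # execute a known word
        else:
            stack.append(int(token))  # anything else is a number
    return stack

# "3 4 + dup *" computes (3 + 4) squared, entirely on the stack.
print(rpn_eval("3 4 + dup *"))  # [49]
```

The whole interpreter is a dictionary lookup and a loop, which is much of why Forth invites this kind of immediate, interactive exploration.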


Picture Permission of Raspberry Pi Foundation

In more extended sessions, I call this mode of thinking “Forth ‘n Chips”. Working with integrated circuits, gates, and even transistors can be quite liberating. Getting away from your routine can open your mind. Some writers enjoy using pen and paper occasionally. It doesn’t mean they’re laptop-hating Luddites. Some hockey players need to leave the video room and go for a fun skate to clear their head and regain their muscle memory. It doesn’t mean they’re unthinking brutes. It means they know how to re-calibrate.

There’s a natural, almost biological feel to Forth. Concatenation is a more reptilian way of thinking than deductive reasoning. See here. Perhaps counterintuitively, subjective thinking can lead to stronger objective thinking. It’s a way to look a bit more carefully at what you’re rushing past.

Walking through a computer history museum, or browsing old magazines and manuals, can give one perspective on the path that led to this frenetic future. Remember that the Homo sapiens who evolved on the grasslands to find food, avoid predators, and raise offspring had basically the same brain as we do today. Fashion comes and goes, but first principles remain.

GOFAI for Game Dev

Artificial Intelligence & AI & Machine Learning - 30212411048

Neural Networks, Machine Learning (ML), Deep Learning, … – that’s modern AI. These methods have produced spectacular advances in vision, natural language, pattern recognition, and many other areas. Entire academic, corporate, and national efforts have sprung up to join the race to avoid any ‘ML Gap’. In fact, the terms ML and AI have now commonly become synonymous.

A related hardware trend has taken the computing industry by storm: Massively Parallel Processing (MPP). ML, specifically the training of neural networks, greatly benefits from such hardware acceleration. This wave began with the ubiquitous Graphics Processing Unit (GPU), then expanded into devices such as the Associative Processing Unit (APU), Tensor Processing Unit (TPU), Vision Processing Unit (VPU), and even the Field Programmable Gate Array (FPGA). Even tiny chips that perform dedicated monitoring and automation tasks (edge computing) are now integrating such MPP technology.

Of course, game designers have been scrambling to enhance and augment their offerings by incorporating such methods, mainly in the realm of AI players, to better reflect human strategic thinking. Machine players have now achieved supremacy over humans in chess, Go, general knowledge, and many other specific games. Playing some modern games against one or more AI players is almost a ‘social’ experience.

It wasn’t always thus.

Back in the single-threaded days, those halcyon days of early PC gaming, AI had a much wider horizon. Symbolic themes such as logic, optimization, semantics, rules, expert systems, goals, and graphical knowledge representation dominated. Steps and decisions were the basis of design, and often some sort of ‘theory of mind’ existed, even if very simplistic. Logic and knowledge representation tools sprouted, such as Lisp, Prolog, and several Production Systems. Navigating decision trees, usually with some form of context awareness and backtracking, was a process reassessed at each step. This was great for automating simple, explicit thought processes and playing simple games (tic-tac-toe, checkers, adventure, cards, etc.). However, it fell short when confronted with the much harder areas of vision and common sense. Good Old Fashioned AI (GOFAI) went the way of the buggy whip. The lull that persisted from the late 1980s through the early 1990s is an example of an ‘AI Winter’.
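That step-by-step, backtracking style of GOFAI search is simple enough to sketch. Below is a toy Python illustration of depth-first search with explicit backtracking, the skeleton beneath many early game AIs; the tiny graph of game states is invented for the example:

```python
# Sketch of GOFAI-style depth-first search with backtracking.
# The 'world' is a toy graph of game states, invented for illustration.
MOVES = {
    "start": ["left", "right"],
    "left":  ["dead-end"],
    "right": ["trap", "goal"],
}

def solve(state, path=None):
    path = path or [state]
    if state == "goal":
        return path                      # success: return the sequence of states
    for nxt in MOVES.get(state, []):
        if nxt not in path:              # avoid revisiting (no cycles)
            result = solve(nxt, path + [nxt])
            if result:                   # a deeper call found the goal
                return result
    return None                          # backtrack: no move from here works

print(solve("start"))  # ['start', 'right', 'goal']
```

The search tries "left", hits a dead end, backtracks, and succeeds via "right", reassessing its options at every step just as described above.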


But perhaps that total abandonment was a mistake. It’s easy to rush right past available treasures when blinded by shiny new objects.

Modern games are certainly beautiful to look at, with amazing realism and depth, but are they more meaningful? more playable? more fun?? Gameplay is about much more than realistic perception and rendering. This is especially true for Game Based Learning (GBL). In GBL, the firing of neurons in the player’s brain is the goal, much more than the passive presentation of eye candy.

The Human Computer Interface (HCI) has actually progressed little over the decades. Instead of the game living mostly in the player’s mind, as it did in the primitive arcade days, it is now presented almost as a movie, with entire scenes played out for them. Much of the overall experience can be obtained just by watching game demos. The most compelling feature of modern games is their community… of human players!

What if gameplay itself was the focus? Exploiting and augmenting the innate power of human intelligence. HCI enhancements could once again move the game into the player’s mind, where it belongs. Instead of pre-fab, frustrating, stultifying scenarios, the player engages with a snappy, assistive, powerful, and ‘syntonic’ game. Simple rules could be explicitly available or even player-created. The great fun of Minecraft is largely due to the process of learning and mastering such rules, which are presented in a progressive, bootstrapping way. Instead of hiding the underlying mechanisms in order to present slick, canned movies, the player could be aware of how and why things actually work. The best combat games are those that enable the player to construct their own custom devices. Sadly, this is always achieved with a rigid menu system instead of a pencil-and-paper approach. Again, the player’s own creativity is stifled or ignored. Fairly complex worlds, even player-modifiable ones, can be created in Prolog. Explicit rules and facts are both computer and human readable. Transparency and openness are the keys to truly immersive games – pulling back the Oz curtain is a good thing.

Automation is another way to greatly enhance the playing experience. Sometimes the player is forced to repeat a series of steps even though they only wish to make one or two minor changes from their last go through. Wasting time is not fun. Giving the player access to such a series to make minor tweaks would be a great time saver, which would allow the player to focus on strategy, not mundane tactics or plain drudgery. And a tiny sprinkling of assistive AI to avoid stupid mistakes could enhance gameplay. Players don’t mind AIs ‘cheating’ so much as they mind pointless glitches and ‘gotchas’. Most decent chess AIs have had this feature for decades. It’s surprising that many modern games don’t even have a simple ‘undo stack’, instead relying on frequent game saves. The best games also have an ‘advisor’ system that helps the player with decision-making in the current context, with the option to squelch advice and tips as the player gains experience. The focus should be not only on rich content, but also rich context.

There is another area where automation is valuable: testing. By providing for massive machine-play error checking and discovery of gameplay glitches and bottlenecks, Automated Testing can greatly speed and enhance development. This can include an array of techniques, from short scripts to sets of rules and goals to full-blown simulations of human players.

But here’s the biggest nugget for GOFAI in games: Expert Systems. Capturing well-structured semantic knowledge is a perfect fit for strategy games. Many simple ‘How-To’ experts that encapsulate human knowledge can be easily created, rapidly consulted, and efficiently stored without any massive frameworks or ML tools. The skill of AIs can be enhanced merely by ‘learning’ from human players. Note that this is far, far simpler than actual Machine Learning. Most rule knowledge bases take up kilobytes of RAM, not gigabytes of code and storage. A packaging system would allow players to share (or even purchase) expert systems from others. Both gameplay and AI skill could be upgraded and personalized. This brings in a robust educational component, again, like Minecraft does. Human readability is key. We all have a basic comprehension of facts, rules, and inference. There’s no daunting learning curve required. Why not incorporate this innate comprehension into games?
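A kilobyte-scale rule engine of this kind really is tiny. Here is a minimal sketch of forward-chaining inference in Python; the facts and rules are invented game examples, standing in for the kind of knowledge a player might author:

```python
# Sketch of a tiny forward-chaining rule engine of the sort an
# in-game 'How-To' expert could use. Facts and rules are invented
# examples; in practice they would be authored or shared by players.
RULES = [
    ({"has_wood", "has_axe"}, "can_build_fence"),
    ({"can_build_fence", "has_livestock"}, "can_start_farm"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:                       # keep firing rules until nothing new
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)    # rule fires: add its conclusion
                changed = True
    return facts

print(sorted(infer({"has_wood", "has_axe", "has_livestock"})))
```

Both the facts and the rules are plainly human-readable, which is exactly the transparency argued for above.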

In summary, gameplay has become too enamored of shiny baubles and photo-realism. The current wave of Machine Learning is leading to ever more complex and power-hungry requirements, a burden on both developers and players. GOFAI could mitigate things rapidly and easily, and shift the game experience from the eyes to the mind’s eye. Game dev expands beyond the studio, into the laboratory. Let the fun begin.

The Slide Rule

Skala slide rule

A slide rule is a simple, mechanical, analog computer. It has a single moving part – the central slider (and usually a movable cursor to read results). It has etched or printed scales on one or both sides for logarithms, exponents, roots, trigonometry, and sometimes more. A slide rule uses these scales to readily perform calculations. The basic principle is using logarithms to replace multiplication with addition. They advance calculation from simply counting to exploiting the relationships between numbers.
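The logarithmic trick is easy to verify numerically: adding lengths on a log scale multiplies the underlying numbers. A quick Python sketch of what the sliding physically accomplishes:

```python
import math

# A slide rule adds physical lengths proportional to log10 of each
# factor; the combined length lands on the product.
def slide_rule_multiply(a, b):
    length = math.log10(a) + math.log10(b)  # slide one scale along the other
    return 10 ** length                     # read the result under the cursor

print(round(slide_rule_multiply(2, 8), 6))  # 16.0
```

A real slide rule does this mechanically, of course, to the two or three significant digits the scales allow.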

Gilson Atlas Circular Slide Rule

Most slide rules are linear ‘sticks’ that resemble everyday rulers. There are also circular and even cylindrical slide rules that extend the linear scales, fitting equivalent calculating power into a more compact device. There are also scales on the movable bar in the middle (slider). Scales are slid into position to set up a particular calculation, and the result is obtained using a precise cursor. The resolution of the scales (and the user’s eyesight) determine the precision of calculations. It is reasonable to get several significant digits at least.

Vintage Texas Instruments Slide Rule Electronic Pocket Calculator, Model SR-50A, Red LED Display, Rechargeable Battery Pack, Made In USA, Circa 1975 (41229229075)

Today, electronic calculators have almost completely replaced slide rules. In fact, such early pocket calculators were often called ‘electronic slide rules’.

Even modern AI had to start with small beginnings: The picoXpert Story

The Abacus

Huge Abacus at Guohua Abacus Museum

Counting is one of the most powerful human capabilities. In ancient times, common objects such as pebbles were used as abstract symbols to represent possessions to be tallied and traded. Simple arithmetic soon followed, greatly augmenting the ability of merchants and planners to manipulate large inventories and operations. Manipulating pebbles on lined or grooved ‘counting tables’ was improved upon by a more robust, easier-to-use device – the abacus. The name ‘abacus’ comes from Latin, which in turn used the Greek word for ‘table’ or ‘tablet’. The abacus also had the advantage of being portable and usable cradled in one arm, a harbinger of the pocket calculator.

Variants included the Roman, Chinese, Japanese, Russian, etc. Most commonly, they worked in base 10, with upper beads representing fives and lower beads representing ones. Vertical rods represented powers of 10, increasing right to left. Manual operation proceeded from left to right, with knowing the complement of a number being the only tricky part (e.g. the complement of 7 is 10−7=3). The four basic operations (+ − × ÷) were fairly easy-to-learn, mechanical procedures. One did not have to be formally educated to learn to use an abacus, as opposed to pencil-and-paper systems. This was computation for the masses.
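The complement trick can be sketched in a few lines. This toy Python model handles a single column (the function names are mine, for illustration): to add a digit that would overflow a rod, you carry one bead to the next rod and remove the complement.

```python
# Sketch of the abacus complement trick: to add 7 to a column,
# carry 1 to the next rod (i.e. add 10) and remove the complement 3.
def complement(d, base=10):
    return base - d

def add_digit(column_value, d):
    if column_value + d < 10:
        return column_value + d, 0              # plain bead move, no carry
    # overflow: carry one bead to the next rod, remove the complement
    return column_value - complement(d), 1

print(add_digit(8, 7))  # (5, 1): 8 + 7 = 15, so write 5 and carry 1
```

Memorizing the ten complements is the entire 'theory' an operator needs, which is why the device was learnable without formal schooling.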

The abacus consists of beads and rods, grouped together into several ‘stacks’. The stack is the central object in concatenative programming languages, such as Forth. These are very well-suited for teaching and learning computational thinking.

The abacus was one of the most successful inventions in history. In fact, it’s still in use today, mostly in small Asian shops. It is a universal and aesthetic symbol of our ancient love of counting.

Syntonic Scaling

“There are three things that we need to focus on as a growing organization: scalability, scalability, and scalability”. If you’re like me, you’ve heard that line at least once. The pace of modern social and technological change is staggering. Most of us struggle just to maintain balance and not be swept away by the tsunami. This is clearly visible in the nurture and guidance of a commercial startup, a new initiative in government, or really any group effort at expansion and/or adaptation. Perhaps the biggest challenge is to grow smoothly without exploding or imploding — put simply, to scale.

The trick is that we’re not honey bees. Brute force scaling is a very lossy process. Forced increase in scale can directly precipitate a reduction in scope. Perspective is lost, diversity is lost, opportunity is lost, horizons shrink, silos are erected. Not good. Our technology is vastly more complex and complicated. A small error that might safely be ’rounded off’ in a bee colony might bring the entire system crashing down in a human-scale system. Fault tolerance and error correction must be ubiquitous and automatic.

everything fails all the time
– Werner Vogels

There are two main modes of strategic thinking: Deductive and Inductive. Deductive thought employs logic and rationality in order to understand or even predict trends and events. Inductive thought observes the emergence of the complex from the simple to do the same. Both can draw on a mix of mathematical/statistical/historical/computational analysis, although the specific mix is often quite different for the two modes. Both have the aim of making better, more informed decisions. Somewhere in between the two, like the overlapping area of a Venn diagram, is the concept of syntonicity. This is the ability to at least temporarily move one’s mindset to another place/time/perspective. It requires imagination, which is definitely, though perhaps not exclusively, a human capability.

My own area is mostly computational analysis. The vast seas of data available today long ago swamped human capabilities and now require the mechanical tools of automation. The timeline is roughly: counting to writing to gears and rotors (e.g. Antikythera mechanism, Pascal calculator) to Jacquard machine (punch cards) to digital computers (vacuum tubes to transistors to Large Scale Integration to distributed computation) to quantum computers and beyond. Along the way, formal concepts were developed such as algorithms, objects, feedback (e.g. cybernetics), and artificial intelligence. Many tools and languages have been developed over recent decades (then mostly out-grown), with ages and fashion passing by like scenes from an old Time Machine movie. Those of us who have enough years, enough curiosity, and enough patience, have remained engaged in the movie over the long haul. Simultaneously futurists and dinosaurs, I guess. The red plastic case of my childhood trusty pocket radio proudly boasted of its “3 Transistor” innards. Like most others, I now carry a smart phone that has a billion times that many. That’s modern life — we must scale by orders of magnitude between cradle and grave.

How can the human mind grapple with this much scaling? We evolved to find food and avoid predators in grasslands, not to hop among sub-atomic particles, swim through protoplasm, or wander intergalactic space-time. How can we explore and comprehend reality all the way from quantum mechanics to femtosecond biomolecular reactions to bacteriophages to cellular biology to physiology to populations to geology to astronomy to cosmology?
Whither scope?

Things on a very small scale behave like nothing that you have any direct experience about. They do not behave like waves, they do not behave like particles, they do not behave like clouds, or billiard balls, or weights on springs, or like anything that you have ever seen.
– Richard P. Feynman

Most programming languages are quite horrible at scaling. Scaling down is nigh-on impossible because languages have evolved to be ever bigger. Even those that claim to be lean and elegant usually require vast libraries and modules to do anything useful. Scaling up is normally accomplished by bolting on new data types, functions, and capabilities. Examples include objects, functional programming, and exotic techniques such as machine learning, thus making the language ever bigger and often more opaque. Their standardization and doctrine come at a heavy price. Much architecture and machinery is present and required ‘right out of the box’. Methodology is either implicit or actually enforced through templates and frameworks. This provides tremendous leverage when working at the human scale of activity. It squelches innovation though, when one moves too far down or up in scale.

One of the computer languages I learned and used early-on was Forth. Although I have discarded it several times over the last nearly five (!) decades, I keep coming back to it. It is a very natural almost biological language. I have also found it to be a very syntonic one. A crude definition of syntonicity is ‘fitting in’ or ‘when in Rome, do as the Romans do’. This is the key to scaling the applicability of human thought.

At its heart, Forth is incredibly tiny. It’s essentially a method for calling subroutines simply by naming them. It has a simple model for storage management and sharing: the stack. A stack is one of the oldest computational structures, perhaps going back to ancient times (the abacus, for example). However, it is brilliantly elegant. It combines elemental simplicity with tremendous functionality, a key to high scalability. The entire interpreter and compiler can be implemented in several hundred bytes. Perhaps most importantly, it can be learned, remembered, and used without a bookshelf full of manuals and references. Scaling up is unlimited and quite self-consistent; one basically bootstraps the Forth system like a growing embryo, not like a Rube Goldberg machine. Using this process, Forth can actually become fairly large and capable; see gForth. Note that scaling Forth and the underlying scale of the environment are orthogonal. The real power and utility of Forth comes from its simplicity. For example, with today’s many-core CPUs, it is possible to implement many separate, even heterogeneous, Forth engines in one computer, fully independent yet still communicating. Try that with a behemoth language or even a hefty virtual machine.

Thus, personally armed with the smallest possible computational toolkit, the freedom to think is restored. Researcher-programmer meetings can be cancelled. The horse can be put back in front of the cart. One can focus on grokking (grasping syntonically) the environment, physics, and inhabitants of the new scale (and thus horizons broaden again).

Of course, I’m not advocating Forth to be used for things like massive data manipulation, replacing tools like SQL, NoSQL, and beyond. Concurrency, seamless replication, automated inferencing, and vast interoperability are somewhat beyond Forth’s capability (though not entirely, surprisingly). Such tasks usually apply to teamwork. Elementary Forth is not a team language. It’s more suited to individual thought and exploration. Isaac Asimov once mused about the benefits of isolation, at least in early stages. Again, we’re not honey bees.

Learning Forth is best done by using it – it’s tiny and simple to start with. If you’re more the reading type, one of the first, and best, books on Forth is Starting FORTH (1981) by Leo Brodie.


Building a solid foundation in the early years of a child’s life will not only help him or her reach their full potential but will also result in better societies as a whole.
– Novak Djokovic

I’m not a starry-eyed Isaac Asimov fanboy. He had his warts. But his life mattered in the big scheme of things. I like Asimovians. To me, an Asimovian is a skeptical optimist with deep scientific, historical, and sociological erudition. Some are novices, some are students, some are teachers, and some are leaders. Others are Forrest Gump types who just stumbled into a few of the right rooms, or who read “Foundation” because it was only $1 in their rural school’s bookmobile 🙂

I clearly remember the day. It was blue-sky late spring in rural Ontario, just before the end of the school year. I stood near the front of the bookmobile, on its teetering floor, with “Foundation” in one hand, and “Foundation and Empire”, “Second Foundation”, and $2 (birthday money) in the other. Each had a sticker price of $1. “Foundation” was the thinnest, which seemed unfair, as I had no choice on #1. The arithmetic was heart-breaking. As I struggled to choose between #2 and #3, their covers alternately calling to me, the book lady said, “Those are buy two get one free.” I quickly plunked down my $2 (tax was either included or exempt, I don’t remember which) and ran from the bookmobile with the trilogy like a thief in the night.

I first read those books sitting in a tree on our farm. Tales of a course for mankind stretching into the almost unimaginably distant future. A future where science, rationalism, and humanism hold dominion. The galaxy in decline, yet enlightenment rekindled. A tiny spark of hope that grows into a vast, new, near utopia.

I read them several times over my youth and young adulthood. Hidden behind text books, in waiting rooms, while camping, wherever. Like Psychohistory itself, it wasn’t just a story, it was a guide, a ‘Plan’. Several attempts have been made at bringing the Foundation Trilogy to the screen, both large and small. They failed simply because it’s too big a story to be captured on film. It only truly lives in the imagination of the reader. Perhaps someone with enough time, money, and vision will succeed some day. I hope they don’t damage it.

Asimov died in 1992, the year I lived in Vancouver. He was on my mind as I had my first inkling of Geopense. Around 2000, still inspired by Asimov, I created an AI company with one main product. I avoided non-Asimov sequels to the story, and was slightly disappointed even with those penned by Asimov himself. They seemed a bit rushed and contrived. The only later book I liked and would recommend was the final one in the series, Foundation’s Triumph by David Brin. It had Hari Seldon as the central character amidst an epic search for that elusive utopia.

Over the years, I’ve often wondered if the book lady had lied about the price. I like to think she had. Asimovians are a resourceful lot. That’s one of the many reasons why they’ll win in the end, and a brighter future for humanity will dawn.

Adam Smith Loves Gridcoin

I’m a fan of Paul Krugman. This is not due to his economic or political views (although I always enjoyed it whenever he and George Will would square off on Sunday morning TV), or even to his 2008 Nobel Prize in economics. Rather, it is because Krugman owes, like I do, much of his life outlook to one book: Isaac Asimov’s Foundation. I’ve written about this influence, The Asimovian Epoch is a good starting point.

Five years ago, Krugman wrote an opinion piece in the New York Times with the title, Adam Smith Hates Bitcoin. Krugman argued that Smith’s “dead stock” (like gold and silver) was raising its ugly head again in modern times in the form of “virtual currency”.

Bitcoin, like other virtual currencies, is built on blockchain. This architecture enables a secure, distributed ledger to be stored as a linked list (‘chain’) of groups of records (‘blocks’) and used simultaneously on many computers, which is where most of the security comes from. It’s much more difficult to ‘hack’ thousands of computers in order to falsify a record than it is to do so on a single, centralized computer.
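The ‘chain’ part of this is simple to demonstrate. Here is a toy Python sketch (the record format is invented; real blockchains add Merkle trees, signatures, and consensus) showing how each block commits to its predecessor’s hash, so altering an old record breaks every link after it:

```python
import hashlib
import json

# Toy sketch of hash-chaining: each block stores the hash of the
# previous block, so tampering anywhere invalidates the links after it.
def make_block(records, prev_hash):
    body = json.dumps({"records": records, "prev": prev_hash}, sort_keys=True)
    return {"records": records, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

genesis = make_block(["Alice pays Bob 5"], prev_hash="0" * 64)
block2  = make_block(["Bob pays Carol 2"], prev_hash=genesis["hash"])

# Rewriting the first block yields a different hash, so block2's
# stored 'prev' link no longer matches.
tampered = make_block(["Alice pays Bob 500"], prev_hash="0" * 64)
print(block2["prev"] == genesis["hash"])   # True
print(block2["prev"] == tampered["hash"])  # False
```

Replicating that chain across thousands of machines is what makes falsifying a record so much harder than hacking one central database.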

Modifications to the blockchain are implemented as transactions. In the case of currencies, these transactions perform tasks such as sending and receiving ‘coins’. The low level implementation of these transactions is usually done with a scripting language, and this code often looks an awful lot like Forth, one of the oldest and coolest languages in the history of computing. I’ve written a lot about Forth, Forth: A Syntonic Language is a good starting point.

The big problem with most virtual currencies is the tremendous amount of computational resources they consume, both to maintain their blockchains and to mint (‘mine’) new currency. This is a modern form of “dead stock”. Nothing truly productive comes from all this computation, only more currency. It would be better if money were left to serve its symbolic function, and the production of people, computers, and energy went into moving the world forward.

Enter Gridcoin.

Gridcoin is an open source cryptocurrency that aims at harnessing this production to assist in scientific research. It was officially launched a few months after Krugman’s opinion piece was printed in the NY Times. Instead of Bitcoin’s “proof of work” mechanism (the basis of security and ‘mining’), Gridcoin implements two newer concepts, “proof of stake” and “proof of research”.

Proof of stake (POS) is an extremely efficient mechanism for securing the network and generating new ‘coins’ based on a simple interest rate dividend. Proof of research (POR) turns all that computing power toward scientific calculations via projects hosted on the Berkeley Open Infrastructure for Network Computing (BOINC) framework. These projects range from biology to cosmology, and many interesting fields in between. This effort is closely related to computational Citizen Science, yet another subject near and dear to my heart. Using POS and POR, participants can contribute to scientific research while earning some ‘coins’ to at least offset some of their expenses for equipment and electricity. They can also learn about blockchain and virtual currencies without a scary dive into the deep end of cryptocurrencies.
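The ‘interest rate dividend’ idea amounts to very simple arithmetic. A hedged sketch in Python; the 1.5% annual rate and the function name are hypothetical parameters for illustration, not Gridcoin’s actual protocol constants:

```python
# Sketch of a proof-of-stake style dividend: reward is proportional
# to balance and holding time. The annual rate here is a hypothetical
# example value, not a real protocol constant.
def stake_reward(balance, days_held, annual_rate=0.015):
    return balance * annual_rate * days_held / 365

print(round(stake_reward(1000, 365), 2))  # 15.0
```

Compared with proof-of-work hashing races, this costs essentially nothing to compute, which is the efficiency point being made above.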

Paul Krugman, Isaac Asimov, and yes, Adam Smith, would approve.

David Brin, author of Foundation’s Triumph (1999), had some interesting thoughts on Adam Smith in the context of psychohistory here.

There are other similar projects.

One final thought. Adam Smith was a key personage of the Scottish Enlightenment. Another giant of that ‘clan’ was the American founding father Benjamin Franklin (who was a friend of Smith’s). Franklin was almost as great a polymath as Asimov. When I founded the Geopense computational citizen science team years ago, and ever since, this quote has been on the main site page:

Tell me and I forget.
Teach me and I remember.
Involve me and I learn.
– Benjamin Franklin

The picoXpert Story

picoXpert was one of the first (if not THE first) handheld artificial intelligence (AI) tools ever. It provided for the capture of human expert knowledge and later access to that knowledge by general users. It was a simplistic, yet portable implementation of an Expert System Shell. Here is the brief story of how it came to be.

When I was about 10, my grandfather (an accomplished machinist in his day) gave me his slide rule. It was a professional grade, handheld device that quickly performed basic calculations using several etched numeric scales with a central slider. I was immediately captivated by its near-magical power.

In high school, I received an early 4-function pocket calculator as a gift. Such devices were often called ‘electronic slide rules’. It was heavy, slow, and sucked battery power voraciously. I spent many long hours mesmerized by its operation. I scraped my pennies together to try to keep up with ever newer and more capable calculators, finally obtaining an early programmable model in 1976. Handheld machines that ‘think’ were now my obsession.

 I read and watched many science fiction stories, and the ones that most fired my imagination were those that involved some sort of portable computation device.

By 1980, I was building and programming personal computers. These were assembled on an open board, using either soldering or wire wrap to surround an 8-bit microprocessor with support components. I always sought those chips with orthogonality in memory and register architecture. They offered the most promise for the unfettered fields on which contemporary AI languages roamed. I liked the COSMAC 1802 for this reason. It had 5,000 transistors; modern processors have several billion. The biggest, baddest, orthogonal processor was the 16/32-bit Motorola 68000, but it was too new and expensive, so I used its little brother, the 6809, an 8-bit chip that, to the programmer, looked much like a 68000.

I spent much of the 1980s canoeing in Muskoka and Northern Ontario, with a Tandy Model 100 notebook, a primitive solar charger, and paperback editions of Asimov’s “Foundation” trilogy onboard (I read them five times). Foundations.

By the mid 1990s, Jeff Hawkins had created the Palm™ handheld computer. The processor he chose was a tiny, cheap version of the 68000 called the ‘DragonBall’. I don’t know which I found more compelling – this little wonder or the fact that it was designed by a neuroscientist. I finally had in my hand a device with the speed, memory, and portability to fulfill my AI dreams.

The 1990s saw the death of Isaac Asimov (one of my greatest heroes), but also saw me finally gaining enough software skills to implement a few Palm designs. These were mainly created in Forth and Prolog. The Mars Pathfinder lander in 1997 was based on the same 80C85 microprocessor found in the Tandy Model 100 that I had used over a decade earlier. This fact warmed my heart.

In 2001, I formed Picodoc Corporation and released picoXpert.

Here are the original brochure, an Expert Systems Primer, and a few slides.

It met with initial enthusiasm from a few, including this review:

Handheld Computing Mobility
Jan/Feb 2003 p. 51
picoXpert Problem-solving in the palm of your hand
by David Haskin

However, the time for handheld AI had not yet come. After a couple of years of trying to penetrate the market, I moved on to other endeavours. These included more advanced AI such as Neural Networks and Agent-Based Models. In 2011, I wrote Future Psychohistory to explore Asimov’s greatest idea in the context of modern computation.

Picodoc Corporation still exists, although it has been dormant for many years. It’s encouraging to see the current explosion of interest in AI, especially the burgeoning Canadian AI scene. For those like me, who have been working away in near anonymity for decades, it’s a time of great excitement and hope. Today, I’m mainly into computational citizen science, and advanced technologies, such as blockchain, that might be applied to it.

Minecraft and AI: Project Malmo

In a previous post, I complimented Microsoft on their purchase of Minecraft. I ruminated on the potential for STEM and experiential learning it opens up, particularly with the addition of HoloLens and augmented reality. Recently, Microsoft announced the public availability of their Project Malmo platform that uses Minecraft to provide an interactive environment for development and testing of intelligent agents. This further illuminates Microsoft’s long-term plans for Minecraft.

In contrast to highly specific, applied AI, Project Malmo harnesses the almost unlimited exploration and experimentation possibilities of Minecraft as a research tool for general artificial intelligence. Agents can learn, comprehend, communicate, and carry out tasks in the Minecraft world. A mod for Minecraft provides an API for programming, using XML and JSON to represent the world in memory. Agents can explore their surroundings, see and touch Minecraft blocks, take actions, and receive feedback from the environment. This enables reinforcement learning: instead of just applying deductive, symbolic reasoning, agents can benefit from inductive (experiential) learning.
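The learn-from-feedback loop that makes this possible can be sketched in miniature. The snippet below is my own toy illustration, not the Malmo API: a tabular Q-learning agent in a five-cell corridor, where `reset`, `step`, the actions, and the reward values are all invented for the example.

```python
import random

random.seed(0)

GOAL = 4            # index of the goal cell in a tiny 1-D "world"
ACTIONS = [-1, 1]   # move left / move right

def reset():
    return 0  # the agent starts at the far left

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = max(0, min(GOAL, state + action))
    if nxt == GOAL:
        return nxt, 1.0, True    # reward for reaching the goal block
    return nxt, -0.01, False     # small cost for every move

q = {}  # Q-table: (state, action) -> estimated long-term value

def learn(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    for _ in range(episodes):
        s, done = reset(), False
        while not done:
            # epsilon-greedy: mostly exploit, occasionally explore
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q.get((s, act), 0.0))
            s2, r, done = step(s, a)
            best_next = max(q.get((s2, b), 0.0) for b in ACTIONS)
            # standard Q-learning update toward the bootstrapped target
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (r + gamma * best_next - q.get((s, a), 0.0))
            s = s2

learn()
```

Malmo’s real environments are vastly richer (vision, 3-D movement, multiple agents), but the agent-acts, environment-responds loop has exactly this shape.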

The potential benefits are compelling.

Sandbox development allows strategies and algorithms developed in the Minecraft world to be later moved to a much larger simulation environment, possibly on distributed systems and/or supercomputers. Computational agents do not require sensory augmentation or biological interfaces, since they connect ‘directly’ to the simulated world. The ability to ‘overclock’ the simulation frees agents from the limits of our time scale (they can ‘live’ many days in an hour of real time).

For remote sensing applications, such as planetary probes & rovers, the benefits are huge. Autonomous machines must be developed and tested before they are deployed. The time delay incurred by vast distance precludes ground-based control. By the time people on Earth receive video from Mars and send the command, “don’t drive off that cliff”, it’s far too late. The ‘smarts’ to navigate and make decisions locally must exist in the robot. Hazardous locations, even here on Earth, also require considerable autonomous learning and decision-making.

Collaboration and comparison between people and teams are possible with a common testing ground. Minecraft can be played solo, but the real power is that a Minecraft world can be hosted on a server, enabling many people, wherever they are, to participate in that world simultaneously. Players, agents, landscapes, and artifacts – or any combination of them – can all exist and interact. This is much more like the natural world.

Project Malmo encourages public participation and involvement. It’s an open architecture that invites experimentation by all. This is somewhat similar to the citizen science movement, which not only provides the benefits of ‘crowdsourcing’ to scientific research, but also enhances public understanding of scientific methods. In the modern world, a pervasive, basic understanding of artificial intelligence would be of tremendous benefit.

The ability to go far beyond the development of a single agent also opens up the possibility of social simulations of almost limitless scale. Agent-based models have long shown their usefulness in the study of social interaction, from ant colonies to human nations and cultures. One example is the simulation of ancient civilizations; this could lead to an entirely new approach to deciphering Linear A. I considered a far vaster model in Future Psychohistory. In 2011, when I wrote that article, I had a standard neural network model in mind, but if I were to write it today, I’d definitely go with intelligent agents.
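To give a flavour of how simple such agents can be, here is a toy ‘voter model’ – my own minimal sketch, not anything from Project Malmo or the Psychohistory article. Agents sit on a ring; each step, one agent copies a random neighbour’s opinion, and the population drifts toward consensus. From rules this small, collective behaviour emerges.

```python
import random

def voter_model(n=50, steps=20000, seed=1):
    """Minimal agent-based model: each step, one agent adopts the
    opinion of a randomly chosen neighbour on a ring."""
    rng = random.Random(seed)
    opinions = [rng.choice([0, 1]) for _ in range(n)]
    for t in range(steps):
        i = rng.randrange(n)
        neighbour = (i + rng.choice([-1, 1])) % n   # left or right neighbour
        opinions[i] = opinions[neighbour]           # social influence
        if len(set(opinions)) == 1:                 # consensus reached
            return opinions, t
    return opinions, steps

final, when = voter_model()
```

Swap the copying rule for learning, memory, or stubbornness and the dynamics change completely – which is precisely what makes agent-based models such a rich laboratory.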

The concept of agency is central to many areas of life science. Relational databases are far too static for life science (see Cassandra for Life Science). The proper representation of life is not tabular but associative. Dynamic systems interact, adapt, and even learn; this is the basis of evolution. Machinery as tiny as enzymes and ribosomes, and possibly even medical nano-robots in the near future, requires much more dynamic models to study.

Project Malmo offers major benefits for teaching computer programming. Different paradigms such as procedural, functional, object-oriented, declarative, and concatenative are all suitable and helpful in the construction of agents. And of course, there’s one of my favourite hobby horses – constructionism. This only adds to the already expansive use of Minecraft in education.

And for game play itself, Project Malmo can be of great use. Dangerous and tedious tasks can be handed off to agents, amplifying a human player’s efforts. Tutoring and nurturing AI agents is a good use of human intelligence – and it’s a lot of fun.

AI research and Minecraft now have a powerful and resourceful champion – nice. I look forward to seeing the fruits of Project Malmo.

Learning does not mirror Teaching

An implicit assumption in most educational infrastructure is that teaching and learning are closely similar processes, perhaps even mirror images of each other. In the abstract at least, there is a transfer of knowledge from teacher to learner. It’s even possible that once a learner has assimilated enough taught knowledge, they could ‘switch polarity’ and become a teacher. No, this is not another well-crafted advocacy for sweeping reform of the educational system; I am neither qualified nor motivated to deliver such a thing. I just want to say a few words in the context of computational thinking. There are several ways to categorize programming languages: procedural, declarative, functional, concatenative, syntonic, object-oriented, data-oriented, etc. My point is merely this: teaching is declarative, learning is syntonic.

The ability to acquire and the ability to impart are wholly different talents. The former may exist in the most liberal manner without the latter.
– Horace Mann
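To make the declarative/syntonic distinction concrete, here’s a toy contrast of my own (hypothetical code, nothing from any curriculum): the area of a circle declared as a formula, versus the same quantity ‘experienced’ by throwing random darts at a square and watching what happens.

```python
import math
import random

# Declarative: the rule is simply stated, as a teacher would state it.
def area_declared(r):
    return math.pi * r * r

# Syntonic/experiential: the learner estimates the same quantity by
# trial - darts thrown at a square, counting those that land inside
# the circle (a Monte Carlo experiment).
def area_experienced(r, trials=100_000, seed=0):
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if rng.uniform(-r, r) ** 2 + rng.uniform(-r, r) ** 2 <= r * r
    )
    return hits / trials * (2 * r) ** 2  # fraction of the square covered
```

Both converge on the same number; only the second leaves the learner with a feel for why it is true.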

When we start out, we immediately inherit a vast and exponentially expanding body of human knowledge. This is our birthright; it does not belong to gatekeepers or authorities who mete out crumbs as they see fit. Academia has the task of adding to and curating this knowledge – it doesn’t own it.

The unifying term ‘education’ implies a deep connection – a yin-yang, almost mathematical sort of symmetry between teaching and learning. This symmetry is a perception eagerly supported by academia; it’s an intuitive and widely held view, a ‘central dogma’. There is little evidence for it, however. It should be remembered that mathematics is only shorthand for the complexity of nature. Nature is a realm of computation and evolution, and mathematics is one of the tools that enables a vastly simplified model of reality to be held in a three-pound hominid brain. It is often said that mathematical concepts such as π and the Fibonacci Sequence are seen everywhere in nature. That’s true, but seen by whom? Snails and daisies, or humans? The fact that we see a pattern does not necessarily mean that a ‘Deep Truth’ has been discovered. Anthropomorphizing nature is a mistake. Furthermore, it is difficult to see even a logical similarity between Plato in the olive groves of Athens and the result of many millions of years of evolution by variation and natural selection.

There is one aspect of teaching, though, that strongly influences learning: a teacher’s capacity to inspire.

If you want to build a ship, don’t drum up people to collect wood and don’t assign them tasks and work, but rather teach them to long for the endless immensity of the sea.
– Antoine de Saint-Exupéry

Human knowledge may be a birthright, but the storage and delivery systems for that knowledge are subject to the laws of socio-economics just like every other industry. Papyrus, the printing press, telegraphy, telephony, electronic media, and ultimately the Internet have marked technology’s path.

While unable to lay out a guaranteed path, a teacher can describe the landscape, list known boundary conditions, and illustrate and clarify goals and heuristics. Teaching is, therefore, a formal, objective, descriptive task. In programming parlance, it is ‘declarative’.

Learning is a very different matter from teaching. We have basically the same neurology as people did back when banging rocks together was high technology. We evolved to find food, avoid predators, and reproduce. Of course, when intelligence arrived on the scene, things became ‘non-linear’. When social behaviour and language arrived, Alice tumbled down the rabbit hole.

The smartphone is a testament to language, science, and technology, but not to increased individual intelligence. In 1965, a good pocket radio had a handful of transistors; today’s smartphone has over a billion. People haven’t gotten a hundred million times smarter in the last 50 years (at least I know that I haven’t). Buckminster Fuller’s “Knowledge Doubling Curve” shrinks from a doubling time of roughly 100 years around 1900, to 25 years around 1950, to about a year today – and perhaps to months, weeks, or days soon. Accurate predictions are difficult because human activity is now blending with machine learning, and it’s a whole new ball game. If the central dogma that teaching and learning are symmetrical ever was true, it is becoming less true with each passing year.

So how do human learners continue to even be relevant? Well, the good news is that the same evolved learning capacity we’ve always had is applicable to any level of abstraction. In fact, perhaps a serious exploration of exactly what ‘level of abstraction’ means would be a good thing for young minds. An associated idea is that ‘things’ are not of primary importance, but rather that the connections between things are. Metaphors are examples of such connections. If we can conceptualize atoms and galaxies in terms of table-top models, we have a shot at comprehension. Also, people can learn on their own using reasoning, common sense (bootstrapping), reverse engineering, and intelligent trial and error.

The key element to learning is experience. It makes little difference how logical or well laid out an argument is if the learner has no connection to it. That’s what is meant by ‘syntonic learning’:

Educators sometimes hold up an ideal of knowledge as having the kind of coherence defined by formal logic. But these ideals bear little resemblance to the way in which most people experience themselves. The subjective experience of knowledge is more similar to the chaos and controversy of competing agents than to the certitude and orderliness of p’s implying q’s. The discrepancy between our experience of ourselves and our idealizations of knowledge has an effect: It intimidates us, it lessens the sense of our own competence, and it leads us into counterproductive strategies for learning and thinking.
– Seymour Papert


A body of knowledge is much more compelling if it can be explored subjectively, at the learner’s own speed and depth, because memorability is a big part of learning:

When you make the finding yourself – even if you’re the last person on Earth to see the light – you’ll never forget it.
– Carl Sagan

Teaching does not and cannot encompass learning:

What we become depends on what we read after all of the professors have finished with us. The greatest university of all is a collection of books.
– Thomas Carlyle

Learning is not containable in bricks and mortar or bureaucracy. It is, very simply, what every human does whenever free to do so. ‘Education’ is really just another word for ‘learning’:

Self-education is, I firmly believe, the only kind of education there is.
– Isaac Asimov

It may be tempting to assert that Socratic dialectic is a suitable substitute for syntonicity. However, the former, while undeniably powerful and valuable, still involves knowledge transfer between human minds. This, by necessity, requires formalism, symbolism, and formulae. Syntonicity, on the other hand, requires nothing but a human mind exploring reality, with the aid of machine computation (algorithms) if necessary. Learning is, therefore, an informal, subjective, experiential task. In programming parlance, it is ‘syntonic’.