Syntonic Scaling

“There are three things that we need to focus on as a growing organization: scalability, scalability, and scalability.” If you’re like me, you’ve heard that line at least once. The pace of modern social and technological change is staggering. Most of us struggle just to maintain balance and not be swept away by the tsunami. This is clearly visible in the nurture and guidance of a commercial startup, a new initiative in government, or really any group effort at expansion or adaptation. Perhaps the biggest challenge is to grow smoothly without exploding or imploding — put simply, to scale.

The trick is that we’re not honey bees. Brute-force scaling is a very lossy process. A forced increase in scale can directly precipitate a reduction in scope. Perspective is lost, diversity is lost, opportunity is lost, horizons shrink, silos are erected. Not good. Our technology is vastly more complex and complicated than a hive’s. A small error that might safely be ‘rounded off’ in a bee colony could bring an entire human-scale system crashing down. Fault tolerance and error correction must be ubiquitous and automatic.

Everything fails, all the time.
– Werner Vogels

There are two main modes of strategic thinking: deductive and inductive. Deductive thought employs logic and rationality to understand or even predict trends and events. Inductive thought observes the emergence of the complex from the simple, to the same end. Both can draw on a mix of mathematical, statistical, historical, and computational analysis, although the specific mix is often quite different for the two modes. Both aim at making better, more informed decisions. Somewhere in between the two, like the overlapping region of a Venn diagram, lies the concept of syntonicity: the ability to move one’s mindset, at least temporarily, to another place, time, or perspective. It requires imagination, which is definitely, though perhaps not exclusively, a human capability.

My own area is mostly computational analysis. The vast seas of data available today long ago swamped human capabilities and now require the mechanical tools of automation. The timeline runs roughly: counting, to writing, to gears and rotors (e.g. the Antikythera mechanism, Pascal’s calculator), to the Jacquard machine (punch cards), to digital computers (vacuum tubes to transistors to Large Scale Integration to distributed computation), to quantum computers and beyond. Along the way, formal concepts were developed, such as algorithms, objects, feedback (e.g. cybernetics), and artificial intelligence. Many tools and languages have been developed over recent decades (then mostly outgrown), with ages and fashions passing by like scenes from an old Time Machine movie. Those of us with enough years, enough curiosity, and enough patience have remained engaged in the movie over the long haul. Simultaneously futurists and dinosaurs, I guess. The red plastic case of my childhood’s trusty pocket radio proudly boasted of its “3 Transistor” innards. Like most others, I now carry a smartphone that has a billion times that many. That’s modern life — we must scale by orders of magnitude between cradle and grave.

How can the human mind grapple with this much scaling? We evolved to find food and avoid predators in grasslands, not to hop among sub-atomic particles, swim through protoplasm, or wander intergalactic space-time. How can we explore and comprehend reality all the way from quantum mechanics to femtosecond biomolecular reactions to bacteriophages to cellular biology to physiology to populations to geology to astronomy to cosmology?
Whither scope?


Things on a very small scale behave like nothing that you have any direct experience about. They do not behave like waves, they do not behave like particles, they do not behave like clouds, or billiard balls, or weights on springs, or like anything that you have ever seen.
– Richard P. Feynman

Most programming languages are quite horrible at scaling. Scaling down is nigh-on impossible, because languages have evolved to be ever bigger. Even those that claim to be lean and elegant usually require vast libraries and modules to do anything useful. Scaling up is normally accomplished by bolting on new data types, functions, and capabilities: objects, functional programming, and exotic techniques such as machine learning. This makes the language ever bigger and often more opaque. Such standardization and doctrine come at a heavy price. Much architecture and machinery is present, and required, ‘right out of the box’. Methodology is either implicit or actually enforced through templates and frameworks. This provides tremendous leverage when working at the human scale of activity. It squelches innovation, though, when one moves too far down or up in scale.

One of the computer languages I learned and used early on was Forth. Although I have discarded it several times over the last nearly five (!) decades, I keep coming back to it. It is a very natural, almost biological, language. I have also found it to be a very syntonic one. A crude definition of syntonicity is ‘fitting in’, or ‘when in Rome, do as the Romans do’. This is the key to scaling the applicability of human thought.

At its heart, Forth is incredibly tiny. It is essentially a method for calling subroutines simply by naming them. It has a simple model for storage management and sharing: the stack. The stack is one of the oldest computational structures, perhaps going back to ancient times (the abacus, for example), yet it is brilliantly elegant. It combines elemental simplicity with tremendous functionality, a key to high scalability. The entire interpreter and compiler can be implemented in several hundred bytes. Perhaps most importantly, Forth can be learned, remembered, and used without a bookshelf full of manuals and references. Scaling up is unlimited and quite self-consistent; one basically bootstraps the Forth system like a growing embryo, not like a Rube Goldberg machine. Through this process, Forth can actually become fairly large and capable (see gForth). Note that scaling Forth and the underlying scale of the environment are orthogonal. The real power and utility of Forth come from its simplicity. For example, with today’s many-core CPUs, it is possible to implement many separate, even heterogeneous, Forth engines in one computer, fully independent yet still communicating. Try that with a behemoth language, or even a hefty virtual machine.
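To make the “calling subroutines by naming them” idea concrete, here is a minimal sketch of that mechanism in Python, not real Forth: a data stack plus a dictionary mapping word names to subroutines. The word names and example definitions are illustrative, not taken from any actual Forth system.

```python
# Minimal sketch of a Forth-style interpreter: a data stack plus a
# dictionary ("words") mapping names to subroutines. Illustrative only.

stack = []

words = {
    "+":    lambda: stack.append(stack.pop() + stack.pop()),
    "*":    lambda: stack.append(stack.pop() * stack.pop()),
    "dup":  lambda: stack.append(stack[-1]),
    "drop": lambda: stack.pop(),
}

def interpret(source):
    """Execute each whitespace-separated token: known words are called
    by name; anything else is treated as a number and pushed."""
    for token in source.split():
        if token in words:
            words[token]()             # call the subroutine by naming it
        else:
            stack.append(int(token))   # literals go on the stack

def define(name, body):
    """A toy 'colon definition': a new word is just a named sequence
    of existing words, re-interpreted when called."""
    words[name] = lambda: interpret(body)

define("square", "dup *")    # roughly: : square dup * ;
interpret("3 4 + square")    # (3 + 4) squared
print(stack)                 # [49]
```

The whole “language” is the loop in `interpret` plus the dictionary; everything else is bootstrapped on top of it with `define`, which mirrors how a real Forth grows by accreting new words out of old ones.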

Thus armed, personally, with the smallest possible computational toolkit, one’s freedom to think is restored. Researcher–programmer meetings can be cancelled. The horse can be put back in front of the cart. One can focus on grokking (grasping syntonically) the environment, physics, and inhabitants of the new scale, and thus horizons broaden again.

Of course, I’m not advocating Forth for things like massive data manipulation, replacing tools like SQL, NoSQL, and beyond. Concurrency, seamless replication, automated inferencing, and vast interoperability are somewhat beyond Forth’s capability (though not entirely, surprisingly). Such tasks are usually team efforts, and elementary Forth is not a team language; it is more suited to individual thought and exploration. Isaac Asimov once mused about the benefits of isolation, at least in the early stages. Again, we’re not honey bees.

Learning Forth is best done by using it – it’s tiny and simple to start with. If you’re more the reading type, one of the first, and best, books on Forth is “Starting FORTH” (1981) by Leo Brodie.




Building a solid foundation in the early years of a child’s life will not only help him or her reach their full potential but will also result in better societies as a whole.
– Novak Djokovic

I’m not a starry-eyed Isaac Asimov fanboy. He had his warts. But his life mattered in the big scheme of things. I like Asimovians. To me, an Asimovian is a skeptical optimist with deep scientific, historical, and sociological erudition. Some are novices, some are students, some are teachers, and some are leaders. Others are Forrest Gump types who just stumbled into a few of the right rooms, or who read “Foundation” because it was only $1 in their rural school’s bookmobile 🙂

I clearly remember the day. It was blue-sky late spring in rural Ontario, just before the end of the school year. I stood near the front of the bookmobile, on its teetering floor, with “Foundation” in one hand, and “Foundation and Empire”, “Second Foundation”, and $2 (birthday money) in the other. Each had a sticker price of $1. “Foundation” was the thinnest, which seemed unfair, as I had no choice on #1. The arithmetic was heartbreaking. As I struggled to choose between #2 and #3, their covers alternately calling to me, the book lady said, “Those are buy two, get one free.” I quickly plunked down my $2 (tax was either included or exempt, I don’t remember which) and ran from the bookmobile with the trilogy like a thief in the night.

I first read those books sitting in a tree on our farm. Tales of a course for mankind stretching into the almost unimaginably distant future. A future where science, rationalism, and humanism hold dominion. The galaxy in decline, yet enlightenment rekindled. A tiny spark of hope that grows into a vast, new, near utopia.

I read them several times over my youth and young adulthood. Hidden behind textbooks, in waiting rooms, while camping, wherever. Like Psychohistory itself, it wasn’t just a story; it was a guide, a ‘Plan’. Several attempts have been made at bringing the Foundation Trilogy to the screen, both large and small. They failed simply because it’s too big a story to be captured on film. It only truly lives in the imagination of the reader. Perhaps someone with enough time, money, and vision will succeed some day. I hope they don’t damage it.

Asimov died in 1992, the year I lived in Vancouver. He was on my mind as I had my first inkling of Geopense. Around 2000, still inspired by Asimov, I created an AI company with one main product. I avoided non-Asimov sequels to the story, and was slightly disappointed even with those penned by Asimov himself. They seemed a bit rushed and contrived. The only later book I liked and would recommend was the final one in the series, “Foundation’s Triumph” by David Brin. It had Hari Seldon as the central character amidst an epic search for that elusive utopia.

Over the years, I’ve often wondered if the book lady had lied about the price. I like to think she had. Asimovians are a resourceful lot. That’s one of the many reasons why they’ll win in the end, and a brighter future for humanity will dawn.