“There are three things that we need to focus on as a growing organization: scalability, scalability, and scalability”. If you’re like me, you’ve heard that line at least once. The pace of modern social and technological change is staggering. Most of us struggle just to maintain balance and not be swept away by the tsunami. This is clearly visible in the nurture and guidance of a commercial startup, a new initiative in government, or really any group effort at expansion and/or adaptation. Perhaps the biggest challenge is to grow smoothly without exploding or imploding — put simply, to scale.
There are two main modes of strategic thinking: Deductive and Inductive. Deductive thought employs logic and rationality in order to understand or even predict trends and events. Inductive thought observes the emergence of the complex from the simple to do the same. Both can draw on a mix of mathematical/statistical/historical/computational analysis, although the specific mix is often quite different for the two modes. Both have the aim of making better, more informed decisions. Somewhere in between the two, like the overlapping area of a Venn diagram, is the concept of syntonicity. This is the ability to at least temporarily move one’s mindset to another place/time/perspective. It requires imagination, which is definitely, though perhaps not exclusively, a human capability.
My own area is mostly computational analysis. The vast seas of data available today long ago swamped human capabilities and now require the mechanical tools of automation. The timeline is roughly: gears and rotors (e.g., the Antikythera mechanism and Pascal calculator) to the Jacquard machine (punch cards) to digital computers (vacuum tubes to transistors to Large Scale Integration to distributed computation) to quantum computers and beyond. Along the way, formal concepts were developed such as algorithms, objects, feedback (e.g., cybernetics), and artificial intelligence. Many tools and languages have been developed along the way (then mostly outgrown), with ages and fashions passing by like scenes from an old Time Machine movie. Those of us who have enough years, enough curiosity, and enough patience have remained engaged in the movie over the long haul. Simultaneously futurists and dinosaurs, I guess. The red plastic case of my childhood trusty pocket radio proudly boasted of its “3 Transistor” design. Like most others, I now carry a smartphone that has a billion times that many. That’s modern life — we must scale by orders of magnitude between cradle and grave.
How can the human mind grapple with this much scaling? We evolved to find food and avoid predators in grasslands, not to hop among sub-atomic particles or wander inter-galactic space. How can we explore and comprehend the world all the way from quantum mechanics to femtosecond biomolecular reactions to bacteriophages to cellular biology to physiology to populations to geology to astronomy to cosmology?
Things on a very small scale behave like nothing that you have any direct experience about. They do not behave like waves, they do not behave like particles, they do not behave like clouds, or billiard balls, or weights on springs, or like anything that you have ever seen.
– Richard P. Feynman
Most programming languages are quite horrible at scaling. Scaling down is nigh-on impossible because languages have evolved to be ever bigger. Even those that claim to be lean and elegant usually require vast libraries and modules to do anything useful. Scaling up is normally accomplished by bolting on new data types, functions, and capabilities: objects, functional programming, and exotic techniques such as machine learning. This makes the language ever bigger and often more opaque. Such standardization and doctrine come at a heavy price. Much architecture and machinery is present and required ‘right out of the box’. Methodology is either implicit or actually enforced through templates and frameworks. This provides tremendous leverage when working at the human scale of activity. It squelches innovation, though, when one moves too far down or up in scale.
One of the computer languages I learned and used early on was Forth. Although I have discarded it several times over the last nearly five (!) decades, I keep coming back to it. It is a very natural, almost biological, language. I have also found it to be a very syntonic one. A crude definition of syntonicity is ‘fitting in’, or ‘when in Rome, do as the Romans do’. This is the key to scaling the applicability of human thought.
At its heart, Forth is incredibly tiny. It’s essentially a method for calling subroutines simply by naming them. It has a simple model for storage management and sharing: the stack. The entire interpreter and compiler can be implemented in several hundred bytes. Perhaps most importantly, it can be learned, remembered, and used without a bookshelf full of manuals and references.
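To make that concrete, here is a toy sketch of the idea in Python (not real Forth, and nowhere near as compact as a true Forth kernel): words are just named subroutines, the interpreter calls them by name, and everything communicates through one shared data stack. All names here (`interpret`, `words`, `square`) are my own illustrative choices.

```python
# Toy Forth-style interpreter sketch: words are subroutines called by
# name; numbers are pushed onto a single shared data stack.
stack = []

def interpret(source, words):
    """Execute whitespace-separated tokens: known names call, the rest push."""
    for token in source.split():
        if token in words:
            words[token]()            # a word is just a subroutine, called by name
        else:
            stack.append(int(token))  # anything else is a numeric literal

# A few primitive words; a real Forth builds most of itself from such primitives.
words = {
    "+":   lambda: stack.append(stack.pop() + stack.pop()),
    "*":   lambda: stack.append(stack.pop() * stack.pop()),
    "dup": lambda: stack.append(stack[-1]),
}

# Defining a new word from existing ones, as Forth's ": square dup * ;" would.
words["square"] = lambda: interpret("dup *", words)

interpret("3 square 4 square +", words)
print(stack[-1])  # 25
```

The whole mechanism fits in a screenful, which is the point: there is no syntax tree, no type system, no framework, just a dictionary of names and a stack.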
Thusly armed with the smallest possible computational toolkit, the freedom to think is restored. Noses can be raised from the programming manuals. Researcher-programmer meetings can be cancelled. The horse can be put back in front of the cart. One can focus on grokking (grasping syntonically) the environment, physics, and inhabitants of the new scale.
Of course, I’m not advocating Forth to be used for things like massive data manipulation, replacing tools like SQL or NoSQL. Concurrency, automated inferencing, and vast interoperability are somewhat beyond Forth’s capability (though, surprisingly, not entirely). Such tasks usually demand teamwork, and Forth is not a team language. It’s more suited to individual thought and exploration.