Institutional Citizen Science

Personnel motivation and esprit de corps have always been important in any organization. Citizenship and Corporate Social Responsibility (CSR) have sometimes become as important to the brand as the trade name or logo. For many reasons, it is wise to consider institutional citizen science.

Traditionally, participation in citizen science projects has happened at the individual level. That is, observations (e.g. ecosystem projects), identifications (e.g. galaxy classification), and computational contributions (e.g. protein folding simulation) have been made by individuals. People sometimes join teams of like-minded or geographically grouped participants, and their efforts are often reported or tallied as a team. However, there has been very little organizational-level participation. There are many potential benefits of institutional citizen science, particularly of the computational variety.

Employee engagement can be improved. IT staff can provide leadership in setting up the required infrastructure, even with minimal initial effort. They can provide ongoing maintenance, expansion, and IT efficiency improvements. They can learn a lot along the way. Communications staff can prepare and disseminate any required internal information, and again, learn a lot along the way. Seeing the daily progress of the organization’s participation can engage everyone. A well-run project can advance the cause of a more inspired, invigorated, enthusiastic, energized, and empowered staff with more of a sense of ownership of their organization. Progress in citizen science projects could be shown in a dedicated section of the organization’s intranet. A large screen in common areas such as the lobby or cafeteria could even show live content (e.g. simulations, animations, numerical results) and promote a sense of community. Management can simultaneously learn a lot about concepts such as computing as talent, cognitive computing, ‘gamification’, and integrating technology.

Institutional culture can benefit. Loyalty and pride in the institution are valuable assets. Leadership in ‘doing good’ is a strong motivator and has been a cornerstone of CSR for decades. There are opportunities for recognition and appreciation of both individual and team efforts. Both individuals and groups can suggest which projects to participate in from the large and growing menu available. Citizen science projects offer opportunities for people to think outside the box, to step out of their comfort zone, to consider more diverse possibilities, to form new partnerships, and to take the long view. A culture with all specialists and no generalists needs fresh air to breathe. The study of nature can offer a welcome break from politics and policy considerations, immediately and easily putting everyone on the same level: an observer.

Institutional innovation can benefit. Although it is tempting (when myopically studying spreadsheets) to farm everything out to consultants and sub-contractors, in-house innovation can be extremely valuable. Skunkworks (small teams for experimental projects) and Bimodal IT (production and exploration as separate yet symbiotic streams) can provide huge benefits, and citizen science is a natural skunkworks project. Notions of siloed knowledge and operation can be skeptically reviewed and perhaps even challenged without having to disrupt the larger organization. ‘What if’ models can move from pure theory to at least partial practicality. Distributed infrastructure is one example, and computational citizen science is all about distributed processing. Owning innovation, moving it vertically through an organization at the appropriate pace, and finally delivering it to the world can cultivate innovation itself as an asset. As the old proverb says: “Give someone a fish and they’ll eat for a day. Teach someone to fish and they’ll eat for a lifetime.”

Internal HR can benefit. Management and leadership talent can be identified and incubated in a non-threatening, non-competitive domain. Understanding the internal talent ecosystem is essential for the health and future scalability of any organization.

Governments can draw upon citizen science as well. On a regional or national scale, public policy can both encourage and benefit from an actively engaged citizenry. It becomes much easier to create a dialog around complex issues such as climate change, demographic change, disruptive technologies like Artificial Intelligence (AI) and automation, and general scientific and digital literacy when communication and participation run both ways. Agile, multi-disciplinary, multi-lingual, and age-spanning efforts are all increasingly valuable. A sparse and diverse population can come together in a unified effort without sacrificing that diversity, and perhaps even benefiting from it.

In the coming age of AI and Computer-Generated Imagery (CGI), there will be a tsunami of hoaxes, spoofing, and fake news. At best, mistaking such things for real content is embarrassing. At worst, these could represent an existential threat. The surest defense against these dangers is scientific literacy, both in the general population, and particularly within the organization. The first step in avoiding a trap is knowing of its existence.

There is also, of course, a direct benefit to scientific research. Citizen science is not a replacement for academic research; it’s an adjunct. Projects run under the supervision and purview of scientists benefit in several ways from citizen participation. There is an increase in resources, harnessing more labor (e.g. collecting data), human intelligence (e.g. categorizing images), and computational power (e.g. crunching numbers for simulations). There is an increase in scope, drawing from a wider pool of time, space, and experience. There is also an increase in public awareness of scientific research and methodology. Scientific research is its own reward, and is worth defending.

Finally, there are the usual advantages that come with economy of scale. By gathering the efforts of many individuals under one roof, much wasteful duplication is avoided. Looking at computational citizen science in particular, instead of having members individually set up and run their own ‘crunching’ computers at home, they can participate at the workplace or remotely (perhaps using the ‘cloud’). The performance per watt of one big machine is much better than that of many smaller ones. It also favors social interaction over isolation in the Internet age.

Organizational learning requires interaction and participation. Growth requires innovation. Basic scientific literacy improves objectivity and comprehension of a complex world. Computational thinking improves problem solving skills. Improved use of reason and logic for analytical thinking, deduction, and inference might result. These skills and attitudes may not be easily quantified or measured, yet they surely benefit the organization, especially in the long term. Learning becomes an organizational task and goal, resulting in a more knowledgeable enterprise as a whole. Improvements in individual skills, together with deeper and wider internal communication, go a long way toward that end. Diversity of learning styles, participation levels, and paces can be accommodated. The best organizations assign the ends, not the means.

“If you want to build a ship, don’t drum up people to collect wood and don’t assign them tasks and work, but rather teach them to long for the endless immensity of the sea.”
– Antoine de Saint-Exupéry

Forth: A Syntonic Language

Educators sometimes hold up an ideal of knowledge as having the kind of coherence defined by formal logic. But these ideals bear little resemblance to the way in which most people experience themselves. The subjective experience of knowledge is more similar to the chaos and controversy of competing agents than to the certitude and orderliness of p’s implying q’s. The discrepancy between our experience of ourselves and our idealizations of knowledge has an effect: It intimidates us, it lessens the sense of our own competence, and it leads us into counterproductive strategies for learning and thinking.¹
– Seymour Papert

People are sentient and we know that we are sentient. We are also social creatures. The human mind is aware of ‘agency’, in both itself and others. Since the beginning of recorded history, and probably long before, we have even ascribed agency to objects and events in the natural world. Once we attain consciousness, it becomes powerful and efficient to reuse this mind machinery to understand the world around us in familiar terms. Mirror neurons in the human brain enable us to read or infer intention in others. They do not merely form patterns of thought – they reflect them.

Syntonic is a word sometimes found in music theory, but in psychology, syntonicity means having a strong emotional connection with the environment. It is understanding something through identification with it. It is achieved by putting oneself “in another’s shoes”, and it is key to human learning. One form of this is imitation, and children do this all the time, with everything from trees and animals to teapots and airplanes. Another form is metaphor, where an established understanding is substituted for a new and unfamiliar thing. See Ancient Metaphors for examples from the IT world.

When we mature, many of us discover computer programming and would like to use the same built-in learning method that allowed us to develop ‘common sense’. However, most programming languages and their textbooks are so abstract, formal, and doctrinal that it’s nigh-on impossible to get very far this way. As a result, computational thinking recedes into the ivory tower, and the world is much worse off because of it.

This type of learning is well suited to therapeutic applications due to its natural, informal flow. Starting with the ‘hard wired’ facilities we all have, ‘bootstrapping’ (discussed below) is a very individual and non-doctrinal process.

An interesting and more in-depth exploration of syntonicity in programming languages was written by Stuart Watt.²

I have found Forth to be a very syntonic language.

Syntax and structure are the first face that we see of a new language. Some are fairly natural (e.g. BASIC), most are verbose and intricate, and some are at the extreme end of mathematical formality (e.g. Haskell). Forth is positively Spartan by comparison, with the majority of keywords being three letters or fewer, and the ‘: … ;’ pair providing most of the structure and context. Some long-time users of Forth do not even like its syntax and quickly override keywords (a trivial task in Forth), inventing their own to craft the language to be more in tune with their way of thinking. Ironically, this makes them love Forth more, not less. It is an extremely simple language. At its core, Forth is just a mechanism to reduce the cost of calling a subroutine to merely referring to it by name. But this is not what makes Forth syntonic.
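
For example (a minimal sketch in standard ANS Forth; the word names here are my own):

  : SQUARED ( n -- n*n )  DUP * ;   \ ':' opens a definition, ';' closes it
  5 SQUARED .                       \ prints 25
  : TIMES ( a b -- a*b )  * ;       \ dislike '*'? craft your own name for it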

“syntonicity is not directly a property of the syntax, or even the semantics or pragmatics of a language, but a property of the stories that we tell about it” (Watt p.5)

Forth is a stack-based, threaded, concatenative language with a tightly coupled interpreter and compiler. To explain these terms, and thus ‘tell a story of Forth’, we’ll use a metaphor:
Forth – a hiker (named ‘Arkady’ for convenience)
Stack – her backpack
Thread – the trail
Interpreter – her mind (neurology)
Compiler – her logbook

Notice that there is no clear distinction between traveler (Forth) and journey (program). Arkady’s journey is of more interest to her than is her destination. She is open to distraction, side-trips, and unexpected eventualities. Even mistakes are sometimes learning opportunities. This is the way of Forth – less focus on design and protocol, more on exploration, discovery, and emergence.

Life is not a problem to be solved, but a reality to be experienced.
– Søren Kierkegaard

While most languages are applicative (functions are applied to data), Forth is concatenative, which is a much more natural and emergent process. There are also few safeguards in Forth; the entire environment (machine) is wide open. This freedom is anathema to authorities of language design, which might partly explain why Forth-like languages are not more widespread. It’s much safer to clone an application from an existing template and/or framework than to invent a new one. Of course, the result is a copy, not an original. Incidentally, in much research funding, the outcome must be all but predicted in advance, along with careful metrics and timetables (ask anyone who has prepared a grant application). This squelches creativity and serendipity. It also puts a premium on positive results over negative (and perhaps greatly informative) ones. In fairness, just so we don’t make the mistake of assuming that everyone agrees that doctrine is a bad thing, there has been a large and successful effort to come up with a more standardized dialect, ANS Forth.

Her backpack (the stack) is a readily available, last in, first out (LIFO) pile of items that she adds or removes as required. As a good hiker, Arkady is always aware of what’s in her backpack. She keeps often-used items (tools, map, compass) near the top. Longer term necessities (food, tent) are towards the bottom.
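
In Forth terms, the backpack works like this (a minimal sketch; .S is the standard word that displays the stack non-destructively):

  1 2 3    \ push three items; 3 is now on top
  .S       \ displays: 1 2 3
  DROP     \ discard the top item, leaving 1 2
  SWAP     \ exchange the top two, leaving 2 1
  .S       \ displays: 2 1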

She walks along the trail (thread), step by step. Some steps are short and obvious, like avoiding big rocks and snakes. Some are more complex and subtle, such as crossing streams and staying downwind of bears. Many are actually composites of smaller steps. Some require more items from the backpack than others. At each step, she has the same backpack, although the exact contents may vary. She may sometimes just take the top few items, drop the pack, and head off down an interesting side trail. This is how a Forth program is built up. New ‘words’ are written to accomplish tasks. These words are then threaded together as higher-level words invoke a series of them. Eventually, a single, highest-level word is the starting point for the whole program, which is just a chain of subroutine calls.
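
A hedged sketch of that threading (all word names are my own invention): small words are defined first, then invoked by higher-level words, ending in a single top-level word.

  : SQUARE ( n -- n*n )    DUP * ;        \ a short, obvious step
  : CUBE   ( n -- n*n*n )  DUP SQUARE * ; \ a composite of smaller steps
  : HIKE   ( -- )  3 CUBE . ;             \ the single highest-level word
  HIKE                                    \ prints 27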

Using all of her neurology, from senses to brain to common sense, logic, and rationality, she ‘interprets’ her world. She learns new facts and skills as she walks, gradually ‘bootstrapping’ a deeper understanding of her environment and more powerful and efficient use of resources (food, energy, time). To learn if and how something works, she prods it and observes the results. If she finds an unfamiliar mineral, she could bring whatever tools she is carrying to bear. Or, she could invent an entirely new tool. She could craft a crude microscope from the magnifying lenses and rolled up map in her backpack. If she has time, a mass spectrometer may be suitable. In Forth, quick tests and trials are inexpensive and easy. There is no odious edit-compile-run loop to get in the way. For example, to find what OVER does, put a few numbers on the stack, type OVER, and examine the stack to see the result. Stumbling around with her nose in manuals and books would perhaps cause Arkady to miss a fossil or patch of berries. It is also a good way for her to become lunch for a mountain lion.
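
Concretely, in a standard Forth (OVER copies the second stack item to the top):

  1 2 3   \ stack: 1 2 3
  OVER    \ stack: 1 2 3 2
  .S      \ displays: 1 2 3 2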

A word about bootstrapping. Sometimes, when a skill is applied to produce a result, new knowledge (or a new tool) is gained in the process. This new knowledge opens the door to skill-set improvement or refinement, which enables hitherto impractical or unseen possibilities. In turn, even more new knowledge is obtainable, and the process repeats. The result is flint knife to bow and arrow to alchemy to chemistry to electronics to spaceflight. Bootstrapping is the essence of Forth. Sadly, multitasking, object-orientation, dissociation and data-hiding, and that ultimate chestnut, ‘leveraging’, are all much more in vogue today. Multitasking in human thinking is greatly overrated. Deeper, syntonic thinking enables more creativity and innovation. If Forth were more widely used and understood, you might well be reading this on Mars right now. If it had been invented earlier, you might be reading this on Triton.

From time to time, Arkady considers new observations, knowledge, thoughts, and ‘steps’ to be well-established and important enough to jot down in her logbook. This log serves as a permanent record of all she has learned on her journey. It is, in fact, a purified, corrected, ‘frozen concentrate’ of her trail.
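
Forth’s interpret/compile duality mirrors this. Outside a definition, words run immediately; between ‘:’ and ‘;’ they are recorded in the dictionary, Forth’s logbook (a minimal sketch; the word name is my own):

  2 3 + .          \ interpreted on the spot: prints 5
  : ADD-TWO 2 + ;  \ compiled into the dictionary for later journeys
  7 ADD-TWO .      \ prints 9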

Arkady eventually arrives at the end of her journey. It may have been cut short for various reasons, or perhaps her destination is different from (maybe even better than) the one she originally intended. This is a function of her curiosity as much as of weather or circumstance. In any case, she still has her logbook, for her or others to use as a future guide (be they human or machine).

Some would say that it’s wrong to argue for Forth going forward. After all, it’s an old language, and was originally created for control of machinery in an era of sparse computational resources. As programs and systems grow ever-more complex, capable, and intelligent, such a language has outlived its usefulness. However, I must disagree. The future is not just about frameworks, big data, and augmented reality. It will be at least as important to build new, ad hoc, ‘micro’ systems rapidly and locally in a time of dizzying acceleration of technology. Creating, testing, and problem solving starting from first principles will always be valuable.

Understanding the world requires thinking in new languages. Forth is to computer science what math is to physics. Computational thinking, syntonicity, scalability, learning-by-doing*, and good old human common sense are not headed for the dust bin of history any time soon. Even basic game narratives are best written in simple languages. Here are some further thoughts on learning.


* I am, however, perhaps not a very good constructionist, try as I might. I once tried (unsuccessfully) to acquire a minor Logo implementation. I believe in learning by doing something else! A wide, diverse search is often the most efficient (just ask the ants).
¹ Papert, S. (1980). Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books.
² Watt, S. (1998). Syntonicity and the Psychology of Programming. Psychology of Programming Interest Group 10th Annual Workshop.

The Asimovian Epoch

Centuries past are like the long night, a mysterious dream world. The modern, networked Earth is like daybreak, a world-wide awakening.

The fifty years 1945-1995 could be considered as a sort of ‘twilight zone’ between the two. The bulk of thinking and research done during this period is too new to be of any great historical or even deep nostalgic interest. Yet it is too old to be treated as current knowledge in the Internet age, where freshness and novelty are valued. In addition, we are still inside several of the epic transformations that began in this time. It’s difficult to appreciate and comprehend a revolution when you live within it. Our blind spot for this period is a great tragedy.

This period could be called the ‘Asimovian Epoch’. Here’s why.

Isaac Asimov (1920-1992) was perhaps the last great polymath. He was a biochemistry professor and author, publishing (writing or editing) over 500 books (covering 90% of the Dewey Decimal Classification system) and many thousands of shorter works. That’s an average rate of roughly one book per month over his long writing career – one of the greatest outbursts of creativity in history. His favorite topics included history, physics, comedy, and English literature. He was arguably the greatest science fiction writer of all time. He, Robert Heinlein, and Arthur C. Clarke are known as ‘The Big Three’ of the Golden Age of science fiction.

Asimov was a nexus of 20th century thought.

He was strongly influenced by Greek and Roman classics, Shakespeare, Gilbert & Sullivan, and also by contemporaries such as Kurt Vonnegut, Heinlein, Clarke, Harlan Ellison, Carl Sagan, and Marvin Minsky. He in turn influenced the likes of Paul Krugman (Nobel Prize in Economics), Elon Musk (SpaceX, Tesla), most science popularizers (happily now becoming too numerous to list), virtually all science fiction writers, and of course millions of readers.

Optimism came early to Asimov, perhaps because he spent so much time as a child in his father’s candy store. Many were initially drawn to science as teenagers by his optimistic and compelling vision of science and technology. Given the advances that 20th century science brought, such as space exploration, computation, automation, and microbiology, our debt to him on this account alone is inestimable. His masterpiece, Foundation, has deeply influenced many others, including me (see Future Psychohistory).

It has become fashionable for some to be suspicious of science and even liberal education. This was a trend that Asimov fought tirelessly against all his life. He warned against the false notion that democracy means that ‘my ignorance is just as good as your knowledge’.

He went even further, exploring the frontiers of individuality and self-education:

Self-education is, I firmly believe, the only kind of education there is.

Some even credit Asimov with predicting the Internet, especially its use for personalized education. He envisioned a world in which people could pursue their own interests, unfettered by the standard classroom, open to new ideas and serendipity.

Ironically, the very technology that he hoped would spur individuality and diversity has the opposite effect at times. One example is digitization and archiving of published material. There was a time when books were expensive, highly personal possessions that stayed with the owner for a lifetime, and were even sometimes handed down across generations. People would store letters, pressed flowers, and other snippets of life between their pages, and write personal margin notes. Books were not just repositories of language, they were time capsules and valuable historical accounts. However, the best book to scan is a pristine, un-personalized volume. Once it is scanned, future researchers increasingly use that single version exclusively. Individuality and diversity are squelched.

A pervasive basic understanding of science was Asimov’s constant goal. He often took the opportunity to describe science not as a storehouse of absolute truth, but rather as a mechanism for exploring nature. Science is not a dogmatic belief system. Learning happens when one asks questions of the universe and then listens attentively and objectively with the ears of science. He wrote eloquently on scientific models, logic, and inference. An example is this essay.

Of course Asimov is best known for his robot stories, and his ‘Three Laws of Robotics’. In 2010, the U.S. Congress established National Robotics Week in his honor. It’s possible that he saw artificial intelligence as a way for intellect to exceed the bounds of human bias, subjectivity, and mortality.

I never met Isaac Asimov, but I have spoken to several people who did have that pleasure. Each one related the profound impression he made on them. He died in 1992, but his books continued to be published posthumously. By the end of 1994, the World Wide Web Consortium had been founded at MIT. The dramatic rise of the Web marks the end of the Asimovian epoch.

Academia can sometimes degenerate into a vicious paper chase, with Ego in the front seat, Science in the back seat, and Humanity locked in the trunk. Asimov famously said,

The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny…’

Sadly, with hyper-specialization and the loss of interdisciplinary thinkers like Isaac Asimov, the catch-phrase of science has increasingly become: “Oooh! Shiny!”

Minecraft and STEM

Microsoft recently bought Mojang AB, the Swedish company that makes Minecraft, for a reported $2.5 billion. It was not Microsoft’s first attempt to woo Mojang, and many thought they would never be able to close the deal.

In 2009, Markus Persson (‘Notch’), an unknown independent Swedish game developer, released the first version of his creation, Minecraft. It was not a normal ‘game’. It had no defined sequence, no milestones or levels of achievement, and most curiously, no end goal. What it did have was a charming, minimalist world in which a player could ‘mine’ and ‘craft’ for hours (or days) (or weeks) according to their own desires. In fact, the game could be played without any other mobile agents (‘mobs’) at all, leaving an entire world to explore and terraform with no limits. In 2010, maps became truly infinite, growing automatically as the player’s avatar (‘Steve’) approached an edge. The fact that no one else ever has or ever will live in, or even see, your privately created world gives one a compelling, almost haunting feeling.

It reminds me of the 1967 Star Trek episode “Arena”. Captain Kirk is locked in mortal combat with a frightening alien creature on a deserted and barren world. He must find or craft weapons from whatever materials he can find. He finally succeeds by assembling the raw materials for a crude cannon. We have all imagined ourselves cast back to a more primitive environment, where our own imagination and gumption are nearly all we have at our disposal. Interestingly, the goal of many advanced Minecraft players is to populate their world with coffee shops, restaurants, and villages.

Over time, Minecraft has expanded to include many characters, tools, and landscape features, but it has always maintained its minimalist look and feel. That’s the secret to its success – this is a world that is understandable to everyone, from children right through to seniors who grew up before the computer age. While the richness and physics of this world have evolved, the quaint, blocky look of it hasn’t.

The game has since sold many millions of copies, inspired LEGO sets, books, films, and has a huge and active fan base on YouTube. Conventions are held regularly, and Notch achieved rock star status years ago. Minecraft is the very definition of viral Internet success, massively disrupting traditional corporate and marketing models. Without shrink wrap or advertisements, it took over the gaming world.

So, many folks were left scratching their heads over this Microsoft acquisition. On the surface (pardon the pun), it seems like an unlikely fit. However, I don’t see it that way at all. I see it as the happy marriage announcement of two dear old friends.

Around the turn of the millennium, I was a corporate trainer, with MCT & MCSD certifications (the ‘M’ is for Microsoft). I often found myself sticking up for Microsoft amidst the disdain of my colleagues. I would use Bill Gates’ early years as an example of out-of-the-box thinking, like the Tandy Model 100, the first true notebook, and the last machine that he programmed personally. The best Christmas present I ever got was DOS 3.0, and QuickBASIC was one of the best languages I ever used (and I’ve used 50 or more). Maybe Apple and others were more stylish, but it was Microsoft that really put the excitement into personal computing.

Microsoft hopes to gain leverage in the mobile and youth markets. I cannot think of any better vehicle than Minecraft. Far from seeing it as the death knell for Minecraft, I see Microsoft’s acquisition as a renaissance and a real chance for positive, imaginative, and compelling games to get the much larger piece of the pie that they deserve.

“If you talk about STEM education, the best way to introduce anyone to STEM or get their curiosity going on, it’s Minecraft.”
– Microsoft CEO Satya Nadella

Also, there’s Microsoft’s augmented reality headset – HoloLens. There appears to be a plan coming together:
Minecraft + HoloLens + ElementaryProgrammingSkills = Syntonic Learning

If this is what Microsoft has in mind, it could be the most disruptive technology yet. Well done Redmond, and best wishes to Notch in all his future endeavors.

Morse Code and Minecraft

Several clever contraptions have been implemented in the game Minecraft. These often include circuits and devices that mimic their real-world counterparts. They even include complete calculators and computer sub-systems (e.g. ECC memory and ALUs). Perhaps one of the more compelling inventions is the addition of Morse code operations. There are numerous implementations, all of which are interesting, and a few of which are excellent.

I have a particular fondness for Morse code. See “Morse Code and Me”.

In the 18th and 19th centuries, people often experimented with ‘semaphores’ and codes in an attempt to communicate over longer distances than the human voice could carry. With the rapid development of the theory and practice of electricity and magnetism in the early 1800s, the time was ripe for the telegraph. In 1837, Samuel Morse et al. demonstrated a working telegraph setup in America (there were earlier versions in Europe). The principle was that codes tapped on a mechanical key were converted to electrical signals, then sent along a wire to another key at the other end, which repeated the code. Since this transmission happened at almost the speed of light, communication at a distance became essentially instantaneous. Later refinements to and extensions of this principle led to the telephone, radio, television, and of course the Internet.

Why is Morse a good fit for Minecraft? Basically, it is in keeping with Minecraft’s quaint and minimalist nature. Crafting rudimentary equipment and stringing telegraph wires suits this game (it’s simple and fun). Minecraft already has support for elementary circuits and power via “Redstone”. Morse code is at home in a rustic world of dirt, stone, and wood. You can’t get much more binary and ‘blocky’ than a string of dots and dashes. In contrast, popup chat windows are harshly anachronistic. Telegraphy also fits in well with Minecraft’s ‘bootstrapping’ model; you don’t absolutely require it, but it speeds gameplay and opens up new possibilities once the technology is achieved.
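
To make the ‘binary and blocky’ point concrete, here is a hedged sketch in ANS Forth of translating letters to dots and dashes (a partial table only; the word names are my own, and a real version would cover the full alphabet):

  : .MORSE ( char -- )        \ print the Morse for one letter
    CASE
      [CHAR] S OF ." ..." ENDOF
      [CHAR] O OF ." ---" ENDOF
      [CHAR] E OF ." ."   ENDOF
      ." ?"                   \ default: letter not in this toy table
    ENDCASE SPACE ;
  : SOS ( -- )  [CHAR] S .MORSE  [CHAR] O .MORSE  [CHAR] S .MORSE ;
  SOS   \ prints: ... --- ...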

Educational use of Minecraft benefits greatly from this bootstrapping model. Finding, crafting, and mastering tools and gadgets, and thus learning as you go is half the story. Exploring endlessly rich and surprising 3D worlds is the other half.

Also, learning a language is never a bad thing, and Morse code is one of the simplest ‘languages’ there is. Telegraphy is laden with historical significance, another opportunity for learning.

What are the possible uses for telegraphy within Minecraft? First, as I said, it’s a more primitive alternative to text chat windows in multi-player modes. Next, it would enable the transfer of information across distance, for example a crude sort of ‘fax machine’ for maps, books, signs, etc. Next, automated beacons could be built; perhaps they might even show up on maps. Next, remote control of processes and apparatus would extend the player’s reach. Finally, short messages could be written right into the landscape (for example, dirt for dots and stone for dashes). How about time capsules or ‘notes to self’?

So, Morse code would fit very well into the Minecraft world and mindset, with many novel and fun uses. The educational benefits of this addition would be welcome and easily realized.

Concatenative Biology

There are several ways to categorize programming languages. One is to distinguish between applicative and concatenative evaluation. Most languages are applicative – functions are applied to data. In contrast, a concatenative language moves a single store of data or knowledge along a ‘pipeline’ with a sequence of functions each operating on the store in turn. The output of one function is the input of the next function, in a ‘threaded’ fashion. This sequence of functions is the program. Forth is an example of a concatenative language, with the stack serving as the data store that is passed along the thread of functions (‘words’). “Forth is a simple, natural computer language” – Charles Moore, inventor of Forth.
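
A minimal sketch of the contrast (word names my own): an applicative language nests inward, square(double(5)); concatenative Forth writes the data first and lets it flow left to right through the pipeline.

  : DOUBLE ( n -- 2n )  2 * ;
  : SQUARE ( n -- n*n ) DUP * ;
  5 DOUBLE SQUARE .   \ the stack carries 5 -> 10 -> 100; prints 100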

One of the great advantages of concatenative languages is the opportunity for extreme simplicity. Since each function really only needs to know about its own input, machinery, and output, there is a greatly reduced need for overall architecture. The big picture, or state, of the entire program is neither guiding nor informing each step. As long as a function can read, compute, and write, it can be an entity unto itself, with little compliance or doctrine to worry about. In fact, in Forth, beyond the stack and the threaded execution model, there’s precious little doctrine anyway! Program structure is a simple sequence, with new words appended to the list (concatenated). The task of the programmer is just to get each word right, then move on to the next.

In nature, the concatenative approach is the only game in town. Small genetic changes occur due to several causes, random mutation being one of them. Each change is then put through the survivability sieve of natural selection, with large changes accumulating over large time scales (evolution). (Evolution is active across the entire spectrum of abstraction levels. Hierarchies emerge naturally, not through doctrine or design.) Concatenation is the way by which these small changes are accumulated. Much of the epic and winding road of human evolution is recorded in our DNA, which is billions of letters long.

This process can be seen happening right now in molecular biology. Consider the ribosome. This is the little machine inside a cell that reads a piece of RNA (a chain of nucleotides) and translates it into a protein (a chain of amino acids). There is no Master Control Program assigning names, delegating work, and applying functions. There is only a concatenative program, built up over the ages by evolution. So, basic life uses a fairly powerful and successful form of computation: DNA for storage, RNA for code, ribosome for computing, protein for application.
(and natural selection for testing) 🙂
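
As a playful, hedged sketch (toy Forth, not real bioinformatics): each stage is a word that reads its input from the stack, computes, and leaves its output for the next, and the ‘program’ is just their concatenation.

  : TRANSCRIBE ( dna -- rna )       1+ ;     \ stand-in transformation
  : TRANSLATE  ( rna -- protein )   2* ;     \ stand-in transformation
  : FOLD       ( protein -- shape ) DUP * ;  \ stand-in transformation
  : EXPRESS    ( dna -- shape )  TRANSCRIBE TRANSLATE FOLD ;
  7 EXPRESS .   \ 7 -> 8 -> 16 -> 256; prints 256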

We flatter ourselves when we talk of our ‘invention’ of levers, gears, factories, and computers. Nature had all that stuff and much more long before we ever came down from the trees. Math, engineering, and science are great not because of their products, but rather because they enable 3-pound hominid brains to explore nature and ponder the possibilities.

Ancient Metaphors

New nations often use comparisons and metaphors to build a more familiar and solid framework or ‘story’ using the past, especially to evoke visions of ‘glory days’. One example is the symbolism of early America harkening back to ancient Greece and Rome. The founding fathers often learned Greek, Latin, and much of the history of those cultures in their formative years. We can still see that influence today in the architecture of government buildings in capital cities.

It should come as no surprise, then, that the same thing happens with computers and the insatiable need for new descriptors and naming conventions. Much of this framework has sprouted up within the last few decades. That isn’t enough time for familiarity to grow organically. Past language and literature provide a rich crop ready for immediate harvesting.

Here are two examples:

Beowulf

This epic poem is one of the earliest known pieces of vernacular English literature. The story is about a brave hero who slays monsters in the north of Europe sometime around the 6th century. It is a tale of great strength, difficulties overcome, and distances traveled. The fame of this poem (especially the Old English version) has grown steadily over the last two centuries. It has been retold in literature, popular media, and cinema. This provides a ready-made metaphor that can be used by something new.

In 1994, Thomas Sterling and Donald Becker built a computer while working at a NASA contractor. It was the prototype for what would become the “Beowulf Cluster” architecture. This type of machine uses modest, commodity, even oddly matched PCs connected together to form a single parallel computer. It has acquired a rather stoic and renegade reputation in academia, where it has been used to gain entrance to the exclusive supercomputer club ‘on-the-cheap’. The name is said to have been inspired by a line in the epic poem: “thirty men’s heft of grasp in the gripe of his hand”.

Inferno

Another old epic poem is Dante Alighieri’s “Divine Comedy”. It is an allegory for the soul’s travels and a map of 14th century spirituality. It is also a multi-dimensional exploration of sensitive/rational human nature and psychology. Again, this work is one of the most studied pieces of literature in history. It is another ready-made treasure trove of meaning and metaphor.

The first of its three parts is “Inferno” (hell). It is graphic and horrifying. It is also strangely open-minded and thoughtful in that figures from classical literature are woven into a Christian narrative. These include Socrates, Plato, and indeed Virgil, Dante’s guide. Perhaps the most apt adjective for “Inferno” is: deep.

“Inferno” was recently used as a template for a “Software Inferno” describing “sinners against software” and their torments (1). By using this template, the author begins with an extensive toolbox of metaphors. Of course, neither “Software Inferno” nor this short blog post should be used as an authoritative representation of Dante. There are plenty of good academic sources for that.

Instead of Virgil, Bell uses the Countess of Lovelace as our guide. More commonly known as Ada Lovelace, she was a mathematician and the daughter of Lord Byron, and is widely considered the world’s first computer programmer for her work with Charles Babbage. She leads our hapless traveler down through the Inferno, towards the Earth’s core, along a roughly cone-shaped set of circular slices:

  • ANTE-INFERNO developers who couldn’t decide whether to use their skills for good or evil, denied entry to both Heaven and Hell, doomed to eternal obscurity
  • LIMBO developers born too early in history to know of proper software engineering
  • LUST developers who chose power and fame over commitment and responsibility, they gave their time to their computers instead of their family, they sought the spotlight, now they are blinded by its glare for eternity
  • GLUTTONY developers with human heads and pig bodies, IPO and stock bubbles had led them into wasteful, gross over-consumption
  • GREED IT types who gathered every crumb of profit for themselves, neglecting to share anything with the people who got them there, condemned to a cacophony of falling pennies forever
  • ANGER realm of vitriol and negativity, with inhabitants broiled by burning paper, the only water available had been poisoned just like their own workplaces
  • HERESY developers who knew of good software engineering, yet refused to use it, now chaotically and aimlessly going their own way for all time
  • VIOLENCE developers who had inflicted malware, viruses, and phishing on the public, this pestilence now torments them in turn, and even sleep offers no escape
  • FRAUD techno-hucksters, advocates, evangelists who had foisted crappy software, standards, and models upon desperate users who longed for guidance, they now dwelt in sewage for all time
  • TREACHERY those who had enabled and encouraged this fraud, doomed to keep shoveling this detritus forever

Although intended for an audience of computer developers, the macabre humour is widely understood thanks to the template provided by Dante.

Here are a few other names for computer technology borrowed from the past:
Apollo, Athena, Delphi, Hercules, Hermes, Homer, Hydra, Jason, Janus, Merlin, Midas, Oracle, Odyssey, Phoenix, Phoebe, Pegasus, Sisyphus, Tantalus, Troy, Ulysses, Valhalla, Valkyrie, Zeus
… and of course a whole host of planets, moons, stars, and constellations.

(1) Bell, A.E. (2014). The Software Inferno. Communications of the ACM, 57(1), 48-53.

Morse Code Therapy

Some years ago I had speech therapy for dysarthria. The therapy consisted mainly of breathing and vocal exercises, aimed at strengthening and enhancing the many components of physiology involved in speech. It was very successful and the professionals who helped me have my eternal gratitude. Along the way, I noticed that I was having some small trouble with initiating speech. Although a thought would form quickly and lucidly in my mind, actually triggering the vocal machinery to convert it into spoken words was sometimes problematic. It was like using a computer with a slow, damaged printer as the output device. Having been a computer designer and programmer for many years, it seemed to me that what was needed was a second output device, perhaps even a full two-way communications channel.

It’s certainly nothing new for people to compensate for one shortcoming by drawing upon one or more other senses or faculties. For example, there is Braille for the visually impaired and sign language for the hearing impaired. Even for someone without any such disability, learning a new language has profound benefits.

“To have another language is to possess a second soul”
– Charlemagne

If I could open such a second communications channel, I could link my mind to the outside world while bypassing speaking altogether. This would allow me to work on both problems, the dysarthria and the imperfect speech initiation, as two processes in parallel (my inner geek again). However, another spoken language would not help here, and sign language required another person for learning and practice. Additionally, I wanted to keep it in the audio realm because working on neural audiology would reinforce my other therapy. I needed something that was audio-based, flexible enough for real communication, and simple enough to learn quickly, perhaps even with automated help.

Back in high school, I had belonged to the amateur radio club, mainly due to my interest in electronics. We built circuits, from oscillators to amplifiers to modest radio transmitters and receivers. The really keen members of the club (which did not include me) went so far as to obtain their amateur radio license. One of the requirements for this was proficiency in Morse code, that dih-dah-dih beeping from a bygone era sometimes featured in movie plots from the Old West to air and sea.
Aha – I had found the answer.

I assembled a few resources – a Morse code chart, an inexpensive key, an audio oscillator and speaker, and even a few text snippets encoded as Morse audio files from the Internet. Surprisingly, a bit of my old familiarity with Morse had stayed with me across the years. I keyed a few short phrases and found to my great joy that there was no delay or difficulty in initiating them. It also provided good fine motor control exercise for my right hand – another therapy which I needed but had not taken up as yet. I quickly improved at both writing and reading Morse, and soon began tapping out thoughts even if a pencil or fork was all I had available.

After a few weeks, I noticed a marked improvement in my speech initiation too, although I cannot say for sure whether this was the result of Morse practice or normal therapy and recovery. I like to think it was a combination of both. It could also be that any kind of learning*, even just passive reading, is beneficial during this type of recovery.

The biggest surprise is the one that comes with any new comprehension of previously unknown words or language. Now, whenever I hear some innocuous Morse in the background of a film, I read it without even trying; usually it’s meaningful, but occasionally, it’s just gibberish!


“I see no reason why intelligence may not be transmitted instantaneously by electricity”
– Samuel Morse

We sometimes give ourselves too much credit for all that’s new and shiny. Staying with telegraphy, here’s a prediction of cell phones (in regard to Marconi’s “Syntonic Wireless Telegraphy”) from 1901!

* I found a few other tools helpful in the course of my rehabilitation. The concept of ‘syntonicity’ was important, that is, the idea that learning is a subjective, experiential process. I revived my old interest in the Forth programming language, as well as playing ‘management’ games like Minecraft, SimCity, and Civilization. I’ve loved this genre ever since playing a version of Hammurabi written in line-numbered BASIC back in the early 1980s. Exploring, balancing resources, making strategic decisions, and planning for the near and far future are tremendous therapeutic tools.

Scientific Models

I get nervous when people enthusiastically advocate models. This happens mostly, though not exclusively, in sociopolitical debates such as those over climate change. A scientific model is not a belief system. All those who claim that science is on their side might better consider whether they are on science’s side. Most scientific models, especially statistical ones, are ways to measure and predict the behaviour of a bounded system (a subset of nature). Such a model does not claim to be the final, fundamental, and ultimate description of nature itself. It’s important not to confuse the map with the territory, since rabbit holes rarely appear on maps.

In fact, even a major shift in the underlying paradigm may correspond to only a small change in a model’s accuracy. A very good essay on this, “The Relativity of Wrong”, was penned by Isaac Asimov. In short, he argued that models may be updated with more subtlety and scope, but they are not summarily discarded as simply and absolutely wrong. Long after we replaced force with space-time geometry in our gravitational models, we still design elevators with Isaac Newton looking over our shoulder. We are all very much still ‘Flat Earthers’ when we plan a walk (unless that walk will take us across a time zone boundary or we sneak a peek at our GPS location along the way).
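
To put a rough number on the elevator point (my own back-of-envelope estimate, not Asimov’s): the fractional general-relativistic correction to Newtonian gravity near the Earth’s surface is on the order of

$$\frac{GM_\oplus}{R_\oplus c^2} \approx \frac{3.99\times10^{14}\ \mathrm{m^3\,s^{-2}}}{(6.37\times10^{6}\ \mathrm{m})\,(9.0\times10^{16}\ \mathrm{m^2\,s^{-2}})} \approx 7\times10^{-10},$$

far below any tolerance an elevator designer cares about.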

In this view, science is models and observations only. A theory is only a specific model, not a deep insight into the true nature of reality. Nature simply is. That’s it. Our beliefs about its fundamental structure are illusions. The meaning of observations is model dependent, but the observations themselves are not. Antares is in the same position in the sky, whether it is a red supergiant or “the heart of the scorpion”.

It should be noted that not everyone agrees with this view. For example, Thomas Kuhn argued that newer models are not just ‘better approximations’ (straw man alert) and that it is a serious mistake to teach that they are. He argued that consensus of the scientific community is also a component of scientific truth. I don’t happen to share that view, but then as I consider consensus to be irrelevant in science, my opinion means naught anyway. Of course paradigm shifts and revolutions do happen, but they are artifacts of history and psychology, not objective science. It’s a fallacy to define science as being only the set of all scientists, past and present. Again, there’s that map and territory confusion.

Raw predictive power is not the only metric for judging a model. Probably the most universal heuristic is agreement with observation. Many a compelling model has been ruined by the facts. Mathematical beauty, symmetry, and ‘feel’ are not important. Elegance and simplicity are helpful, perhaps even necessary, but not sufficient conditions for a good model. Enthusiasm for a model is a strong cause for suspicion. Humility before nature is a much better heuristic. Darwin and Planck had to be dragged along reluctantly by their own theories. Models can incorporate ‘tricks’ that are obviously not part of reality. For example, adding an equal ‘negative mass’ allows one to easily calculate the centre of mass of a plate with a hole in it. Perhaps the best thing a model can do is not just answer questions, but ask even better ones. Heliocentricity and the germ theory of disease launched tremendously fruitful new science.
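
Returning to the negative-mass trick, a worked example (a standard textbook device, with my own symbols): for a uniform plate of radius $R$ with a hole of radius $r$ centred a distance $d$ from the plate’s centre, superpose the full plate and a ‘negative’ disk at the hole:

$$x_{\mathrm{cm}} = \frac{(\sigma\pi R^2)(0) + (-\sigma\pi r^2)(d)}{\sigma\pi R^2 - \sigma\pi r^2} = -\frac{r^2 d}{R^2 - r^2},$$

where $\sigma$ is the areal density. The centre of mass shifts away from the hole, exactly as intuition demands.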

Nature is a realm of computation and evolution on an unimaginable scale (quite literally). Local agency leads to emergent complexity. Mathematics is one of the tools that can allow a vastly simplified model of nature to be stored in a three-pound hominid brain. However, daisies and snails know nothing about mathematics. Anthropomorphization of nature is not science. Consensus and comfort are not science. Clinging to teddy bears is really cute in youngsters, not so much in adults. The good news is that nature is far, far, far more wonderful than anything any individual or advocacy group can come up with (including well-crafted teddy bears). You just have to let objectivity take the wheel.

Worth Defending

In ancient times, love of wisdom had a name – “philosophy”. That context brings to mind sun-drenched hilltops in Greece and great libraries like Pergamum and Alexandria. Scholars were few, ‘pure’, supported by patrons or states, and breathed the rarefied air of privilege.

In modern times, we talk of “science”. There are many, many scientists around the world. They work on increasingly specialized research, and increasingly on applied science. They are largely supported by a vast international academic system that is quickly merging with complex economic systems. In fact, direct support from the commercial and corporate world is becoming the norm. Many scientists now breathe the chemical soup of industrialization.

The general public uses technology at the highest rate in history. There are billions of transistors for every person on Earth, and the number of cell phone accounts is roughly equal to the world’s population.

Technology is almost becoming synonymous with science. “Techno-Science” has reached a level of acceptance and respect that was previously reserved only for religion. Even many of those who remain scientifically illiterate hold science in mystical reverence.

This is the greatest tragedy of all.

The death of pure science directly implies the death of human civilization. We will either be enslaved in a dystopian future or simply be replaced by artificial intelligence. It ends not with a bang, but a whimper.

In 1969, Robert R. Wilson, an advocate of a new particle accelerator research facility that later became Fermilab, was testifying before the Congressional Joint Committee on Atomic Energy in Washington, DC. His exchange with Senator John Pastore has become legendary and can be found in many places on the Internet, such as here.

  • Senator: “Is there anything connected with the hopes of this accelerator that in any way involves the security of the country?”
  • Wilson: “No sir, I don’t believe so.”
  • Senator: “Nothing at all?”
  • Wilson: “Nothing at all.”
  • Senator: “It has no value in that respect?”
  • Wilson: “It has only to do with the respect with which we regard one another, the dignity of men, our love of culture. It has to do with those things. It has to do with, are we good painters, good sculptors, great poets? I mean all the things that we really venerate and honor in our country and are patriotic about. It has nothing to do directly with defending our country except to help make it worth defending.”

There’s little I can add to that. Thomas Huxley (“Darwin’s Bulldog”) once said,

“the scientific spirit is of more value than its products”