The Three Amibos Good-Regulator Tutorial

Copyright 2010 by Daniel L Scholten

      

 

Click here to open the Animation Panel

(Note: for best results, click first with your right mouse button and select the “Open Link In New Window” option.  Then use the Alt-Tab key combination to toggle back and forth between the Animation Panel and this tutorial text).

 

Contents


Introduction

Part 1: Systems, Regulators and Models

Systems

Summary

Regulators

Goals and Preferences

Summary

Using Probability To Describe Amibo (i.e. System) Behavior

Summary

Regulation and Surprise

Summary

Models

Summary

Part 2: What The Good-Regulator Theorem Tells Us About How The World Really Works

Summary

Part 3: Putting It All Together

Regulation, Expertise and the Learning Imperative

Summary

Part 4: Modelophilia!

Summary

Conclusion: The Good-Regulator Theorem and a stable, sustainable society



Back to top

Introduction

The purpose of this tutorial program is to help you enrich your understanding of a fundamental law from the system sciences known as The Good-Regulator Theorem.[1]  This principle, which states that “Every good regulator of a system must be a model of that system”, is a conceptual tool that is indispensable to anyone charged with the task of designing, building and maintaining successful system regulators.

And what does this have to do with you?  Perhaps you think that such people only walk around in white lab coats, work for NASA and spend their free time reading technical journals.  Well, nothing could be farther from the truth.  In fact, it may surprise you to realize that you are exactly such a person.  Even if you don’t see yourself this way (yet), I assure you that you are 100 percent responsible for the design, construction and maintenance of at least one really, really important system regulator: you.

That’s right!  You are a true system regulator.[2]  The “system” you regulate is actually a vast and intricate complex of interacting sub-systems, the exact nature of which varies widely from one person to another, but which certainly includes most of the following:

·        Your own body and its subsystems (cellular, digestive, respiratory, endocrine, nervous, immune, etc.),

·        Your immediate environment (air temperature and pressure, oxygen level, humidity, noise level, light, smell etc.)

·        A complicated network of other people (family, friends, teachers, bosses, co-workers, shop-keepers, police officers, doctors, etc.), each with a unique personality and set of expectations regarding your role and responsibilities in the world,

·        A mind-bending system of signs, symbols, and representations (language, mathematics, traffic signs, advertisements, school books, legal documents, computer desktop icons, restaurant menus, sculptures, grocery lists, telephone numbers, rule books, songs, recipes, instruction manuals, software user agreements, train schedules, tax returns, license plates, etc.)

 

 In other words: your life.  Make no mistake about it, your life is a system and you are the regulator of that system.  Nothing else can do the job.  And like I said, you are 100 percent responsible for the design, construction and maintenance of that regulator, i.e. yourself.  If you aren’t responsible for you, then who is?  It is, after all, your life.  No one cares about it the way you do.  No one else is in a better position than you are to know what is going on in that system. Of course, this responsibility we bear for the regulation of our own lives has almost nothing to do with how competent we may be to actually fulfill that responsibility.  Even if we are the worst regulator-engineers who ever lived, we are still responsible for regulating our own lives.

I know: it is a scary thought, but we all have to deal with it.

Now for the good news: we are born already “knowing” how to do the vast majority of all this regulator design, construction and maintenance.  This “knowledge” is permanently encoded in the microscopic knots of DNA that are hidden in every cell of our bodies.  From the moment we are conceived and throughout the rest of our lives our genes perform the vast majority of the actual work of all this regulator-engineering.

Isn’t that great?  We are all charged with this heavy responsibility, but we don’t actually have to learn or do all that much (consciously) in order to fulfill it fairly well.  We certainly don’t have to become neurologists or even read a biology textbook. What’s more, our built-in level of design competence isn’t just high, it’s extremely high.  That’s right! We are born expert regulator-engineers.

Earlier I said that the purpose of this tutorial program is to “enrich your understanding.”  I say this because in a sense you already know quite a lot about the Good-Regulator Theorem, although you might not realize it or know that it is actually a scientific law.  One way to paraphrase the Good-Regulator Theorem is as follows: “A good regulator of an environment is a representation of that environment”.  In a sense, your brain already “understands” this and so it is constantly making and updating 3-D multisensory representations of your immediate environment.  Everything you see, hear, feel, taste or smell is actually your brain’s model of your immediate environment, and the whole purpose of that model is to help you regulate that environment.  Of course, you don’t experience these models as mere representations of Reality.  You experience them as Reality.  I know this may sound like so much New-Age mysticism, but it isn’t.  This is really the way your brain works.  It manufactures multisensory representations of the world in real-time and then uses those representations to regulate that world. 

But this isn’t the only sense in which you are already familiar with the Good-Regulator Theorem.  Every time we make a grocery list, navigate through a city with a street-map, or use the instructions to assemble a piece of furniture, we are also demonstrating a kind of “understanding” of the Good-Regulator Theorem.  A common thread in these kinds of activities is our understanding that they will go more smoothly if we use some sort of representation (list, map, assembly instructions) to guide our behavior.  As the Good-Regulator Theorem tells us: all of the best regulators are models (i.e. representations).

So, you already understand a thing or two about the Good-Regulator Theorem, although you might not realize that you do.  This is a gut-level and purely practical sort of understanding that we demonstrate through our behavior, similar to the way a bird demonstrates its “understanding” of the principles of aerodynamics whenever it flies, or a house cat demonstrates its “understanding” of communication whenever it meows at its master for food.  But such gut-level forms of understanding can only take us so far.  A bird’s “understanding” of aerodynamics would never produce a jet-craft.  Even a relatively simple task such as ordering a gourmet meal of broiled salmon in a restaurant requires a much higher level of understanding of communication than could ever be grasped by a cat.  Likewise, although our basic practical understanding of the Good-Regulator Theorem (really, its central gist) serves us just fine in most day-to-day situations, we are surely held back in our more ambitious and complex pursuits to the extent that we try to rely on a merely intuitive understanding of the Good-Regulator Theorem.

To achieve an enriched, high-level understanding of the Good-Regulator Theorem is to gain possession of a conceptual power-tool.  Of course, it is a conceptual tool, which means that it can’t be used for things like printing fresh cash or even just fixing flat tires.  Rather, it helps us to organize the “blooming, buzzing confusion” of the world,[3] and thus prepares a useful conceptual foundation that makes possible effective concrete action.  It accomplishes this by defining a “conceptual box”, a particular category of especially useful representation: the Good-Regulator Model.  And like a lecturer’s laser-pointer, the theorem focuses our attention on this conceptual box and explains in precise, no-nonsense terms the value of what that box contains.  And although ultimately the Good-Regulator Theorem will also raise as many questions as it answers, it illuminates the value of those questions, inspires us to find the answers, and, most importantly, gives us a useful description of what the answers will “look like”[4] once we find them.  It is always easier to find something when you are properly inspired to find it and know what it looks like.

As you will see later in this tutorial, Conant and Ashby’s Good-Regulator Theorem is actually a direct cultural analog to the now famous Crick and Watson double-helix model of DNA.  And just as that model of DNA has transformed our understanding of Life, the Good-Regulator Theorem can transform our understanding of Human Culture.  With the help of this tutorial, you will come to see that the Good-Regulator Theorem can be paraphrased as follows:

To the extent that you wish to optimally regulate in the simplest way possible any system that confronts you with a set of distinct, recognizable situations, then whenever you are confronted with any given one of those situations, you must always do the same thing. Conversely, to the extent that you don’t always do the same thing in any given situation then either you will cause instability or else there will be a simpler way to cause the stability you have managed to achieve.
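That paraphrase can be illustrated with a small simulation.  The following Python sketch is purely illustrative (the situations, actions and function names are invented for this tutorial and are not part of Conant and Ashby's formalism): it compares a regulator that always does the same thing in a given situation with one that varies its response, and tallies how often each keeps the outcome stable.

```python
import random
from collections import Counter

# Hypothetical toy world: the "system" presents one of three situations;
# the outcome counts as stable only when the chosen action matches it.
SITUATIONS = ["rain", "sun", "wind"]
BEST_ACTION = {"rain": "umbrella", "sun": "hat", "wind": "coat"}

def deterministic_regulator(situation):
    # Always does the same thing in a given situation.
    return BEST_ACTION[situation]

def erratic_regulator(situation):
    # Varies its behavior even when the situation is the same.
    return random.choice(["umbrella", "hat", "coat"])

def outcomes(regulator, trials=1000):
    results = Counter()
    for _ in range(trials):
        s = random.choice(SITUATIONS)
        a = regulator(s)
        results["stable" if a == BEST_ACTION[s] else "unstable"] += 1
    return results

print(outcomes(deterministic_regulator))  # every trial stable
print(outcomes(erratic_regulator))        # a mix of stable and unstable
```

The deterministic regulator keeps every trial stable, while the erratic one causes instability on a large fraction of trials, which is just the paraphrase above acted out in miniature.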

(You will also see why that paraphrase is equivalent to the earlier one given in terms of models).  In other words, the Good-Regulator Theorem is a statement about a very special type of very common decision: the decision to always do the same thing in a given situation.  The importance of this type of decision becomes obvious when we recognize that we humans have a phenomenal capacity to not do this sort of thing, i.e. to vary our behavior in just about any given situation.  Philosophers and theologians call this Free Will and it is one of the great themes of Religion, Philosophy, Art and Literature.  Of course, it is precisely because of our Free Will that we humans need to understand the Good-Regulator Theorem, at least to a certain extent.  We need to know, at least to some extent, that our freedom comes at a cost and, of course, we all do know this and vividly demonstrate that knowledge in just about every moment of our lives with an intricate and highly structured complex of patterned behaviors that range from simple habits (e.g. how much toothpaste we squirt onto the brush before brushing our teeth) to complex skills (e.g. how a professional baseball pitcher winds up before a pitch).  These types of patterned behaviors, many of which require for their performance certain types of artefacts (toothbrushes, baseballs, etc.), are the very building blocks of Culture and they are all essentially just decisions to “always do the same thing in a given situation”.  In fact, Culture could even be defined as the sum total of all of the ways that human beings have decided to “always do the same thing in a given situation.”  The analogy is clear: the Good-Regulator Theorem says about Culture what the Crick and Watson model of DNA says about Life: first, that there is a basic building-block and, more importantly, what that basic building-block actually is (the decision to always do the same thing in a given situation).

This conceptual power-tool can transform our vision of the profusion of cultural habits, routines, rituals, skills and artefacts that surrounds us and which we often take for granted.  Just as the telescope transformed humanity’s vision of the Physical Universe, the Good-Regulator Theorem can transform humanity’s vision of this Cultural Universe.  And just as the shift in World-View inspired by the Telescope helped initiate the last two-hundred years of scientific achievement, the Good-Regulator Theorem also holds the promise of a similar wave of cultural progress.

This is a bold and colorful claim that demands evidence to support it, and in fact such evidence is just about everywhere!  This evidence began slowly to accumulate with the arrival of the very first humans, but its growth became explosive when the first writing systems were developed about 5000 years ago in the regions we now call Iraq and Egypt. That landmark event represents a vertical jump in Humanity’s “understanding” of the Good-Regulator Theorem, or really its gist regarding the crucial role that representation plays in the regulation of complex, dynamic systems.  Since then this higher level of “understanding” has become widespread common knowledge, although it has tended to remain in a largely vague and implicit form. Furthermore, in our modern world, this “understanding” can be seen to exist along a continuum.  At one end are people who have the most rudimentary grasp of what the Good-Regulator Theorem tells us about the world, and at the other end are people with a much deeper, though still intuitive, understanding: people who know how to apply the gist of the Good-Regulator Theorem in a wide variety of contexts, and whose lives are profoundly enriched by this basic principle, although, again, they might not realize that they know it so well or that it’s actually a scientific law.

And how do we recognize these people?  Well, as a rule-of-thumb (with exceptions, of course) the people who already deeply understand (albeit intuitively) the Good-Regulator Theorem are the ones who truly succeed in life.  These are the people who have learned to appreciate the value of a good model, and they apply their understanding on a daily basis in a variety of tasks.  These people are focused, organized and efficient. They know their priorities, set goals, make plans and carry them out, regardless of the domain.  They are in control of their world.  In effect, they are the regulators of the systems in which they participate.  They can respond effectively to setbacks and find creative ways to overcome obstacles. These people are (usually) the winners in the game of Life.

And the others?  You guessed it.  These are Life’s muddlers (with exceptions, again).  These people are disorganized and have few important goals or ambitions. They are confused about their priorities, are never quite sure about where they are going and they are often more than a little depressed about it.  Of course, even these people must have some ability to regulate their lives, but they often do so poorly.  These people are (usually) Life’s great under-achievers.

Exceptions exist at both ends, basically because the Good-Regulator Theorem (or rather, the principle it encapsulates) is just one essential ingredient for successful achievement and there are others, including (but not limited to) Hard Work and good old-fashioned Luck.  Sometimes the muddlers get lucky or make up for their muddling by doing so much of it that they compensate for their organizational shortcomings.  At the other end of the spectrum are those who, despite having a solid grasp on at least the gist of the Good-Regulator Theorem, still fail to achieve their goals because they just can’t catch a break.  The Good-Regulator Theorem is no magic potion or cure-all, but it does encapsulate an important fundamental fact about the world that we simply must come to understand, much like the fact of Gravity.  Anyone who fails to understand the basic fact of Gravity is doomed to a lot of crawling around in the dust.  By contrast, an enriched, high-level understanding of Gravity is essential to the design and construction of jet planes.  The situation is similar with the Good-Regulator Theorem.

Getting back to my claim about the Good-Regulator Theorem’s promise of cultural progress, the evidence for it can be seen in the results that are achieved by the people who truly do understand it, or at least its central gist.  These people can be found in all walks of life, making important cultural contributions.  These are the great bosses, the effective parents, the innovative engineers, political leaders, and school teachers to name a few examples. These are the people who are active in their communities, who are passionate about education and who are literate about or who participate in the sciences and arts.  These are the song writers and sculptors; these are the forward-thinking legislators, the resourceful business leaders, and the cutting-edge medical researchers.  In short, these are the people who are actively engaged in cultural progress and one thing they all have in common is that they all understand the key role that models and representations play in the pursuit of their highly ambitious and complex life goals.  They might not recognize that this is what they are doing or even think there is anything particularly special about it.  It’s just something they do naturally, like when a bird flies.   But the truth is that they all invest a great deal of energy into these models and representations: they study them passionately, practice using them and figure out ways to improve them.  And they do this because deep down inside they know that their own success depends on it.

Are these people successful simply because they have understood the Good-Regulator Theorem (or at least its gist)?  Of course not.  Although we need hammers to build houses, simply having hammers is not enough.  On the other hand, we do need hammers, and the Good-Regulator Theorem is a bit like that.  It’s certainly not the only tool we need to be successful, but we do need it, we have to acquire it somehow, and this tutorial program is an excellent way to do just that. 

 

This tutorial program has been designed to help you achieve the kind of enriched, high-level understanding of the Good-Regulator Theorem that you will certainly need to pursue your most ambitious and complex life goals and to make your own unique contribution to the development of the human Cultural Universe. It is a dynamic interactive experience that uses colorful animated graphics to illustrate the various building-block ideas and to illuminate your imagination and, most importantly, your motivation.  At the core of these animations are three curiosity-provoking, amoeba-like creatures called amibos (the “three Amibos”).  Their names are Simbo, Rumbo and Zoombo.  An amibo is just about as abstract as an organism can get.  It has no fixed or final appearance, structure or behavioral repertoire.  Each time you open this program Simbo, Rumbo and Zoombo will all have different and randomly generated basic shapes and colors, and the set of behaviors that each can exhibit will also change.  About the only thing that will remain constant, apart from their names, is the position and background color of each amibo’s performance frame.  Here is just one example of what Zoombo has looked like at a given moment in time in a past performance:

 

Although these amibos appear and behave much like true organisms they are actually quite abstract and we will use them to illustrate a very general picture that will help us to think deeply and clearly about the core ideas of System, Model and Regulation.

 

This concludes the Introduction to the Three Amibos Good-Regulator Tutorial. Before you continue on, take a moment to familiarize yourself with Simbo, Rumbo and Zoombo.  The first thing you’ll want to do on the Animation Panel (use link below) is to press the button labeled “Go” at the bottom right of the panel.  This will set the amibos into motion.  The next part of the tutorial will explain in greater detail what Simbo, Rumbo and Zoombo are actually doing, but you might be able to figure it out on your own just by watching.  Another thing you might try is clicking on any cell of the “Game Matrix” that appears in the lower left of the Animation Panel.  Doing so will stop the animation and open a separate “demo” panel that will allow you to explore each amibo in greater detail.  Again, this function will be explained in the next part of the tutorial, but you will probably learn a lot just by exploring.  Don’t worry about breaking anything.  Feel free to play and experiment with any of the buttons or controls.  Don’t feel like you have to master or completely understand anything during this phase of your learning.  Your goal is just to explore and watch.  Everything will eventually be explained to your satisfaction.  

Of course, most importantly, have fun!

 

Click here to open the Animation Panel

(Note: for best results, click first with your right mouse button and select the “Open Link In New Window” option.  Then use the Alt-Tab key combination to toggle back and forth between the Animation Panel and this tutorial text).

Back to top

Part 1: Systems, Regulators and Models

 

The Good Regulator Theorem can be paraphrased in various ways:

·        “Every good regulator of a system must be a model of that system.”

·        “All of the best system regulators are representations of the systems they regulate.”

·        “A good regulator is a model of what it regulates.”

·        “Every good key must be a model of the lock it opens.”

·        “Every good solution must be a model of the problem it solves.”

These sorts of aphorisms are certainly useful and go a long way toward explaining the Good-Regulator Theorem, but we won’t stop there.  The purpose of this tutorial is to take you beyond these kinds of bumper-sticker slogans to the realm of true insight and understanding. So, let’s go back to the first one:

“Every good regulator of a system must be a model of that system.” 

This was actually the title of the 1970 paper by Roger C. Conant and W. Ross Ashby in which these two systems scientists first articulated this idea and proved it as a mathematical theorem.  But what does this statement mean exactly?  In order to answer this question we first have to clarify what we mean by the three terms System, Regulator and Model.  You probably already have your own understanding of these terms, but because they are quite abstract and because each can be defined and used in various ways, we are going to invest some effort into making sure that we are all on the same page regarding what these terms actually mean.  This is no trivial task, but to make it easier we are going to make ample use of our animated amibo friends Simbo, Rumbo and Zoombo.

Let’s begin with the term System.

 

 

Back to top

Systems

The term System is so general that it’s basically a synonym for the word thing.  Pretty much any thing can be considered as a system; and, of course, every system is a thing.  Systems can be extremely large (the entire Universe is a system) and they can also be extremely small (a single atom is a system).  Commonly discussed systems include (but aren’t limited to) the following:

·        Economic systems,

·        Political systems,

·        The Legal system,

·        Ecosystems,

·        Public transportation systems,

·        The cardio-vascular system,

·        The digestive system,

·        The nervous system,

·        A telecommunications system,

·        A “winning” system (for winning a game),

·        A belief system;

Sometimes we don’t even use the word system to refer to something that is, in fact, a system:

·        A family,

·        company,

·        city,

·        lake,

·        dog,

·        jigsaw puzzle,

·        automobile,

·        the internet,

·        a sculpture,

·        dance performance,

·        song,

·        book,

·        painting,

·        gourmet meal;

 

It may surprise you to think of some of the items on that list as systems.  Is a song really a system?  A gourmet meal?  A dog? To answer these questions, let’s consider in more detail what we really mean by the term System.  Wikipedia defines a System as “a set of interacting or interdependent entities forming a single whole.”[5]  Well, a song does consist of individual notes, but when we listen to a song, we don’t really hear the individual notes as such.  What we actually hear is the whole song.  We might say that the notes work together to form the whole song.  So, yes, it would appear that a song is a type of system.  A similar argument applies to the individual ingredients (the meat, vegetables, spices and wine) that work together to compose a gourmet meal.  And finally, a dog is much more than just a collection of dog-parts.  The legs, nose, tail, etc. all work elegantly together to form the actual dog.

Another aspect of systems is that they can exhibit various states or behaviors, although we need to be somewhat careful with what we mean by these terms.  The term state is fairly simple: it just refers to a particular arrangement of the various components of a system, i.e. how their individual states or behaviors stand in relation to one another.  For example, a jigsaw puzzle can be in the state “assembled” or “not assembled”, depending on how its pieces are arranged, say, on a table.   And an automobile is either “running” or “not-running” depending upon how its various parts are either “running” or “not-running” with respect to one another.

The term behavior can be a bit more complicated, depending on the system.  As it applies, say, to a system of cat-parts that compose an actual cat, it is fairly clear what we mean by cat-behavior: walking, eating, sleeping, etc. But what could we mean by the “behavior” of a jigsaw puzzle?  Here we seem to be stretching the word behavior into the realm of the figurative, but we can resolve this problem by defining the word behavior simply to mean “a change in the state of the system.”  If we do it this way then as long as we know what we mean by a system’s state, then we can be quite clear about its “behavior” as a given change in that state.  So, when a jigsaw puzzle changes from the state “assembled” to “not-assembled”, we can refer to this change as the jigsaw puzzle’s behavior.
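This definition of behavior as a change-of-state is simple enough to express directly.  Here is a minimal Python sketch (the class and state names are purely illustrative) in which a behavior is recorded as a pair of states: the state before the change and the state after it:

```python
# A minimal sketch of "behavior = a change in the state of the system",
# using the jigsaw-puzzle example from the text (names are illustrative).
class JigsawPuzzle:
    def __init__(self):
        self.state = "not-assembled"

    def change_state(self, new_state):
        # A behavior is just the transition from one state to another.
        behavior = (self.state, new_state)
        self.state = new_state
        return behavior

puzzle = JigsawPuzzle()
print(puzzle.change_state("assembled"))      # ('not-assembled', 'assembled')
print(puzzle.change_state("not-assembled"))  # ('assembled', 'not-assembled')
```

Once a system's states are pinned down like this, its "behaviors" are exactly the transitions between them, whether the system is a cat, a puzzle, or an amibo.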

It might occur to you at this point that there appears to be a huge qualitative difference between a cat’s behavior and the behavior of a jigsaw puzzle.  When a jigsaw puzzle changes from its “not-assembled” state to its “assembled” state, what most likely happened was that some human being did all the work of arranging the pieces and putting them together. The point here is that it was really the human being that did all of the actual behaving.  But when a cat changes from its sleeping state to its eating state, nobody came from the outside to change the cat from one state to the other. The cat did all of its behaving.

It may surprise you to realize that these two situations have a lot more in common than they may appear to.  What links them together is the idea of Energy.  Both the jigsaw puzzle and the cat required energy in order to make the state changes we are calling their respective behaviors.  The difference between the two is that the cat is able to harvest its own energy and to invest that energy into its various behaviors (state-changes), whereas the puzzle can only wait around until some human being does all the energy harvesting and investing.  The ability of a system to harvest energy and use it to change its own states is a defining characteristic of those very special systems we call “living” or “biological”.  But biological systems are just one class of systems, and as long as we define behavior simply as a change-of-state then we can speak of even non-biological systems as behaving.  In any case, throughout this tutorial, this is what we will mean when we refer to a system’s “behavior”: a change in the state of the system.

 

The idea of a System is quite general and abstract, and whenever we discuss systems in general it can make the job easier if we have some concrete examples in mind. As you work your way through the tutorial you can refer back to the lists shown above if you wish, but for the most part we are going to rely on our “Three Amibos” Simbo, Rumbo and Zoombo to provide these concrete examples.  An individual amibo can be thought of as a system.  It has various components (mostly colors and shapes), and these colors and shapes can change state, which is to say that an amibo can behave.

Now, you might see another potential problem in all of this having to do with boundaries.  Consider a system of water molecules that are currently arranged in a typical ice-cube. We could recognize at least two states of this system: solid and liquid.  But wait a minute, because we can also recognize many more than just two states as well.  For example, the cube might be 90 percent solid and 10 percent liquid; or 80 percent solid and 20 percent liquid; but also we can recognize such in-between states as 85 percent solid, or 87.00015889 percent solid, etc.  If we wanted to get creative about it we could also number the individual molecules so as to distinguish between them.  Then we could recognize a state wherein molecule numbers 1 through 10,000,000 are solid, and the others are all liquid; and another state wherein all of the even-numbered molecules are solid and the odd-numbered molecules are liquid.  The drift here is that for a given block of ice, we have a lot of leeway regarding where we draw the boundary between one state and another.
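A quick calculation makes the point concrete.  The following toy Python sketch (the function names are invented for illustration) counts how many distinguishable states the same block of n molecules has under three different boundary choices:

```python
# How the number of recognizable states depends on where we draw the
# boundaries, for a block of n water molecules (illustrative sketch).
def coarse_states():
    # Boundary choice 1: only "solid" vs "liquid".
    return 2

def percent_states(n):
    # Boundary choice 2: distinguish by how many molecules are solid
    # (0 solid, 1 solid, ..., n solid).
    return n + 1

def labeled_states(n):
    # Boundary choice 3: number the molecules; each can independently
    # be solid or liquid.
    return 2 ** n

n = 10
print(coarse_states(), percent_states(n), labeled_states(n))  # 2 11 1024
```

The same physical block yields 2, n + 1, or 2 to the power n states depending on nothing but our choice of boundaries, which is exactly the leeway described above.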

This isn’t really a problem.  In fact, it’s actually a resource because it gives us enormous flexibility and freedom to choose where we place the boundaries, or in any case, where we imagine the boundaries to be.  On the other hand, when discussing systems and their behavior, so as not to confuse ourselves and the people we’re talking to, we do need to make a firm choice about where we want to place these boundaries, although we are always free to change our minds about it later on if we realize that we might make a better choice.

Getting back to our amibo friends, whenever we refer to a particular behavior, what we are really referring to is a well-defined and repeatable sequence of individual state-changes.  To see vividly what this means, take a moment now to go to the animation screen and click once on any cell in the Game Matrix that is in the lower portion of that panel.  Doing so will cause a window to open in which will appear one of the amibos.  This window will contain, among other things, a “performance frame” in which the amibo can demonstrate its various behaviors. Here is an example of what it will look like (the image you will actually see will be different because each time you start this tutorial, the amibos are randomly re-generated and assigned completely new behavioral repertoires, colors, and resting states):

 

 

Notice that at the top of the frame is shown the size of the amibo’s behavioral repertoire (in the case shown above we see that Zoombo can perform 100 different behaviors).  At the top of the performance frame is shown which of the amibo’s behaviors is being demonstrated.  In the example shown, the behavior is labeled Z(47).  (The “Z” is there to emphasize that it’s Zoombo’s behavior number 47; on Rumbo’s performance panel it would have been labeled R(47) and it would have been S(47) on Simbo’s.)  You can change which behavior will be demonstrated by using the behavior number selector that lies just below the performance frame (labeled “Beh #” for Behavior Number).  To change the behavior illustrated you can either type in the number of the behavior you wish to examine or use the little up and down arrows to the right of the number display to step sequentially through the numbers. Once you have selected a behavior to examine you can press the “Execute” button and the amibo will execute the entire behavior from beginning to end.  You can repeatedly press the “Execute” button to study the behavior in more detail.  If you want to examine the behavior in even more detail you can press the “Step” button, which will walk you through each of the individual state changes that compose the given behavior.  Before you continue reading, stop now and take a few moments to play around with these various functions. 

One point to notice as you are examining the amibo’s behaviors is that all of the behaviors follow a similar pattern.  Each behavior begins from the amibo’s resting state, progresses and then returns to that resting state.  This pattern is somewhat arbitrary and was mostly adopted to give the amibos a recognizable structure and identity (otherwise they just look like a bunch of lines and colors moving all over the place).  Real systems rarely show this kind of consistency in their behaviors.  For example, when you perform the following sequence of behaviors (wake up, brush your teeth, get dressed) you certainly don’t stop and return to some fixed resting state between each behavior.  Although this is a major difference between the amibos and just about every other imaginable system, it is not really a difference that we have to worry about.  The important thing is that amibos can behave, just as all systems can behave, and that they do so by moving through sequences of states.  What is also important is that the boundaries between one behavior and another have been clearly established.  As discussed earlier, these boundaries have been arbitrarily selected (in fact, each time you open the program they are different), but they have been established, so that we can sensibly discuss them as discrete behaviors.

Another point to notice is that each amibo has a finite repertoire of behaviors.  In the example shown above, Zoombo can perform 100 distinct behaviors.  This idea that a system can have a finite repertoire of behaviors raises the same boundary problem that we examined with any given behavior.  How many behaviors can a cat perform?  Again, it depends on where we choose to place (or imagine) the line between behaviors.  A cat can sensibly be said to have just one behavior, dying, which is the behavior it exhibits as it changes from the state of being alive to that of being dead.  On the other hand, if we divide the “being alive” state into the two states “alive and eating” and “alive but not-eating” then the cat can just as sensibly be said to have the following four behaviors:

1.     Changing from the “alive and eating” to the “alive but not-eating” state,

2.     Changing from the “alive but not-eating” to the “alive and eating” state,

3.     Changing from the “alive and eating” to the “dead” state,

4.     Changing from the “alive but not-eating” to the “dead” state.

 

(Clearly the cat can’t come back from the dead, whether to eat or not, so the reverse transitions must be excluded from the cat’s behavioral repertoire.)  The point here is that just as we are free to group a system’s states together to form discrete, recognizable behaviors, we are also free to group every possible state in this way to form a finite repertoire of such recognizable behaviors.  And we can use this approach with any system whatsoever, whether it is a cat, a kangaroo, an automobile or a jigsaw puzzle.  If it is a system with at least two states, then we can group those states into a finite number of discrete behaviors.  In our amibo example shown above, the system has 100 behaviors.
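The cat example above can be sketched in a few lines of code. This is only an illustration (the state and behavior names are taken from the four-behavior list above, and the tutorial's amibos work the same way on a larger scale): a behavior is just a named, allowed state-change.

```python
# Model the cat example: group state-changes into a finite repertoire of
# named behaviors. Names follow the four-behavior list in the text.

STATES = {"alive and eating", "alive but not-eating", "dead"}

# Map each allowed (from_state, to_state) transition to a behavior label.
BEHAVIORS = {
    ("alive and eating", "alive but not-eating"): "behavior 1",
    ("alive but not-eating", "alive and eating"): "behavior 2",
    ("alive and eating", "dead"): "behavior 3",
    ("alive but not-eating", "dead"): "behavior 4",
    # No transitions out of "dead": the cat can't come back.
}

def classify(from_state, to_state):
    """Return the behavior label for a state-change, if it is in the repertoire."""
    return BEHAVIORS.get((from_state, to_state), "not in repertoire")

print(classify("alive and eating", "dead"))    # behavior 3
print(classify("dead", "alive and eating"))    # not in repertoire
```

Note that the repertoire is finite by construction: however many states we recognize, grouping their transitions this way always yields a finite list of discrete behaviors.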

You might also notice that these behaviors are numbered in sequence starting from 0.  Why are they numbered from zero and not one?  This is really just an artifact of the programming.  When a computer program processes an array or a matrix it uses something called an index, and indexes are usually numbered starting at zero.  I could have numbered the behaviors starting from 1, but then I would have had to write the program to keep translating between each behavior’s index and its display value.  I was too excited about writing the program to worry about that when I started, and as the program evolved I never got around to changing it.  In any case, it really doesn’t make much of a difference whether we number the behaviors from 0 or 1.  Maybe in a future release I’ll change it.  I only mention it now in case you wondered about it.

Now, a major goal of the system sciences is to study systems at a very high conceptual level and to leave the nitty-gritty details to the particular sciences.  This means that we need very high-level tools to study them with and a very high-level vocabulary to describe what we’re talking about.  The goal is to identify and examine those attributes that are held in common by all systems.  All systems can be said to have parts that work together, and they all can be said to have states and to exhibit behaviors (changes of state).  Another important and measurable aspect of all systems is the likelihood or probability that a system will change from one given state to another.  We will now examine this aspect as it applies to the amibos.

First of all, you may have noticed that the amibo demo window contains a tab labeled “Details”.  If you click on it the window will change to look something like the following:

 

 

(Note: the actual numbers you see displayed when you do this will be different because they are randomly re-generated each time the program is started.)  Also, each amibo has a table that is somewhat different from the other two (you should take a moment to go and check this out before continuing).  These differences will be explained shortly, but for now we will focus on what they all have in common, which is that each table shows various probabilities that describe how likely it is that a given behavior will be executed.  In the example shown above we see that Zoombo will execute its behavior Z(2) with probability 0.0158.  What this means is that if you were to stand back and observe Zoombo’s behavior over a very long period of time, and while doing so you were to keep two separate tallies (one of each time that Zoombo executed any behavior and another of each time that Zoombo executed behavior Z(2)), what you would eventually discover is that for every 10,000 executed behaviors, Zoombo executes Z(2) approximately 158 times.  Similarly, you would observe Zoombo execute behavior Z(5) approximately 77 times out of every 10,000 executed behaviors, and so forth.

Note that we are talking about probabilities here, which means that before you started counting you would not necessarily be able to predict which of those 10,000 behaviors would finally turn out to be the 158 or so executions of Z(2).  Nor could you even be sure that Z(2) would be executed exactly 158 times.  All you could say is that the final number will be close to 158 (maybe more, maybe less, maybe even exactly 158), but in some sense it will be close to it.  Notice that this probabilistic description of a system’s behavior does not rule out the possibility that for some particular system you might actually be able to predict with perfect certainty how many and which executions will be Z(2); but remember that we are looking for properties that all systems share, and although some systems are completely predictable, many aren’t, and both types can be described in terms of probabilities.
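The tallying experiment described above is easy to simulate. In this sketch a hypothetical Zoombo executes Z(2) with the probability 0.0158 from the example (all other behaviors share the remaining probability); over many executed behaviors the observed frequency settles near the table value, without the individual executions being predictable.

```python
import random

# Simulate tallying Zoombo's behavior Z(2), which the Details table
# assigns probability 0.0158. The seed makes this particular run repeatable.
random.seed(0)

P_Z2 = 0.0158
N = 100_000  # total executed behaviors observed

# One tally of Z(2) executions out of N executed behaviors.
z2_tally = sum(1 for _ in range(N) if random.random() < P_Z2)

print(f"Z(2) executed {z2_tally} times in {N} behaviors")
print(f"observed frequency: {z2_tally / N:.4f}  (table says {P_Z2})")
```

The count will not be exactly 0.0158 of the total, but it will be close, and it gets closer (in proportion) the longer we watch.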

This is a very high-level way of discussing the behavior of a system.  It can be applied to all systems and is completely independent of any nitty-gritty details associated with a given, specific system.  Don’t worry for now about the other information that is written in the text area above the probability table or in the Notes column of that table.  We will examine these parts of the Details panel a little later on.  The important thing to understand at this point is that an amibo has a repertoire of behaviors and that it is executing these behaviors according to the table of probabilities shown in the Details panel of the behavior demo window.

 

The last point we need to cover regarding the term System has to do with the idea that two or more systems can interact.  That is, they can communicate, exchange resources, rely on each other (or attack each other, for that matter) and so forth.  Whenever two or more systems interact in some way they form yet another, larger system.  We can use our amibos to illustrate the interaction between systems.  Remember that we actually have three of them: the “Three Amibos”.  Let’s take another look at the Animation Panel:

 

 

You surely noticed that at the top of the panel we have what appears to be some sort of an equation: “Simbo + Rumbo = Zoombo.”  What does this mean?  Well, first of all, it is not any sort of numerical equation.  The plus (“+”) and equals (“=”) signs are not being used here in the same way we use them to symbolize, for example, the idea that 3 + 5 = 8.  Here these symbols express the idea that Rumbo’s behavior is interacting with Simbo’s behavior to produce Zoombo’s behavior.  The consequence of this interaction is precisely specified by the “Game Matrix” shown in the lower left portion of the Animation Panel.[6]  Notice that the Game Matrix is made up of three different types of cells, each with a color that corresponds to the background color of the associated amibo’s performance frame (blue for Simbo, yellow for Rumbo and green for Zoombo).  In the center of each cell is displayed a number that corresponds to the number of one of the behaviors of the associated amibo.  For example, the yellow cell numbered “5” corresponds to Rumbo’s behavior R(5); the blue cell numbered “2” corresponds to Simbo’s behavior S(2); and the green cell numbered “37” corresponds to Zoombo’s behavior Z(37).

Actually, you will notice that many of Zoombo’s green cells are replicated at various places throughout the matrix.  Let’s examine this more closely in the close-up of the matrix shown below:

 

 

Ignore the magenta illumination for the moment, and notice, for example, that there are three green “94” cells: one in the column beneath the blue Simbo “0” cell and two beneath the blue Simbo “3” cell.  All three of these green “94” cells correspond to Zoombo’s behavior Z(94).  Likewise, the two green “44” cells located in the columns beneath Simbo’s blue “6” and “15” cells, respectively, correspond to Zoombo’s behavior Z(44).  (You should take a moment to find some other examples on your own.)  On the other hand, each of Simbo’s blue cells and Rumbo’s yellow cells appears exactly once.  The reason for this is that the whole purpose of the Game Matrix is to precisely and completely define what it means to “combine” one of Simbo’s behaviors with one of Rumbo’s behaviors.  The Game Matrix accomplishes this by displaying the result of that combination in the cell that lies at the intersection of the column beneath the given Simbo behavior and the row to the right of the given Rumbo behavior.  So, for example, if you want to know what happens when we combine Rumbo’s R(9) behavior with Simbo’s S(11) behavior, we begin by placing our finger on Rumbo’s R(9) behavior and then sliding our finger to the right, following the row of cells until our finger is on the cell that is directly below Simbo’s S(11) behavior.  When we do this our finger is right on top of the result of that combination.  In this case R(9) + S(11) = Z(71).  This is also the case that is highlighted by the magenta-illuminated cells in the above image, but the process works for every possible combination of a Simbo behavior with a Rumbo behavior.  The magenta illumination moves to whichever combination is currently being illustrated by the amibo animation.

The reason that Rumbo’s and Simbo’s cells appear just once each is that we want to assign exactly one unambiguous meaning to each possible combination.  If either Rumbo’s or Simbo’s cells had repetitions then we would have ambiguity, which is not what we want.  On the other hand, such ambiguity with respect to Zoombo’s behaviors is not only acceptable but serves a specific function.  The replication of at least some of Zoombo’s behaviors is intended to illustrate an important property of the way systems can interact, which is that there is often more than one way to produce the same output from different inputs.  As the saying goes: “there is more than one way to skin a cat.”  For example, you probably know at least a couple of routes that you can take to work, school, the supermarket, etc.; you can push a doorbell with your right hand or your left; from basic arithmetic we know that 12 = 2x6 and also that 12 = 3x4; and so on.  Although this is not a necessary property of all systems, it is very common and makes for a much more interesting analysis.  If you like, once you finish this tutorial you will have what you need to work through the case in which Zoombo’s behaviors are not repeated throughout the Game Matrix.  You might also like to consider the case in which either Simbo’s or Rumbo’s behaviors are repeated.
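The Game Matrix lookup rule can be sketched as a simple two-dimensional table. The toy 3x3 matrix below is invented (the real matrix is randomly generated by the program and much larger); the point is only the rule: the cell at Rumbo's row and Simbo's column names the resulting Zoombo behavior, each Rumbo/Simbo behavior indexes exactly one row/column, and a Zoombo behavior may appear in several cells.

```python
# A toy Game Matrix. Rows are indexed by Rumbo's behaviors, columns by
# Simbo's behaviors; each cell holds the number of the resulting Zoombo
# behavior. All the specific numbers are made up for illustration.

game_matrix = [
    # S(0)  S(1)  S(2)
    [94,    13,   44],   # R(0)
    [71,    94,    2],   # R(1)
    [44,     5,   94],   # R(2)
]

def combine(r, s):
    """R(r) + S(s) = Z(?) -- look up the result in the Game Matrix."""
    return game_matrix[r][s]

print(f"R(1) + S(0) = Z({combine(1, 0)})")

# Like Z(94) in the text, a Zoombo behavior can appear in several cells:
appearances = sum(row.count(94) for row in game_matrix)
print(f"Z(94) appears in {appearances} cells")
```

Because each Rumbo behavior owns one row and each Simbo behavior owns one column, every combination has exactly one unambiguous result, while repeated Zoombo values show that different inputs can produce the same output.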

At this point it would be a good idea for you to take a break from reading, go to the Animation Panel and work through the following observational exercises:

Exercise 1.  Hit the “Go” button in the bottom right corner of the Animation Panel and spend a few minutes just watching the animation.  As you do so, notice the following points:

·        The amibos behave according to the Game Matrix.

·        The Game Matrix illuminates the current behaviors of the amibos.  That is, when Simbo executes its S(n) behavior, the cell of the Game Matrix that corresponds to Simbo’s behavior S(n) turns bright magenta.  Similarly for Rumbo and Zoombo.

·        Zoombo’s behavior is always that shown at the intersection of the row and column containing Rumbo’s and Simbo’s currently executing behavior.

·        The transcript on the lower right records the history of the whole system.

Exercise 2. 

·        Focus on Simbo only.

·        Notice the coordination between Simbo’s behavior in all three places: in the top part of the panel, in the game matrix and in the transcript.  For example, notice that when Simbo is executing S(5) in the top part of the Animation Panel, then in the Game Matrix and in the transcript you will also see that S(5) is executing.

·        Repeat this exercise for Rumbo and Zoombo.  The goal here is to verify the coordination between all three parts of the Animation Panel.

Exercise 3.

·        Click on the “Restrict Simbo” button that is just below Simbo’s main performance frame.  Pick any one of Simbo’s behaviors and push the “OK” button.  This will restrict Simbo to repeating the behavior you have chosen.

·        Observe what happens with Rumbo and Zoombo.  The idea here is to see that when Simbo always does the same behavior, Zoombo’s behavior is always selected by Rumbo from the same column.  In particular, notice that when this is the case, Zoombo’s behavioral repertoire is effectively reduced because Zoombo no longer has access to behaviors that are not listed in the given column.

·        When you are finished, click on the “Free Simbo” button to allow Simbo access to its full behavioral repertoire.

Exercise 4.

·        Repeat Exercise 3, but this time use the “Restrict Rumbo” button.

 

Summary: throughout this tutorial we will use the term System to mean a group of at least two elements that interact in some way to form a whole.  Systems can have one or more states, and whenever a system changes from one state to another we will say that the system is behaving.  We will use the term system behavior to refer to certain agreed-upon sequences of changes in the state of a system.  We will also use an amibo to represent the idea of a System in general.  Like any system, an amibo has various states, and those states are grouped into discrete sequences, which are the amibo’s behaviors.  An amibo’s behavioral repertoire has a fixed size and consists of all of the behaviors that the amibo can execute.  Finally, two or more systems can interact.  To define such an interaction between amibos we will use a Game Matrix.  Such a matrix defines the result of combining a behavior of one amibo (Simbo) with a behavior of a different amibo (Rumbo).  This result is that a third amibo (Zoombo) executes some one of its behaviors, which appears in the cell of the matrix that lies at the intersection of the row and column picked out by the behaviors of the first two amibos.

 

Back to top

Regulators

Goals and Preferences

Now let’s examine what we mean by the term Regulator.  Key to the process of Regulation is the idea that some agent (person, thing, system, etc.) is striving to achieve a goal while some source of obstacles threatens to block the agent’s attainment of that goal.  The term Regulation refers to what the agent does to overcome those obstacles and thereby attain the goal.  Thus, to the extent that the agent engages in this sort of activity, that agent is said to be a Regulator.  Throughout this tutorial, Rumbo is going to be the agent who is striving to achieve a goal and Simbo will be the source of obstacles.  The goal that Rumbo is striving for is to get Zoombo to execute certain preferred behaviors.  Finally, the whole point of this tutorial is to show that when Rumbo is successfully regulating Simbo’s obstacles and thus getting Zoombo’s behavior under control, it must be the case that Rumbo is a model of Simbo.  Why?  Because “every good regulator (Rumbo) of a system (Simbo) must be a model of that system.”  In this scenario Rumbo is the good regulator, Simbo is the system, and Rumbo must be a model of Simbo if Rumbo is to get what Rumbo wants.  One thing I want to point out here is that the key word is must.  The theorem doesn’t say “can” or “could” or “might”.  It says “must”.  Rumbo must model Simbo if the goal is to be attained.  We will come back to this point later on.

To see how our amibos illustrate these ideas, let’s take a closer look at the Game Matrix:

 

 

As discussed in the previous section, this matrix completely defines the meaning of the plus (“+”) sign used in the upper part of the Animation Panel.  It defines what happens when any given Rumbo behavior is combined with a given Simbo behavior.  According to the Game Matrix the outcome will always be some one of Zoombo’s behaviors.  Now, we can imagine that Rumbo might have a vested interest in Zoombo’s behaviors.  Maybe Zoombo is Rumbo’s boss, automobile or local supermarket.  We want to stay abstract here, but we can suppose that Rumbo is concerned in some way with what Zoombo actually does.  For the sake of argument and to keep things simple, let’s arbitrarily assume that if Rumbo were always free to choose, Rumbo would always choose that Zoombo do Z(0).  Please realize that my choice of Z(0) as Rumbo’s favorite is quite arbitrary.  I could have picked any of Zoombo’s behaviors to make the point, but choosing Z(0) will make our discussion more streamlined for reasons that will become obvious in a moment.  So, Z(0) is Rumbo’s favorite.  Unfortunately for Rumbo, Simbo is also on the scene and Rumbo’s choices are limited by Simbo’s behaviors.  This is what we mean by Obstacle.  Whenever Simbo acts, it is as if Rumbo’s access to Zoombo’s full behavioral repertoire is blocked: only some of Zoombo’s behaviors are “available”.  For example, when Simbo does S(5), Rumbo is effectively blocked from making Zoombo do Z(0).  The reason is that Z(0) doesn’t appear anywhere in the column for S(5).  This means that whenever Simbo does S(5), Rumbo must make do with some other choice.

Now, in order to streamline our discussion, let’s assume (arbitrarily again) that whenever Rumbo is unable to cause Zoombo to perform behavior Z(0), Rumbo’s next-in-line favorite is Z(1).  To continue on, if Z(1) isn’t available, Rumbo would choose Z(2), and in general, if Z(n) isn’t available, Rumbo would choose Z(n + 1).  I want to emphasize that my choices for Rumbo’s preferences here are completely arbitrary.  I could just as well have picked Z(5), Z(99), Z(54) and Z(6) for the top four, Z(32) for the 76th favorite, and Z(0) for last place, or any other permutation of Zoombo’s behaviors.  The only thing that is important here is that Rumbo has some way to rank Zoombo’s behaviors.  I just picked Z(0), Z(1), …, Z(99) because it happens to be the simplest one to discuss (otherwise I’d have to insert a table showing the order for all 100 of Zoombo’s behaviors and then keep referring you back to the table).

Getting back to the scenario above, when Simbo does S(5), Rumbo wishes that Z(0) were available, but it isn’t, so Rumbo looks for Z(1), which also isn’t available.  Next in line are Z(2), then Z(3) and so on, until we get to Z(13), which, finally, is available, and so Rumbo goes for it: Rumbo executes its behavior R(6), which causes Zoombo to execute Z(13).  Now, let’s suppose Simbo does S(12).  Looking through the column beneath S(12) we see once again that Z(0) is not available.  And although Z(13) is available, so is Z(2), which has a higher preference rank than Z(13), and so Rumbo executes its behavior R(7) and causes Zoombo to execute Z(2).  This scenario effectively describes what we mean when we say that Rumbo is regulating Simbo’s behavior: Rumbo is responding to Simbo so as to make Zoombo execute behaviors according to Rumbo’s preference ranking.
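The selection rule just described can be sketched in a few lines, assuming the simple ranking Z(0) > Z(1) > … discussed above. The toy Game Matrix columns below are invented (they do not match the text's particular R(6)/R(7) example, where Rumbo has 16 behaviors); only the rule matters: given Simbo's behavior, Rumbo picks the row whose Zoombo result has the best preference rank.

```python
# Each entry maps a Simbo behavior to that behavior's column of the
# Game Matrix: column[r] is the Zoombo behavior produced by R(r) + S(s).
# The numbers are invented for illustration.
game_matrix_columns = {
    5:  [13, 27, 13, 99],   # under S(5), Z(0)..Z(12) are unavailable
    12: [13, 50, 2, 71],    # under S(12), Z(2) is on offer
}

def regulate(simbo_behavior):
    """Return (rumbo_behavior, zoombo_result) under the ranking Z(0) > Z(1) > ...

    With that ranking, the most preferred available Zoombo behavior is
    simply the one with the lowest number in the column.
    """
    column = game_matrix_columns[simbo_behavior]
    best_r = min(range(len(column)), key=lambda r: column[r])
    return best_r, column[best_r]

print(regulate(5))    # Z(13) is the best Zoombo behavior on offer under S(5)
print(regulate(12))   # Z(2) outranks Z(13) under S(12)
```

Notice that the regulator needs two things: the column that Simbo's behavior selects (knowledge of the Game Matrix) and the preference ranking; without the ranking, `min` has nothing to minimize and the choice is arbitrary.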

Here we should distinguish between the ideas of Regulation and Control. This might seem a little unusual. In everyday parlance these words are often synonyms for each other, but under our current circumstances it makes sense to distinguish between them.  Control implies causality. When we say that “A controls B” what we mean is that A causes B to do something.  With this in mind we can see that Rumbo is not really controlling Simbo’s behavior.  Only Simbo is controlling Simbo’s behavior.  Also, it seems clear that Simbo shares control with Rumbo over Zoombo’s behavior and that Zoombo pretty much has nothing to say about it.  Finally, Rumbo’s behavior is being controlled both by the choices offered by Simbo’s behavior and by Rumbo’s preferences for Zoombo’s behaviors.

On the other hand, Regulation involves the idea that some agent is attempting to attain a goal while a source of obstacles threatens to prevent the agent from attaining that goal; Regulation is the agent’s way of handling those obstacles to restore access to the goal.  Rumbo’s goal is to make Zoombo execute behaviors that are close to Z(0) (where “close” is defined by Rumbo’s preference ranking).  But Simbo’s behavior effectively blocks access to that goal by restricting the choices that are available to Rumbo.  We say that Rumbo regulates Simbo’s blockage by always picking out the particular Zoombo behavior that is closest to Z(0).  As defined in this context, Regulation and Control are related, but distinct, concepts.

Of course, if you go back to your Animation Panel and try to watch for this type of regulation you won’t see it because the example I’ve been discussing is purely hypothetical.  At this point in the tutorial the sort of preference ranking we’ve considered does not actually describe Rumbo’s behavior.  At the moment we are simply trying to clarify what we mean by the term Regulation and to show how the amibos could be used to illustrate it.

Summary: each of Simbo’s behaviors effectively blocks Rumbo’s access to some of Zoombo’s behaviors while simultaneously offering an abbreviated list of Zoombo’s behaviors from which Rumbo must choose.  Thus, in order for Rumbo to exert genuine control over Zoombo’s behaviors, Rumbo needs some way to rank the behaviors that do show up in the restricted list that Simbo’s behavior produces.  This means that Rumbo needs a way to rank Zoombo’s entire behavioral repertoire.  As I said before, we certainly do not require these to be ranked Z(0), Z(1),..., Z(99), but there needs to be some sort of preference ranking in order for us to say that Rumbo is controlling Zoombo’s behaviors.  Once Rumbo has this preference ranking, Rumbo can regulate Simbo’s behaviors in order to exert control over Zoombo.  Notice that without this preference ranking, the whole idea of Regulation just sort of disappears.  Without the preference ranking Rumbo has no way to choose from the options that Simbo offers.  Rumbo can still execute behaviors, but not in any goal-oriented way.  We would just have Simbo doing stuff and Rumbo doing stuff.  Zoombo would still be controlled by Simbo and Rumbo, but Rumbo would not be doing any actual regulating.  Key to the notion of regulation is the idea that somebody is pursuing a goal of some sort, which is to say that they are selecting outcomes according to some preference ranking.

Back to top

 

Using Probability To Describe Amibo (i.e. System) Behavior

A moment ago I mentioned that Rumbo’s behaviors are not, at this point, being controlled by a preference ranking of the sort we have been examining.  So, what is controlling Rumbo’s behavior?  Well, a few pages back we looked at the Details tab of Zoombo’s behavior demo window and saw that Zoombo was being driven by a table of probabilities.  I also explained that Simbo and Rumbo are being driven by probabilities and mentioned that their tables were a little different from Zoombo’s and from each other’s.  It is time to examine these tables a bit more carefully.  Toward that end, all three are lined up next to each other here so that we can compare them:

 

 


           

Simbo’s Details                    Rumbo’s Details                    Zoombo’s Details

 

 

 


One point I want to make here is that while both Simbo and Zoombo have tables with just one column of probabilities, Rumbo’s table has several columns.  In the image you can only see the first column of probabilities and the beginning of the second, so you should take a moment to flip over to your own Animation Panel, open up a demo window for Rumbo and examine the table in Rumbo’s Details panel.  You can move the slider at the bottom of the table to see that Rumbo has a different column of probabilities for each of Simbo’s behaviors.  The reason for these different columns is that the probability that Rumbo will execute, say, R(4) depends on which behavior Simbo executes.  Let’s consider the example illustrated by the following segment from a different (hypothetical) Rumbo Details table:

 

R(i)     Prob[R(i)|S(8)]     Prob[R(i)|S(9)]

R(0)     0.0107              0.0735

R(1)     0.0969              0.0127

R(2)     0.0918              0.0539

R(3)     0.0663              0.0176

:        :                   :

In case you are unfamiliar with the notation, the column headers are read as follows: Prob[R(i)|S(8)] is read “the probability that Rumbo executes behavior R(i) given that Simbo executes S(8)”, and similarly for the other columns.  The part that follows the word given is the specification of a condition that must be true in order for the probability that comes before the word given to be in effect.  That’s why mathematicians refer to these probabilities as conditional probabilities.

Let’s consider some specific examples.  Take a look at R(3).  The table segment shows us that whenever Simbo does S(8), the probability that Rumbo does R(3) is 0.0663, which means that for every 10,000 times that Simbo does S(8), Rumbo will do R(3) approximately 663 times.  Note that Simbo’s execution of S(8) is a necessary condition that must be true in order for this probability to hold.  If Simbo should execute some other behavior, then the probability could change. For example, whenever Simbo does S(9) the probability that Rumbo executes R(3) suddenly changes to 0.0176, which means that for every 10,000 times that Simbo does S(9), Rumbo will do R(3) approximately 176 times.  The function of these conditional probabilities is to represent the idea that Rumbo is responding to Simbo.  Simbo doesn’t have conditional probabilities because Simbo is just marching to its own drummer, but for our purposes of exploring the meaning of the Good-Regulator Theorem we need to represent the idea that Rumbo is responding to Simbo and these conditional probabilities are the right tool for the job.

(Again, remember that the particular numbers referred to here are not important and will change every time you open the Three-Amibos tutorial.  What is important to understand is that the likelihood that Rumbo will execute any given behavior always depends on what Simbo does first.)
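We can sketch what it means for Rumbo to be driven by such a conditional table. The columns below reuse the four probabilities from the hypothetical table above, padded with a catch-all "other" entry so each column sums to 1 (the real Rumbo has a full column for every Simbo behavior); sampling from the column selected by Simbo's behavior is exactly how "Rumbo responds to Simbo" is represented.

```python
import random

# Conditional probability columns: Prob[R(i) | S(s)], taken from the
# hypothetical table segment above and padded so each column sums to 1.
COND = {
    8: {"R(0)": 0.0107, "R(1)": 0.0969, "R(2)": 0.0918, "R(3)": 0.0663},
    9: {"R(0)": 0.0735, "R(1)": 0.0127, "R(2)": 0.0539, "R(3)": 0.0176},
}

def sample_rumbo(simbo_behavior, rng):
    """Draw one Rumbo behavior from the column conditioned on Simbo's behavior."""
    column = COND[simbo_behavior]
    rest = 1.0 - sum(column.values())          # probability of all remaining behaviors
    labels = list(column) + ["other"]
    weights = list(column.values()) + [rest]
    return rng.choices(labels, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the run is repeatable
tally = sum(1 for _ in range(10_000) if sample_rumbo(8, rng) == "R(3)")
print(f"When Simbo does S(8), R(3) happened {tally} times out of 10,000")
```

The tally comes out near 663, matching Prob[R(3)|S(8)] = 0.0663; conditioning on S(9) instead would push it toward 176.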

This idea that Rumbo is responding to Simbo is also represented in the timing of the animations.  Take a moment to go back and observe them again.  This time, notice that there is a slight delay between the times when each amibo’s behavior begins.  Simbo is the first to move, Rumbo is next and Zoombo is last.  This difference in timing is meant to illustrate Rumbo’s responsiveness to Simbo and Zoombo’s responsiveness to both Simbo and Rumbo.  Of course, this is not the only way we might arrange their interrelationships, but we do need it in order to illustrate and explain the Good-Regulator Theorem.

Now, while Rumbo’s table is the most complicated and Simbo’s the least, Zoombo’s is somewhere in between.  In order to understand Zoombo’s table we have to keep in mind two things.  First, Zoombo is not acting on its own.  Zoombo’s behavior is completely determined by Simbo and Rumbo.  Second, any given Zoombo behavior might show up in more than one place on the Game Matrix. Let’s take another look at the sample Game Matrix we examined earlier:

Earlier I pointed out that Zoombo’s behavior Z(94) appears in the columns beneath Simbo’s behaviors S(0) and S(3).  The Rumbo behaviors that correspond to these appearances are R(3) (in the S(0) column) and R(1) and R(5) (in the S(3) column).  In other words, we can write:

·        S(0) + R(3) = Z(94),

·        S(3) + R(1) = Z(94) and

·        S(3) + R(5) = Z(94).

This means that if we want to calculate the probability that Zoombo executes Z(94), we must somehow account for the fact that there are three distinct ways that Zoombo might do such a thing.  It is somewhat beyond the scope of this tutorial to explain the details of how this is done, but the upshot is that you have to add up the individual probabilities associated with each possibility.  That is:

prob[Z(94)] = prob[S(0) & R(3)] + prob[S(3) & R(1)] + prob[S(3) & R(5)]

This aspect of Zoombo’s probability distribution is explained in the text area of the Details panel and reinforced in the column labeled “Notes” of Zoombo’s table of probabilities.
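The sum above can be sketched numerically. One detail the tutorial leaves aside is how to get each joint probability; since Simbo acts first and Rumbo responds according to its conditional table, the standard factorization is prob[S & R] = prob[S] x prob[R | S]. All the probabilities below are invented for illustration.

```python
# Compute prob[Z(94)] by summing over the three Game Matrix cells that
# produce Z(94), as in the formula above. Numbers are made up.

prob_S = {0: 0.06, 3: 0.04}        # prob that Simbo does S(0), S(3)
prob_R_given_S = {                 # prob[R(r) | S(s)], keyed by (r, s)
    (3, 0): 0.05,                  # prob[R(3) | S(0)]
    (1, 3): 0.02,                  # prob[R(1) | S(3)]
    (5, 3): 0.07,                  # prob[R(5) | S(3)]
}

# The three (Simbo, Rumbo) combinations that yield Z(94):
cells = [(0, 3), (3, 1), (3, 5)]

# prob[Z(94)] = sum of prob[S(s)] * prob[R(r) | S(s)] over those cells.
prob_Z94 = sum(prob_S[s] * prob_R_given_S[(r, s)] for s, r in cells)
print(f"prob[Z(94)] = {prob_Z94:.4f}")  # 0.0030 + 0.0008 + 0.0028 = 0.0066
```

This is the calculation reflected in the Notes column of Zoombo's probability table: each repeated Zoombo behavior accumulates probability from every cell that produces it.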

 

 

 

 

 

Summary: Thus, we have three systems: Simbo, Rumbo and Zoombo.  These three systems interact so as to make a larger system.  A single behavior of this larger system consists of a single behavior from each of Simbo and Rumbo, which combine so as to cause Zoombo to execute one of its behaviors.  The rules that dictate how Simbo’s and Rumbo’s behaviors combine to produce a Zoombo behavior are completely specified by the Game Matrix.  Another point that we covered is that Rumbo is responding to Simbo.  Rumbo’s responsiveness is represented in two ways: first, by a table of conditional probabilities (shown in the Details panel of Rumbo’s behavior demo window), and second, by timing differences in the actual animations (you will not see these timing differences in the magenta illuminations of the Game Matrix nor in the progress of the transcript).  Zoombo is also responsive, but to both Simbo and Rumbo.  Zoombo’s responsiveness is represented both by timing differences in the animations and especially by the fact that the probabilities of its behaviors depend on the probabilities of both Simbo’s and Rumbo’s behaviors.

Back to top

 

Regulation and Surprise

We have one more important point to cover in our clarification of the idea of Regulation.  I explained earlier that in order for Rumbo to regulate a source of obstacles (Simbo) and thus exert control over Zoombo so that Zoombo will execute the behaviors that Rumbo most prefers, Rumbo has to have some way to rank Zoombo’s behaviors.  Based on what we have said so far, there are lots and lots of ways that this might be done.  For example, if Zoombo has 100 behaviors, then you may know from a prior math course that there are 100x99x98x…x4x3x2x1 ways to create such a preference ranking on Zoombo’s behavioral repertoire.  The mathematical shorthand for writing this number is 100! (read “one hundred factorial”).  The exclamation point just means you’re supposed to multiply together all of the integers from 100 down to 1.  In this case the result is so large that it makes my calculator go crazy trying to calculate it (it just displays “Error 2” on the screen).  But as it turns out, there is one special subset of all the ways that we might make a preference ranking that is of very special interest.  It has to do with the scientific definition of Surprise.[7]
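Unlike a pocket calculator, a language with unbounded integers can compute 100! exactly. A one-line check of the count of preference rankings mentioned above:

```python
import math

# 100! = the number of ways to rank 100 behaviors. Python's integers are
# arbitrary-precision, so this is exact where a calculator overflows.
n_rankings = math.factorial(100)
print(f"100! has {len(str(n_rankings))} digits")  # 158 digits
```

So there are roughly 9.3 x 10^157 possible preference rankings of Zoombo's repertoire, which is why the calculator gives up.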

You may have noticed that beneath each amibo’s performance frame is a label that reads “Expected Surprise: …. bits”, where the “…” contains a number.  This might seem like a contradiction because a surprise, by definition, is something that we don’t really expect.  But this contradiction dissolves when we recognize that some surprises can actually be more surprising than others.  For example, we would probably be much more surprised by the discovery that the multi-million dollar grand-prize lottery ticket that we bought actually won than we would be, say, by the discovery that the ceiling light in the kitchen was burned out.  This relative “surprise value” of surprises means that we can actually have expectations about the types of surprises that might, in fact, surprise us.

It might also surprise you to learn that scientists have actually found a way to quantify all of this and to measure a surprise.  How do they do this?  Well, let’s think about what we mean by a surprise.  First of all, if an event is very rare, like discovering that we own a winning multi-million dollar grand-prize lottery ticket, then whenever it happens we tend to feel a great deal of surprise.  In the extreme, if some event is impossible and yet it happens anyway, we might say that we would feel infinitely surprised.  At the other end, if an event is very common, then we feel very little surprise when it happens.  Again, at the extreme, when an event is certain to happen then we feel no surprise at all.  This discussion suggests a way to get scientific about surprise.  First of all, we can begin with the notion of probability. A probability is always a number between zero and one that describes the likelihood of an event’s occurrence.  When an event has a probability of zero it means that the event will never happen.  When it has a probability of 1, it means that the event will certainly happen.  An in-between number such as 0.015 means that the event will actually occur approximately 15 times out of every 1000 times that it might occur.  Now, how do we get from probability to surprise? Well, it turns out that there is a very simple and sensible way to do exactly that.  First, you take the probability of the event occurring.  For the sake of argument, let’s suppose it’s 0.015. Then, you walk across the room to your desk to get your calculator and you type in 0.015.  Next you press the key marked log (for logarithm).  When I do that I get -6.05889369.  Finally, you multiply the result by -1, which gives us 6.05889369.  Ta daa!  That is the amount of scientific surprise that is associated with any event that occurs with probability 0.015.  Here is what it looks like when it is written down as a formula:[8]

Surprise(p) = -1*log[p]
One additional note: I am assuming here that your calculator’s log key is using the logarithm function of base 2, in which case the above quantity of surprise is measured in bits (short for binary digits).  This is also the unit that is being displayed beneath each amibo’s performance frame.  If your calculator’s log key uses the natural logarithm, which usually appears as ln on a calculator, then the units would be nats and the number you would obtain would be 4.19970508.  However, as far as this tutorial is concerned, we will only concern ourselves with logarithms to the base 2.
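The calculator recipe just described can also be sketched in a few lines of Python (an illustrative sketch; the function names are my own, not something used by the Animation Panel):

```python
import math

def surprise_bits(p):
    """Surprise of an event with probability p, measured in bits (log base 2)."""
    if p <= 0.0:
        return math.inf  # an impossible event is "infinitely surprising"
    return -math.log2(p)

def surprise_nats(p):
    """The same quantity measured with the natural logarithm (nats)."""
    if p <= 0.0:
        return math.inf
    return -math.log(p)

print(surprise_bits(0.015))  # about 6.05889369 bits
print(surprise_nats(0.015))  # about 4.19970508 nats
print(surprise_bits(1.0))    # 0.0 -- a certain event carries no surprise
```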

Notice that this way of measuring the surprise of an uncertain event fits perfectly our intuitive analysis of what we want from a measure of surprise.  That is, when the event is impossible, then p = 0.0 and Surprise(0) is infinite, because the logarithm of zero is negative infinity and the multiplication by -1 turns it into positive infinity.[9]  On the other hand, when an event is certain, then p = 1 and Surprise(1) = 0, because the logarithm of 1 is zero.  This is not just an arbitrary choice that scientists have made about how to quantify surprise.  The logarithm function actually has a number of other properties that make it perfect for the job, but these are somewhat beyond the scope of this tutorial.[10]

This is how scientists measure the surprise associated with a given event that occurs with probability p.  Now, let’s think about Simbo.  Remember that Simbo’s behaviors are being executed according to a table of probabilities.  Here is an example of such a table:

 

S(i)     Prob[S(i)]
S(0)     0.0374
S(1)     0.0249
S(2)     0.0031
S(3)     0.0654
S(4)     0.0748
S(5)     0.0592
S(6)     0.0966
S(7)     0.0498
S(8)     0.0841
S(9)     0.0903
S(10)    0.0249
S(11)    0.1059
S(12)    0.0592
S(13)    0.0093
S(14)    0.0562
S(15)    0.0966
S(16)    0.0623
Total    1.0000

 

Recall that in the left column of this table are listed Simbo’s behaviors and in the right are listed the various probabilities that describe how frequently Simbo actually executes these behaviors.  Now, using our formula, we can calculate the amount of surprise associated with each of Simbo’s behaviors.  We’ll add this in a third column (in order to save some space, we’re also going to write pi in place of Prob[S(i)]):

 

S(i)     pi       Surprise(pi) = -1*log[pi]
S(0)     0.0374   4.740818 bits
S(1)     0.0249   5.327710 bits
S(2)     0.0031   8.333516 bits
S(3)     0.0654   3.934566 bits
S(4)     0.0748   3.740818 bits
S(5)     0.0592   4.078259 bits
S(6)     0.0966   3.371833 bits
S(7)     0.0498   4.327710 bits
S(8)     0.0841   3.571750 bits
S(9)     0.0903   3.469130 bits
S(10)    0.0249   5.327710 bits
S(11)    0.1059   3.239226 bits
S(12)    0.0592   4.078259 bits
S(13)    0.0093   6.748554 bits
S(14)    0.0562   4.153286 bits
S(15)    0.0966   3.371833 bits
S(16)    0.0623   4.004624 bits
Total    1.0000

 

 

Looking through the above table, notice how the behavior with the largest probability, S(11), is associated with the smallest amount of surprise (about 3.24 bits) and that the behavior with the smallest probability, S(2), is associated with the largest amount of surprise (about 8.33 bits), conforming to our intuitive analysis of surprise.  Now, in addition to this sort of table which shows us in detail the amount of surprise associated with any given behavior, it would be nice to have some way to make an overall summary of these details.  What we would like is one single number that we can use to represent the whole list of numbers.  Now, as it pertains to random events, like Simbo’s behaviors, the mathematical tool of choice for creating such a summary is called the expected value.[11]  For the calculated values of surprise shown above, the way we calculate the expected value is as follows.  We take each value, multiply it by the probability associated with that value and then add up all of these products.  This is done in the following table:

 

S(i)     pi       -1*log[pi]      pi*(-1*log[pi])
S(0)     0.0374   4.740818 bits   0.177307 bits
S(1)     0.0249   5.327710 bits   0.132660 bits
S(2)     0.0031   8.333516 bits   0.025834 bits
S(3)     0.0654   3.934566 bits   0.257321 bits
S(4)     0.0748   3.740818 bits   0.279813 bits
S(5)     0.0592   4.078259 bits   0.241433 bits
S(6)     0.0966   3.371833 bits   0.325719 bits
S(7)     0.0498   4.327710 bits   0.215520 bits
S(8)     0.0841   3.571750 bits   0.300384 bits
S(9)     0.0903   3.469130 bits   0.313262 bits
S(10)    0.0249   5.327710 bits   0.132660 bits
S(11)    0.1059   3.239226 bits   0.343034 bits
S(12)    0.0592   4.078259 bits   0.241433 bits
S(13)    0.0093   6.748554 bits   0.062762 bits
S(14)    0.0562   4.153286 bits   0.233415 bits
S(15)    0.0966   3.371833 bits   0.325719 bits
S(16)    0.0623   4.004624 bits   0.249488 bits
Total                             3.857764 bits

 

This number that we calculated, 3.857764 bits, is the expected surprise associated with Simbo’s behaviors and it is used to represent the entire list of individual surprise values associated with each of Simbo’s behaviors.  Since each of the amibos has a probability distribution, we can calculate the expected surprise for each amibo and this is exactly the number that is displayed beneath each amibo’s performance frame.  Note that Rumbo’s conditional probabilities make the calculation of its expected surprise somewhat more complicated than the calculation of Simbo’s or Zoombo’s expected surprise.  We won’t go into those details here, but the gist of the calculation remains the same.  The number that results represents the entire list of surprise values calculated for each of Rumbo’s behaviors.
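The whole table boils down to a short calculation, sketched here in Python (the probabilities are copied from Simbo’s table above; the variable names are my own).  Information theorists know this quantity as the Shannon entropy of the distribution:

```python
import math

# Simbo's probability table, copied from the text above
probs = [0.0374, 0.0249, 0.0031, 0.0654, 0.0748, 0.0592, 0.0966,
         0.0498, 0.0841, 0.0903, 0.0249, 0.1059, 0.0592, 0.0093,
         0.0562, 0.0966, 0.0623]

def expected_surprise(ps):
    """Sum of p * (-log2 p) over all behaviors: the expected surprise in bits."""
    return sum(p * -math.log2(p) for p in ps if p > 0.0)

# Approximately 3.857764 bits, matching the Total row of the table
print(expected_surprise(probs))
```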

Now, I started this discussion of surprise by saying that there was one particular subset of preference rankings that is of special interest to our discussion of regulation and that it had something to do with this notion of surprise.  Here it is: there is a subset of preference rankings that have the effect of removing or at least minimizing the expected surprise associated with Zoombo’s behaviors.  In other words, whenever Rumbo responds to Simbo with respect to one of these “special” preference rankings, the effect is that Zoombo’s surprise is reduced to a minimum.  Ideally it would remove all of the surprise associated with Zoombo, but this is not always possible given three arbitrary amibos.  However, whenever the surprise cannot be driven to zero, it can be forced to be as small as possible, provided Rumbo is equipped with one of these special preference rankings.

Summary: key to any process of regulation is that the regulator is somehow analyzing the options before it and selecting the “best”, where “best” is defined by some sort of preference ranking over the set of all possible outcomes. Although this is true in general, there is a particular type of preference ranking that is of special interest to us here.  This type of preference ranking has the property of reducing the expected surprise associated with the outcomes produced.

 

This concludes our clarification of the idea of Regulation. Let’s move on to the idea of a Model.

 

Back to top

Models

The word model can be defined in various ways and the Good-Regulator Theorem makes use of its own particular way of defining it.  Before we get to that definition, let’s take a look at how the Wiktionary defines it:

model (plural models)[12]

1.   A person who serves as a subject for artwork or fashion, usually in the medium of photography but also for painting or drawing.

The beautiful model had her face on the cover of almost every fashion magazine imaginable.

2.  A miniature representation of a physical object.

The boy played with a model of a World War II fighter plane.

3.  A simplified representation used to explain the workings of a real world system or event.

The computer weather model did not correctly predict the path of the hurricane.

4.  A style, type, or design.

He decided to buy the turbo engine model of the sports car.

5.  The structural design of a complex system.

The team developed a sound business model.

6.  A praiseworthy example to be copied, with or without modifications.

British parliamentary democracy was seen as a model for other countries to follow.

7.  (logic) An interpretation function which assigns a truth value to each atomic proposition.

 

 

If you examine each of these definitions, one thing you will notice is that the word model is actually an abbreviation for “model of X”, where the X is whatever the model represents.  Let’s review the examples of models given in the above definition and make this explicit:

1.        The beautiful model [of what you could look like if you imitate her] had her face on the cover of almost every fashion magazine imaginable.

2.       The boy played with a model of a World War II fighter plane.

3.       The computer weather model [of the weather] did not correctly predict the path of the hurricane.

4.       He decided to buy the turbo engine model of the sports car.

5.       The team developed a sound business model [of the business].

6.       British parliamentary democracy was seen as a model [of democracy] for other countries to follow.

7.       [No example is given, but the principle in question still applies in the sense that the set of truth values referred to compose a model of the set of atomic propositions].[13]

 

 

The point here is that the idea of Model is essentially a relation between two things whereby one of them (the model) is supposed to represent the other.  This representational aspect is one thing that all the definitions have in common.  Perhaps the main way that they differ can be seen in the precise nature of that representation (how it is accomplished, what it is used for, etc.)

We too will use a definition of model that relies on this idea of representation, although it boils that idea down to its most abstract essence, which is one of blunt association.

At the foundation of any sort of representation is the idea that at least two things are being associated with one another in some way.  The exact nature of that association is more or less irrelevant, and as long as there is some sort of an association then we can effectively use one of the things to represent the other.  The two things may be glued together, tied together or just located in the same vicinity.  Perhaps they share a common color, size or weight.  Maybe they are habitually used together or perhaps one is a natural consequence of the other.  And the association can even be as arbitrary as that formed when two people simply agree that the salt and pepper shakers represent line-backers while the ketchup represents the quarterback.  However the association is established, as long as two things are associated we can use one to represent the other.

But that is only part of our definition of model. In addition to this idea of blunt association, we will also allow for the possibility that the representational relationship between the two things could be a kind of one-way street.  That is, it might be the case that while X represents Y, Y does not represent X.  For example, we readily think of a toy car as a model of a real car, but it seems a little odd to go the other way around.  Not that we can’t use a real car as a model for a toy car (toy designers do exactly that), but it’s just not what we usually mean.  On the other hand, with a pair of identical twin brothers, say, either could be considered as a model for the other (perhaps in some medical procedure).

So, in addition to blunt association, our definition of model will allow for the possibility of two-way representation, but it will not require it. 

Mathematicians have a simple tool that they use to accomplish both of these tasks: a mapping.  The word function is also used to label this conceptual tool, but we will stick with the word mapping. The idea is very simple.  First of all, a mapping is always between two sets, although the two sets might actually be the same set.  To illustrate the idea let’s consider two actual sets; let’s call them set A and set B. Let’s suppose the set A is comprised of a pencil, a baseball and a paper clip.  I purposely chose these objects to illustrate that a mapping can be set up between any two sets, regardless of what they contain.  Next, let’s suppose that the set B is comprised of every card in a standard deck of 52 playing cards.  I chose a deck of cards because I want to show that a mapping doesn’t require that the two sets have the same number of elements, or even nearly the same number.

Now, here is how we set up a mapping from the set A to the set B.  First of all we can name the mapping.  We can name it anything we like, but let’s name it m.  Next, we can use the following mathematical notation to symbolize the fact of the mapping:

m: A → B

That symbol is read “m is a mapping from the set A to the set B”.

Finally, we take each element in A and we associate it with an element in B.  How do we accomplish this association?  By any means we wish.  The nature of the association is not important to the idea of mapping, only that we set up some sort of association.  One easy way to accomplish such an association is just to list the elements of A along with the associated element from B.  For example:

 

A           B
pencil      Jack of Clubs
baseball    Three of Diamonds
paperclip   Seven of Hearts

 

Now we arrive at the point of this whole discussion.  Once such a mapping has been established from the set A to the set B, we will say that B is a model of A.

Now, at first blush this might seem like a far cry from what we mean when we talk about model planes or weather models, but a closer examination reveals that it does, in fact, capture the essentials.  What is essential about a model airplane is that there is, firstly, an obvious association between the set of model airplane parts and the set of real airplane parts.  Secondly, the representational relationship between the two is basically one-way.  It is the little airplane that is the model of the big airplane.  Of course, nothing says that we cannot use the real airplane as a model for the toy, but the one-way nature of a mapping allows us to represent the common one-way nature of representation.
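In code, such a mapping is nothing more than a lookup table.  Here is the example above written as a Python dict (an illustrative sketch; Python is my choice here, not something the tutorial itself uses):

```python
# A mapping m from set A to set B, written out as a plain lookup table.
# Every element of A gets exactly one associated element of B -- that is
# all a mapping requires, so once m exists we may say B is a model of A.
m = {
    "pencil":    "Jack of Clubs",
    "baseball":  "Three of Diamonds",
    "paperclip": "Seven of Hearts",
}

A = {"pencil", "baseball", "paperclip"}
assert set(m.keys()) == A   # every element of A is mapped...
assert len(m) == len(A)     # ...to exactly one element of B
```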

Now, the above example uses only three cards out of 52 and you might be wondering what role is played by the 49 other cards in the set B.  Well, when you think carefully about any model, you will notice that there are often parts of the model that don’t seem to be doing any representational work.  A model airplane, for example, might show seams where the parts of the model are glued together.  What do these seams represent on the real airplane?  Nothing.  They do no representational work whatsoever and are just an extraneous part of the model, just as the other 49 cards in the deck are a (rather large) extraneous part of the model we considered above.

Finally, let’s suppose we do want the representation to go the other way.  Can we use our set A as a model of B?  The answer is yes, but since A has only 3 elements we will have to re-use at least some of these elements to make up the difference.  Calling our new model n, we might set up the following association table:

 

 

B                   A
Ace of Hearts       Pencil
Two of Hearts       Pencil
Three of Hearts     Pencil
:                   :
Queen of Hearts     Pencil
King of Hearts      Pencil
Ace of Clubs        Baseball
Two of Clubs        Baseball
Three of Clubs      Baseball
:                   :
Queen of Clubs      Baseball
King of Clubs       Baseball
Ace of Diamonds     Paperclip
Two of Diamonds     Paperclip
Three of Diamonds   Paperclip
:                   :
Queen of Diamonds   Paperclip
King of Diamonds    Paperclip
Ace of Spades       Paperclip
Two of Spades       Paperclip
Three of Spades     Paperclip
:                   :
Queen of Spades     Paperclip
King of Spades      Paperclip

 

Once again, having set up this association, we can say that A is a model of B. One thing that is obvious about this example is that it is quite “clumpy”.  We might say that the model lacks detail or resolution.  In our model, every heart card is represented by the pencil, every club is represented by the baseball and the rest of the cards are all represented by the paperclip. Actually, what we have illustrated here is something that is much closer to what we usually mean when we think of a model.  Consider a toy car.  One characteristic of such a model is that it lacks a great many details.  Maybe the dashboard is just a piece of plastic with some circular and rectangular shapes that represent the various dials and control panels of a real dashboard.  The engine is just another block of plastic with lumps and irregularities that are supposed to represent true engine parts.  Maybe there’s just empty space where the gas tank is supposed to be, etc.  When we look at the toy, we don’t have access to all of the information that we might have if we were looking at a real car.  This is similar to our current example in which we are using the set A as a model of B.
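The “clumpy” reverse mapping n can be built in a few lines (a sketch following the table above; the variable names are my own):

```python
# The clumpy mapping n from the 52 cards (set B) to set A, built by suit:
# hearts clump onto the pencil, clubs onto the baseball, and both
# diamonds and spades onto the paperclip, exactly as in the table above.
ranks = ["Ace", "Two", "Three", "Four", "Five", "Six", "Seven",
         "Eight", "Nine", "Ten", "Jack", "Queen", "King"]
suit_to_object = {"Hearts": "Pencil", "Clubs": "Baseball",
                  "Diamonds": "Paperclip", "Spades": "Paperclip"}

n = {f"{rank} of {suit}": obj
     for suit, obj in suit_to_object.items()
     for rank in ranks}

assert len(n) == 52                        # every card is mapped
assert n["Queen of Hearts"] == "Pencil"    # the hearts clump
assert n["King of Spades"] == "Paperclip"  # the spades clump
```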

Summary: although various definitions of the word model are in use, they are all grounded in the idea of a mapping that is set up from the set of attributes belonging to the thing modeled to the set of attributes of the thing that is the model. Sometimes this mapping results in the model bearing a strong visual resemblance to what it represents, as is the case with model airplanes, but this visual resemblance is not the only way to establish such a mapping and equally acceptable are purely arbitrary mappings, as when salt and pepper shakers are used to represent football players during a lunchtime recap of a recent football game.  As long as there is some sort of mapping from the set of A-attributes to the set of B-attributes, we will say that B is a model of A.

 

This concludes our discussion of models.  We now have everything we need in order to achieve a much richer understanding of the Good-Regulator Theorem.  Let’s move on to part 2.

 

Back to top

Part 2: What The Good-Regulator Theorem Tells Us About How The World Really Works

 

It’s time to get to the whole point of this tutorial which is to bring your pre-existing day-to-day and mostly intuitive understanding of the Good-Regulator Theorem to the kind of rich, high-level understanding that can be really useful to you in your most ambitious and complex pursuits.  In part 1 of this tutorial we developed the conceptual tools that we will need to accomplish this.  Now we will put those tools to work.   Let’s begin with the way Conant and Ashby paraphrased their formulation of the theorem: “every good regulator of a system must be a model of that system.”

Using the conceptual tools we developed in part 1 in conjunction with our amibo friends, we are now ready to see the true meaning of the Good-Regulator Theorem.  Let’s start this discussion with the amibos. In your exploration of the Animation Panel you certainly wondered about the big button that lies in the center of that panel and which is labeled “Adapt Rumbo To Simbo”.  Take a moment to flip over to the Animation Panel now and press that button.  When you do so the animation will stop (if you didn’t already stop it with the Stop button) and a sub-routine will analyze the game matrix and figure out how to arrange Rumbo’s probability distribution so that Rumbo will respond to Simbo in such a way that Zoombo’s expected surprise drops to a minimum.  Remember that this is what we mean when we say that Rumbo is regulating Simbo.  Actually, for the purposes of this tutorial the Game Matrix has already been specially designed so that it’s possible for Rumbo to completely eliminate Zoombo’s expected surprise and drive it all the way to zero, but this does not necessarily have to be possible in order for the Good-Regulator Theorem to hold.  It does, however, make for a nice, clear demonstration.

Once the sub-routine has completed it will cause a little window to open that says “The search has completed.  The good-regulator model has been installed.”  This message is accompanied by an “OK” button that you can push to close the window.  What will also happen is that the actual good-regulator model (mapping) will suddenly be displayed in a long window to the right of the Animation Panel.  The title at the top of the window will say “Rumbo as a model of Simbo” and right below that title you will see a mapping table of the type we considered in our discussion of models.  Here is an example of what you will see:

 

 

 

What we see displayed here is a mapping from the set that comprises Simbo’s behavioral repertoire, which we might refer to as S, to the set that comprises Rumbo’s behavioral repertoire, which we can call R.  If we name this mapping g (for “good-regulator”) then we can symbolize the above mapping as g: S → R.  As before, whenever we have such a mapping, we can say that R is a model of S. In other words, Rumbo is a model of Simbo.

Notice that this mapping displays the type of “clumpiness” that we discussed in the previous section on models.  Remember that this sort of clumpiness, though not a necessary attribute of a model, is one that often shows up in models.  There are two reasons for this clumpiness.  The first reason is that the mathematical definition of a mapping requires that every element in S be associated with some element in R. That is, each of Simbo’s behaviors must have a representative among Rumbo’s behaviors.  The second reason is that Simbo has more behaviors than Rumbo and so in order to fulfill the definition of a mapping, at least some of Rumbo’s behaviors have to be used more than once in the mapping.  Returning to the example of the toy car that we discussed earlier, the little plastic panel behind the toy steering wheel is all we have available to represent all of the dials and controls that show up on the real car’s dashboard.  In this case we are mapping the whole clump that consists of all of the dials and controls of the real dashboard onto the single strip of plastic that is the toy dashboard.  Likewise, looking at the example shown above, the clump of Simbo’s behaviors S(7), S(8) and S(13) all map onto Rumbo’s single behavior R(4).  Just as we can say that the toy dashboard represents the real dashboard, we can say that Rumbo’s behavior R(4) represents all of Simbo’s behaviors S(7), S(8), and S(13).  If you flip back over to the Animation Panel and move the mouse cursor over any Rumbo cell in the orange “model” panel, a little tool-tip text will appear to explain which of Simbo’s behaviors are being represented by that particular Rumbo behavior.

At this point you should flip back over to the Animation Panel to observe how the behavior of the amibos has changed since you pressed the “Adapt Rumbo To Simbo” button. Be sure to notice the following points:

·        Zoombo is now doing the same behavior, over and over, no matter what Simbo is doing.  Because Zoombo is now executing the same behavior over and over again, there is no more surprise associated with Zoombo’s behavior and the value shown for Zoombo’s expected surprise is zero. 

·        Simbo is still executing behaviors according to its original probability distribution.  There has been no change in this regard.

·        The reason that Zoombo keeps repeating the same behavior is that Rumbo is now responding to Simbo so as to always pick out the same behavior from the options that Simbo makes available to Rumbo. (The Game Matrix cells that correspond to this repeated Zoombo behavior are now illuminated in orange, the same color of the panel that displays the good-regulator mapping.)  This way of responding to Simbo is what we mean when we say that Rumbo is regulating Simbo in order to control Zoombo.

·        The magenta-illuminations in the Game Matrix correspond to those displayed in the good-regulator mapping to the right.  For example, when R(2) is magenta-illuminated in the Game Matrix it is also magenta-illuminated in the good-regulator mapping.  When S(6) is illuminated in the Game Matrix it is also illuminated in the good-regulator mapping, etc.

 

An important subtlety to notice here is that despite what the Good-Regulator theorem appears to be saying, Rumbo does not actually have to visually resemble Simbo either in whole or in part, although the possibility of such resemblance should not be excluded.  You certainly won’t see such resemblance in the amibo animations.  Rumbo will not start to “look like” Simbo just because Rumbo is regulating Simbo.  To say that Rumbo is a model of Simbo simply means that Rumbo’s behaviors correspond to Simbo’s behaviors according to some mapping g: S → R.  A more colloquial paraphrase of this is “Rumbo is a model of Simbo in the sense that Rumbo’s behaviors are Simbo’s behaviors as seen through the mapping g.”

Another thing you should do is open up a behavior demo window for Rumbo and check out what has happened with Rumbo’s probability distribution.  Below is an example of what you will see:

 

In the above image the column dividers and view sliders have been set so you can see that for the S(3) column, all of the probability is concentrated on R(11). This means that whenever Simbo executes S(3), Rumbo will always execute R(11) (when the probability of an event is 1 then it is certain to occur).  This is a behind-the-scenes view of how Simbo’s behaviors are getting mapped onto Rumbo’s behaviors.  The mapping association is established by setting to 1 the conditional probability that Rumbo executes R(11) given that Simbo executes S(3) and setting to zero all of the other conditional probabilities in that column of the table. You should open up your own Rumbo behavior demo and examine this more closely.  You will see that every column is like this (although the particular Rumbo behaviors that receive the 1-probabilities will most likely be different).
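The way a mapping gets installed as a table of conditional probabilities can be sketched as follows (a hypothetical fragment for illustration: the mapping values and the size of Rumbo’s repertoire are assumptions of mine, not read from the Animation Panel):

```python
# Hypothetical good-regulator mapping: S(3) -> R(11), S(6) -> R(2)
g = {3: 11, 6: 2}
num_rumbo_behaviors = 17  # assumed size of Rumbo's repertoire

def conditional_column(s):
    """P[R(j) | S(s)] for each Rumbo behavior j, given the mapping g.
    The chosen behavior gets probability 1; every other behavior gets 0."""
    return [1.0 if j == g[s] else 0.0 for j in range(num_rumbo_behaviors)]

col = conditional_column(3)
assert col[11] == 1.0          # all probability concentrated on R(11)
assert sum(col) == 1.0         # the column is a valid distribution
```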

Another subtlety here which will not always be apparent in the animations is that because it is entirely possible that the same Zoombo behavior could appear more than once in a given Simbo column, for any given Simbo behavior, Rumbo might actually have at least two ways to make Zoombo do the same behavior.  (This was exactly the case we considered earlier when we saw that Zoombo’s Z(94) behavior appeared twice in the column beneath Simbo’s S(3) behavior.)  Whenever this occurs Rumbo could actually achieve the same minimum value for the expected surprise without having to behave according to a mapping.  Remember that a mapping requires that we associate just one of Rumbo’s behaviors with any given Simbo behavior, but if Rumbo’s goal shows up twice in the same column then Rumbo could achieve that goal with two different behaviors, and using both would disqualify the association as a mapping.  In this case Rumbo would no longer be a model of Simbo, although the same minimum value of surprise would be attained.  In order to resolve this difficulty, Conant and Ashby specify the economically reasonable assumption that a “good-regulator” not only achieves the lowest amount of surprise, but that it also does this as simply as possible.  In other words, a truly good regulator is not just an optimal one, but is also maximally simple.  Thus, even though Rumbo could choose between its various ways to achieve the same goal, when it is respecting this simplicity condition then it just picks one of these ways and always does that.  Rumbo is a “good-regulator” when it both minimizes the expected surprise in Zoombo’s behaviors and when it does so as simply as possible.

You should also open a behavior demo for Zoombo and examine how its probability distribution has changed.  What you will see is that all of the probability is concentrated on the single behavior that Zoombo is repeating.  All of the other Zoombo behaviors have probability zero.  This brings us back to the idea of a preference-ranking discussed in the section on Regulation.  In this case, Rumbo’s most preferred outcome is to get Zoombo to always do whichever behavior has been given probability 1.  As far as the preference ranking is concerned, we are free to arrange the remaining behaviors however we wish because with the given type of Game Matrix Rumbo will always succeed in attaining its most preferred outcome.

Once again I should point out that we have been examining the very special case in which Rumbo’s most preferred Zoombo behavior shows up at least once in every Simbo column.  In the real world this will not always be the case which is why we need the preference ranking.  In the more general case, in order to be successful, Rumbo needs to know what to do when its favorite Zoombo behavior is not available.

 

 

Proving the Theorem 

The Good-Regulator Theorem is exactly that, a theorem, and theorems must be proven rigorously.  Up until now, all we have done is examine in detail what the Good-Regulator Theorem establishes: that the simplest, optimal regulator of a system must be a model of that system.  It is time to consider the proof of this assertion.

Conant and Ashby’s original proof of the theorem requires some mathematics that are a good deal beyond the scope of this tutorial.[14]  Fortunately, theirs is actually a special case of a much more general theorem, the proof of which is almost trivial.  It is accomplished as follows:

The argument begins by defining an “optimal” regulator (Rumbo) as one that always responds to the system’s behaviors (Simbo’s behaviors) so as to respect some pre-established preference ranking defined over the outcomes (the behaviors of Zoombo).  Now, given that the regulator (Rumbo) does this then at most one outcome will be produced in response to any given system behavior.  This is true because each time the system executes a given behavior, the preference ranking imposes an order on the outcomes that are made available to the regulator by the execution of that system behavior and this means that there will always be exactly one outcome that will be “best” according to that preference ranking.  And although it could be the case that the regulator could have two or more responses that might produce that one particular outcome (because that outcome might appear two or more times in the same column of the Game Matrix), if we impose the economically reasonable assumption that the regulator produces this outcome in the simplest manner possible, then the regulator will always use the same behavior to produce that outcome.  When the above conditions have been fulfilled, the obvious result is that each system behavior will be permanently associated with some one regulator response, thus creating a mapping from the behaviors of the system to those of the regulator.  This mapping makes the regulator a model of the system.
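The argument above is constructive, and we can sketch it as a tiny program (a sketch only: the 3x3 Game Matrix, the outcome names and the ranking here are invented for illustration).  Ties among equally good responses are broken by always taking the smallest-numbered one, which plays the role of the “maximally simple” choice:

```python
# Game Matrix: which outcome each (simbo, rumbo) behavior pair produces.
game_matrix = {
    (0, 0): "Z1", (0, 1): "Z0", (0, 2): "Z0",
    (1, 0): "Z2", (1, 1): "Z0", (1, 2): "Z1",
    (2, 0): "Z0", (2, 1): "Z2", (2, 2): "Z0",
}
rank = {"Z0": 0, "Z1": 1, "Z2": 2}   # lower rank = more preferred outcome

def good_regulator(matrix, rank, num_s=3, num_r=3):
    """For each system behavior s, pick the response r producing the most
    preferred available outcome, breaking ties with the smallest r index.
    The result associates exactly one response to each s: a mapping g."""
    g = {}
    for s in range(num_s):
        options = [(rank[matrix[(s, r)]], r) for r in range(num_r)]
        g[s] = min(options)[1]   # best outcome first, then simplest response
    return g

g = good_regulator(game_matrix, rank)
assert len(g) == 3   # one response per system behavior -- the regulator
                     # is a model of the system via the mapping g
```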

The above proof is extremely simple, almost trivial: so much so that it barely deserves to be called a theorem.  Conant and Ashby’s approach, on the other hand, is far less obvious and thus, for that reason at least, much more interesting, although it requires much more mathematical training to understand.  Their approach differs from the above only in the definition of “optimal” regulation.  They still require a preference ranking, but they require a particular type of preference ranking: one whose ultimate result produces outcomes with the lowest attainable expected surprise.[15]  This special case of “optimal” regulation is truly a special case indeed, because it captures the idea of stability, which is so crucial to the integrity of systems in general.  By ignoring the stability component, as we do in the above proof, we end up with a lot of so-called “successful” regulators that could wind up destroying the integrity of the systems they are supposed to regulate.  Although this would be unacceptable in any practical sense, it makes the proof of the Good-Regulator Theorem much easier for a lay-person to follow.
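To make “expected surprise” slightly more concrete, here is a minimal sketch assuming the standard information-theoretic reading, in which the expected surprise of an outcome distribution is its Shannon entropy (the footnoted definition in this tutorial may differ in detail):

```python
import math

def expected_surprise(probs):
    """Shannon entropy in bits: the probability-weighted average of
    -log2(p), one standard way to formalize 'expected surprise'."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A regulator that always yields the same outcome produces no surprise,
# while one that scatters outcomes uniformly produces the maximum.
certain = expected_surprise([1.0])        # 0.0 bits: no surprise at all
uniform = expected_surprise([0.25] * 4)   # 2.0 bits: maximal surprise
```

Conant and Ashby’s “optimal” regulator is, in these terms, one that drives this quantity as low as the Game Matrix allows.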

But the above approach has more than just pedagogical value.  Conant and Ashby’s version of the Good-Regulator Theorem focuses on a highly idealized entity: an optimal regulator of a system, which is to say one that achieves the maximum amount of stability for a given Game Matrix.  Although such a device is highly interesting from a theoretical standpoint, the truth is that such a perfect device is probably only rarely, if ever, found.  Very good but still imperfect regulators are much more likely.  The utility of our simpler and more general version of the Good-Regulator Theorem is that it covers all of these cases as well.

 

 

Summary: The Good-Regulator Theorem tells us that “every good system regulator must be a model of the system it regulates.”  In terms of our amibos, this means that if Rumbo is responding to Simbo so as to minimize the expected surprise of Zoombo’s behaviors, and if Rumbo is doing this as simply as possible, then it must be true that Rumbo is a model of Simbo in the sense that Rumbo’s behaviors are Simbo’s behaviors as seen through some mapping g.

Back to top

Part 3: Putting It All Together

By now you have a much richer, higher-level understanding of the Good-Regulator Theorem.  Still, we need to examine more specifically how it relates to us all as people who might not work for NASA.  Toward that end, let’s review a number of key insights discussed in this tutorial:

·        Your life is a system (we can call it your life-system),

·        You have to regulate that system,

·        In order to optimally regulate your life-system in the simplest way possible, you must become a model of that system.

That last statement is not as odd as it might sound.  It sounds a little odd because so many of the things in our daily lives that we refer to as models bear a strong visual resemblance to what they model (boats, cars, houses, etc.), but in our current context the word is being used much more generally and certainly does not imply any sort of visual resemblance (nor does it exclude it, for that matter).  It certainly doesn’t mean that you have to come to “look like” your life-system in any visual sense.

What it does mean is that to the extent that you wish to optimally regulate your life-system in the simplest way possible, you must come to represent your life in the same way that the following string of words “The parrot ate the peanut” represents an event in which an actual parrot ate an actual peanut.  Clearly there is nothing about that string of words that happens to look anything at all like a parrot eating a peanut, yet, under our current interpretation of the idea of Model we can say that the string of words is a model of the event it describes.  It is a model by virtue of the mapping that has been set up between the elements in the actual parrot-peanut system that produced the event in question and the string of words that represents that event: the word parrot is mapped to (associated with) the actual parrot, the word ate is mapped to the act of eating and the word peanut is mapped to the actual peanut.

To clarify how all this relates to you and your life-system, let’s use our amibos to illustrate the three insights listed above.  First of all, let’s imagine that Simbo is your life-system.  Just as Simbo has behaviors, your life-system can be seen to have behaviors in the sense that it can change from one state-of-affairs to another: the weather can change; the political climate can change; the economy can change; the people can change, etc.  The net result of all of these state-changes (these life-system behaviors) is that at any given moment you have before you a huge menu of possibilities, a giant list of options.  The most common way to say it is that you are in a particular situation.  This is exactly what happens with Simbo and Rumbo.  Each time Simbo executes one of its behaviors, the result is that Rumbo finds itself in a particular situation, meaning that Rumbo is presented with a list of options: these are, of course, just the accessible Zoombo behaviors that Rumbo can bring about.

Next, in this scenario, Rumbo represents you.  Like Rumbo, you also have a number of behaviors at your disposal. Furthermore, just as Rumbo executes its behaviors according to a table of conditional probabilities, what you do amounts to the same thing.  For example, whether you execute your “lunch eating” behavior depends on various conditions ranging from the time of day to whether there is any food to eat, etc. The probability that you will eat lunch at noon is much higher than it is at 11:00 and higher still than it is at 4:00 am.
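A table of conditional probabilities of this sort can be sketched as follows; the hours and probability values are entirely hypothetical, chosen only to mirror the lunch example:

```python
import random

# Hypothetical conditional-probability table: P(eat lunch | hour of day).
# The numbers are invented purely to illustrate the idea.
p_lunch_given_hour = {4: 0.01, 11: 0.30, 12: 0.90, 16: 0.10}

def eats_lunch(hour, roll=None):
    """Execute the 'lunch eating' behavior with a probability that is
    conditioned on the hour; hours not in the table default to 0."""
    if roll is None:
        roll = random.random()  # a random number in [0, 1)
    return roll < p_lunch_given_hour.get(hour, 0.0)
```

The point of the sketch is just that your behavior, like Rumbo’s, is governed by probabilities that shift with the conditions you find yourself in: noon makes lunch far more likely than 4:00 am does.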

Furthermore, the result of combining any given one of your behaviors with any given one of your life’s behaviors is that some particular outcome will be produced. These outcomes that are actually produced correspond to the behaviors that Zoombo actually executes, and the set of every possible such outcome is what we can consider to be Zoombo’s complete behavioral repertoire.

Now, to review, the Good-Regulator Theorem tells us that whenever Rumbo is regulating Simbo so as to fully minimize in the simplest way possible the expected surprise in Zoombo’s behaviors, it must be the case that Rumbo is a model of Simbo in the sense that Rumbo’s behaviors are Simbo’s behaviors as seen through some mapping g.  As this applies to your life, the theorem tells us that whenever you are regulating your life-system so as to fully minimize in the simplest way possible the expected surprise in the outcomes you obtain, it must be the case that you are a model of your life-system.  Again, you are a model of your life-system in the sense that your behaviors are your life-system’s behaviors as seen through some mapping g.

Once again, this does not necessarily mean that you somehow resemble your life-system in the visual sense any more than salt and pepper shakers resemble football players during a lunchtime recap of last Sunday’s Super Bowl game.  What it means, quite simply, is this:

Whenever your life-system repeats any given behavior x, you always do the same thing.

So, for example, if your mapping g maps your behavior y to life’s x, then whenever your life does x, you do y.  Of course, when you create this reliable sort of association between y and x, you turn y into a representation of x, and when each of your life-system’s behaviors is mapped to one of yours in this fashion, you become a representation (model) of your own life-system. Another way to say it is:

Whenever you find yourself in a given situation, you always do the same thing.

To sloganize it as a bumper-sticker we might say “The same situation evokes the same response.”  Yet another way to say it is that you have to become very predictable.  To become a model of your own life-system (i.e. to always behave according to some mapping g) is to become as predictable as a machine.  Anyone with access to the mapping g can always predict what you are about to do from knowledge of the situation you are in.

So in a nutshell, and as seen from a high level, this is what the Good-Regulator Theorem has to do with your life.  Simply put, it describes the way you must be behaving whenever you are successfully minimizing the amount of surprise in the outcomes you bring about.  Take a moment to flip back to the Animation Panel again to observe Rumbo’s responses to Simbo (I’m assuming that you have pushed the “Adapt Rumbo To Simbo” button and thus found and installed the good-regulator model).  What you will see is exactly what I am describing. Whenever Simbo repeats a behavior (i.e. whenever Rumbo is confronted with the same situation), Rumbo always does the same thing.  This is absolutely necessary if Rumbo is to minimize, as simply as possible, the expected surprise in Zoombo’s behaviors.  Back in the Introduction to this tutorial I encapsulated all of this into the following paraphrase of the Good-Regulator Theorem, which will now make a lot more sense to you:

To the extent that you wish to optimally regulate in the simplest way possible any system that confronts you with a set of distinct, recognizable situations, then whenever you are confronted with any given one of those situations, you must always do the same thing. Conversely, to the extent that you don’t always do the same thing in any given situation then either you will cause instability or else there will be a simpler way to cause the stability you have managed to achieve.

Now, at this point you may be questioning the value of all this.  Maybe it strikes you as too simple, even trivial.  “That’s it?  Whenever I’m in the same situation, I always do this same thing?  That’s the so-called conceptual power-tool I have worked so hard to understand?”

Well, in one sense, yes, that is it. And yes, it is simple.  But I assure you it is anything but trivial.  To see why with the help of a metaphor, let’s recognize that a single brick is also very simple, but when we have enough bricks we can arrange them together to create some magnificent structures.  But we don’t need the metaphor to make the point.  The simplicity of the Good-Regulator Theorem derives directly from the simplicity of the mathematical idea of a mapping (a.k.a. function).  This idea of a mapping is grounded in the idea of some sort of association between any two given things (physical or conceptual).  Now, as far as ideas go, this is just about as simple as they come.  But despite its simplicity (or perhaps because of it) the idea of such an association has been used, like a conceptual power-tool, to construct much of the vast cathedral of modern mathematics.

I also explained in the Introduction that the Good-Regulator Theorem is a direct cultural analog to the Crick and Watson double-helix model of DNA.  Recall that in that model of DNA we have just four molecules A, T, C and G (for Adenine, Thymine, Cytosine and Guanine respectively). Each of these molecules can form a strong bond with just one of the others so that the four molecules actually become just two basic components: The A-molecule can form such a bond with T to form an A-T component and the C-molecule can form such a bond with G to form a C-G component.  Now, this is a relatively simple set of basic building blocks out of which is constructed all of the unimaginably complex varieties of life on Earth.  And in an analogous way, all of the unimaginably complex varieties of habits, routines, rituals, and skills (along with the multitudinous artefacts needed to perform behaviors) that compose the entire Universe of Human Culture are built entirely from this one particular type of decision: the decision to always do the same thing in a given situation.
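The pairing rule itself is a nice miniature of the mapping idea at the heart of the theorem.  As a sketch (the biology here is just the standard A-T, C-G pairing; the code is purely illustrative):

```python
# The Watson-Crick pairing rule expressed as a mapping: each base is
# reliably associated with exactly one partner base.
pair = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Map each base of one strand to its partner base; the resulting
    strand represents the original in the same sense that a good
    regulator represents the system it regulates."""
    return "".join(pair[base] for base in strand)
```

Because the association never varies (A always pairs with T, C always with G), either strand is a perfect model of the other: the same reliability the theorem demands of a good regulator’s responses.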

So, is it simple?  Yes. Is it trivial?  Absolutely not. 

But then, maybe you just like surprises.  Perhaps the idea of being so predictable strikes you as a bit dull.  Maybe it occurs to you that you don’t really want to regulate your life.

Well, again, before you give up on the Good-Regulator Theorem, let’s stop to consider that no matter how much you may think you like surprises, there are lots and lots of surprises you would much rather do without.  You’d probably like to avoid a surprising loss of a limb or a surprising world-wide famine.  I doubt you’d ever enjoy a surprising brain tumor or a surprising shift in global temperature.  You surely wouldn’t like it if your boss surprised you by firing you or if the stock-market took a surprising nose-dive.  It is another general principle of the Universe that there are many, many more ways that something can go wrong than go right.  When you think about it a bit (life’s little pleasant surprises notwithstanding) there is really a great deal about your life that you are perfectly happy to keep completely and utterly predictable.  The word of choice here is stability.  To reduce surprise is to increase stability, and vice versa.  Here are just a few common examples of things most people would like to keep stable in their lives:

·        A food source,

·        Supply of breathable air,

·        Drinkable water,

·        Place to live,

·        Sleep,

·        Medical services and treatments,

·        General health,

·        Exercise,

·        Relaxation,

·        Wardrobe,

·        Family relationships and friendships.

Of course, the one thing that just about everybody wants to keep stable and predictable is their access to money. Although unpredictable increases in one’s money supply might seem to be a good thing, unpredictable decreases can be disastrous.  Furthermore, even unpredictable increases can be troublesome. Many grand-prize lottery winners are unequipped to handle such huge amounts of luck and after a few years of wild spending find themselves not only broke but in deep debt.[16]  Also, it is something of a frustrating financial truism that any financial instrument that might increase in value might just as easily decrease in value.

I hope these arguments have made the point that surprise is often a burden (we might call it the elephant of surprise) and that stability is mostly a good thing.  With respect to all of the surprises that life might offer you, life’s pleasant surprises are a relative minority.  Furthermore, these sorts of pleasant surprises (a new shirt, a new friend, a creative insight) are really only enjoyable if they take place within a much larger context of stability. A new shirt is weak consolation when you get laid off from work.  A pleasantly surprising stretch of sunny weather is hard to enjoy when you’ve just learned that you have Parkinson’s disease.  So, however boring it might sound to be predictable in this way, the benefits (good health, steady paycheck, fulfilling relationships, etc.) are so appealing that it makes that so-called “boring” predictability well worth the occasional yawn.

Furthermore, let’s remember that such predictability is fundamentally a choice, at least for us human beings.  As pointed out in the Introduction, our Free Will (i.e., our freedom to surprise ourselves and others, or to choose not to by simply doing what we have always done) is one of the great themes of Religion, Art, Literature and Philosophy and is sometimes seen as humanity’s greatest gift and sometimes its curse.  Gift or curse, it is certainly the cornerstone of our ability to adapt and thrive in new environments.  Thanks to the fact that we have Free Will, we are never absolutely obligated to be so predictable, unless we wish to stabilize the outcomes we achieve in the simplest manner possible. But if, for some reason, we can tolerate or even enjoy the consequences of our own unpredictability, then we can just stop being so predictable.

In any case, to the extent that we wish to stabilize in our lives all of the varied and variable preconditions to health, wealth and happiness, and to the extent that we wish to do that as simply as possible, it would be useful for us to understand how to cause that highly efficient sort of stability.  And therein lies a good deal of the value of the Good-Regulator Theorem.  The theorem tells us what we must do in order to stabilize the outcomes we achieve as simply as possible: we must become representations of our life-systems, in the sense that our own behaviors must be our life-system’s behaviors, as seen through some mapping g.

But this realization raises another question: which mapping?  How should we map life’s behaviors onto our own, that we may become models of our own life-systems?  When situation x arises, should I do y or should I maybe do w instead?

This is not an easy question to answer, largely because there is no single mapping g that will work for everyone, everywhere and at all times.  Each person has to find his or her own specific answer to that question, given their specific circumstances.  The good news, though, is that although a truly precise and specific answer to this question is different for each person and circumstance, all of these specific and different answers actually have a good deal in common.  DNA offers us another good analogy here.  Each person’s particular DNA code is unique, but the vast majority of it is the same for all of us and is basically what links us all together as a single species.  The relatively few differences are certainly important, but nowhere near as important as the similarities.

So, let’s see if we can get a handle on what this mapping g might look like, at least in the common ways that apply to all of us.  In other words, let’s see how we might all wish to be predictable.  We will consider this question from two perspectives.  The first we might call the “expertise perspective” and we will take this angle throughout the rest of this part of the tutorial.  In Part 4 we will examine this question from the second perspective.

 

Regulation, Expertise and the Learning Imperative

To examine this question from the expertise perspective, we can begin by observing that the type of predictability we are considering absolutely requires the following:

1.       Recognition: we have to be able to recognize a given situation for what it is.

2.     Knowledge: we have to know what behavior to execute in that situation.

3.     Competence: we have to be able to actually execute that behavior.

Each of these is absolutely necessary to achieve the kind of predictability we are discussing.  The recognition is necessary because without it you run the risk of violating the mapping g and doing some behavior other than the one required by the mapping.  The knowledge is necessary because it is nothing more or less than a particular association in the mapping g between a given life-system behavior (situation) and a particular you-behavior.  And the competence is necessary because to lack it just means that you don’t actually have the behavior in your behavioral repertoire: you simply cannot do what you cannot do.

These three minimum and very general requirements for predictability (Recognition, Knowledge and Competence) collectively amount to what we might think of as a fundamental unit of expertise.  Of course, I am using the word expertise, here, in a very broad sense to apply as much to the mundane and trivial (as when I recognize that my shoe is untied, know that a shoe-tying skill is needed, and have the competence to execute that skill) as to the worldly and significant (as when a heart surgeon recognizes that his patient has heart disease, knows that a triple bypass is needed, and has the competence to perform the operation).

Given this interpretation of expertise, the conclusion we can draw from this discussion is that if you wish to simply and optimally regulate your life system in order to obtain the outcomes you wish to experience (i.e. if you want to get what you want out of life as simply as possible), then you must acquire expertise.  Specifically, you must learn to recognize situations for what they are, you must learn which skills are required in those situations, and you must learn to perform those skills.  In other words, you are going to have to do some learning.

And this brings us to a new question: which expertise?

Before we try to answer this question, let’s remember that the Good-Regulator Theorem has its uses and its limitations.  It certainly isn’t the only conceptual power-tool we need to be successful in life.  It helps us to see that we must acquire some sort of expertise if we are to regulate our life-systems, but it is stubbornly silent (for the most part) as to which expertise we must acquire.  On the other hand, it does frame and articulate the question and, most importantly, it shows us just how important it is to answer it: it is absolutely essential to stable well-being. Like it or not, by hook or by crook, somehow or another, we must identify the expertise we need and then we must acquire it.

In this sense, then, the Good-Regulator Theorem can be seen as a learning imperative or perhaps a “Fundamental Theorem of Liberal Education”.  By emphasizing the utter importance of learning while refusing (mostly) to tell us what to learn, it throws open the floodgates of curiosity, hands us a kayak and a paddle and urges us to plunge right in for the ride of a lifetime. What should we learn?  Anything and everything.  Learn it all, the theorem seems to say, if you can: History, Biology, Spanish, Drawing, Cooking, Fly fishing, how to recognize a rare coin, how to play the piano, juggling, how to play chess, the Heimlich manoeuvre, ballet dancing, how to make espresso coffee, how to sharpen a pencil, soccer, Japanese, how to dress a wound, Geography, how to make a paper airplane, how to make knots, how to grow orchids, pottery, knitting, how to make a French braid, textile weaving, brain surgery, how to write a business plan, computer programming, how to breed prize winning sheep, how to do CPR, football, how to play the harmonica, how to play backgammon, Finnish, how to put in contact lenses, how to make furniture, how to wire a house, how to communicate with your loved ones, how to fix a television set, how to whittle a whistle from a stick, how to make your own shoes, et cetera and so forth.

Of course, this is impossible.  As much as the Good-Regulator Theorem might exhort us to learn everything, it is just not the only natural law we have to respect. We simply cannot learn everything. We absolutely must choose. Fortunately, our life-systems go a long way to choosing for us.  Remember that Simbo never gives Rumbo free rein over Zoombo’s complete behavioral repertoire.  Instead, Simbo offers Rumbo access to only a restricted subset of that repertoire and blocks access to the rest.  Likewise, our own life-systems present us with certain opportunities while blocking us from others.  Another thing that lightens our burden of choice is the fact that just about everything we might learn is useful in some context, and the more passionate we are about learning something, the more we will tend to gravitate to just those contexts that make relevant what we are learning.  For example, if you are passionate about History, you will seek out situations that make your passion relevant: perhaps you will write history books or go to work in a museum.  If you want to learn Japanese you will surely seek out others who speak Japanese and who need you to speak it too.  Whatever we choose to learn, we don’t have to wait around passively until we might need it.  We can deliberately insert ourselves into situations where we must rely on it.  In fact, this is actually just another form of expertise and is surely one we would all be wise to acquire: the ability to recognize that our current situation is irrelevant to our expertise, the knowledge of which situations are relevant to it, and the competence to reposition ourselves into those situations.

Although these kinds of factors lighten our burden of choice, sooner or later we have to choose and when it comes to figuring out which expertise to acquire, the choices are often overwhelming.

But of all the expertise we might acquire, and despite its stubborn silence on the matter at large, there is one fairly precise category of expertise that the Good-Regulator Theorem actually does endorse as being especially useful for all of us.  This is that particular category of expertise that is needed for the design, construction and use of models.  Actually, it is even more specific than that, because the models in question are not just any old models made out of any old stuff.  First of all, clearly, the models must be representations of the systems we hope to regulate.  And secondly, as the theorem pertains to systems in general, the models have to be built from the same resources that will be used to regulate their respective systems. That is, if we hope to use some clay, strings and rubber bands to regulate some system, then we have to use those same clay, strings and rubber bands to build the model of that system. Of course, in the context of our current discussion of what we are calling life-systems, these regulatory resources are just human brains and bodies.  Thus, the Good-Regulator Theorem endorses that relatively specific category of expertise we need in order to transform our own brains and bodies into models (representations) of our respective life-systems.

To see how this endorsement is made, let’s first approach it by analogy.  Suppose somebody handed you a large box of rocks and said, “Here is a box of rocks and, oh, by the way, there are also some uncut, unpolished diamonds in there as well.”  Now, suppose you open the box and you look through the rocks, but all you see is a bunch of rocks.  We’re assuming here that the guy wasn’t lying, so the diamonds are there somewhere, but you don’t recognize them as diamonds because they aren’t cut and polished and you aren’t a geologist or a jeweller.  All you see are a bunch of rocks.  Based on just these assumptions we can deduce the following:

1.       As long as we can’t distinguish between a rock and a diamond, every rock in the box is valuable to us, just by virtue of the fact that it might be a diamond. 

2.     The expertise required for distinguishing between rocks and diamonds is also valuable to us.

Make no mistake about the first point: as long as we cannot distinguish between an ordinary rock and a true diamond, every rock in the box is valuable to us.  And this is not merely some potential or hypothetical or imaginary value. It is real market value that could be given a dollar amount, although the actual amount would have to be somewhat discounted from what it would be if we were certain we were pricing a genuine diamond; it would have to be discounted in order to account for the uncertainty involved.[17]  Also, the market in this case would have to consist only of people like us who couldn’t distinguish between rocks and diamonds: a single geologist milling about could obliterate both our ignorance and the value of the ordinary rocks.  In any case, given these assumptions (the uncertainty discount and an ignorant market) the dollar amount that we could give to a random rock would be both a good deal more than it would be for a rock known to be just a plain old rock and also a good deal less than it would be for a genuine diamond, but every rock would have value to us.
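The value of a random rock is just an expected-value calculation.  Here is a minimal sketch, with entirely hypothetical prices and proportions:

```python
def random_rock_value(p_diamond, diamond_price, rock_price, discount=0.0):
    """Expected market value of one rock drawn blindly from the box,
    with an optional extra discount to account for the uncertainty."""
    expected = p_diamond * diamond_price + (1 - p_diamond) * rock_price
    return expected * (1 - discount)

# Suppose 5 rocks in 100 are diamonds worth $1000 and a plain rock is
# worth $1: every unidentified rock then carries real expected value,
# roughly $50.95, far more than a known plain rock but far less than
# a known diamond.
value = random_rock_value(0.05, 1000.0, 1.0)
```

As the paragraph above notes, this price only holds in a market of equally ignorant buyers; certainty (a geologist) collapses each rock’s value to either `rock_price` or `diamond_price`.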

As to the second point, although every rock in the box would be valuable to us and anyone else who can’t distinguish between an uncut, unpolished diamond and an ordinary worthless rock, the real world at large is full of geologists and jewellers who can, and if any one of them should sneak his way into our closed market we will surely pay the price for our ignorance.  If we are to have any reasonable hope of getting a fair price for our rocks we absolutely must obtain the expertise we need to distinguish between diamonds and rocks.  Hence, that expertise is valuable to us.

This argument with the box of rocks and diamonds is analogous to the argument for the Good-Regulator Theorem’s endorsement of the expertise needed for the design, construction and use of models.  The logic runs as follows: The Good-Regulator Theorem tells us that “every good regulator of a system must be a model of that system”.  Because it must be a model of that system, it must also be inside the conceptual box that contains all of the ways we might model that system with the resources we have at hand to regulate it.  But that box also contains a great many models of our system that simply aren’t good regulators of that system.  In other words, the box contains at least one “diamond” and a bunch of ordinary “rocks”.

Furthermore, as true as it was with the real diamonds and rocks, it is also true that as long as we cannot distinguish between our good regulator model and all of the others, then every possible model in the box is important to us, by virtue of the fact that it might be the good regulator we seek.  Also, because we are certain that at least one of the models in the box is the good regulator we require, the expertise needed to distinguish between models that make good regulators and those that don’t is also important to us.  This expertise is important because even though we can’t distinguish between them, the world is full of “experts” who can, although these “experts” aren’t really people but rather the various scientific laws that will surely penalize us in all sorts of ways if we attempt to use one of the wrong models to regulate the system.  If we are to have any hope of optimally regulating our system as simply as possible, we have to find the right model, which means we need the expertise to do so.

We need to add just one final step to this argument, which is to observe that the expertise needed to distinguish between models that make good regulators and those that don’t is just the same expertise needed to design, build and use models.  This last point can be seen by recognizing that we cannot reasonably and conclusively determine whether a given model will actually regulate the system it models without actually testing (using) it, which implies that the model must also have been designed and built, which implies that we must have the expertise needed to do all of that.  Conversely, if we have all that expertise, we are also equipped to determine whether a given model will regulate the system it models.  Thus, the two types of expertise are equivalent.

Although at this point we have pretty much explored the full extent of the Good-Regulator Theorem’s endorsement of modeling expertise, we can, if we wish, take a shot at making our understanding even more precise, as long as we are willing to look beyond the direct implications of the Good-Regulator Theorem and delve into its larger context, which is Science.  It could be plausibly maintained that the whole of Science arose in an effort to answer the very questions that the Good-Regulator Theorem refuses to answer. In a sense, the Good-Regulator Theorem tells us that some models are important, but it doesn’t tell us which models are important.  On the other hand, it is the whole purpose of Science to figure out which models are important.

Metaphorically speaking, where the Good-Regulator Theorem passes the puck, Science slaps it past the goalie and into the net.

In the Introduction to this tutorial I explained that an understanding of the Good-Regulator Theorem can be imagined to lie along a continuum.  At one end are those with a vague and rudimentary intuitive understanding and at the other are those with an enriched and highly developed (although perhaps still intuitive) understanding.  Well, if there is any class of people that can be found at the enriched end of the spectrum it is surely the scientists.  Perhaps nobody understands better the value of a good model than the scientist, although it is unlikely that any but a minority have ever even heard of the Good-Regulator Theorem.  Conant and Ashby published their paper in 1970 and in the last forty years it has achieved a quasi-famous status, but it is still relatively obscure. It is not what you would call mainstream science.  Most scientists have never even heard of the idea, although it is a conceptual cornerstone of the work they do on a daily basis.  As far as they are concerned, they have no need to prove that a good system regulator is always a model of the system it regulates.  They have seen enough good system regulators that are such models, and that is enough evidence to convince them to proceed with their work.

Does Science’s goal-scoring record mean we should all dedicate our lives to Science?  Certainly not.  We live in a complex civilization that defines and requires a bazillion different types of experts for its stable function and maintenance.  We need scientists, yes, but we also need small business owners, teachers, doctors, musicians, electricians, farmers, computer programmers, ballet dancers, lawyers, poets, corporate executives, hotel managers, politicians, plumbers, make-up artists, salesmen, historians, truck drivers, sculptors, mail carriers, journalists, physical therapists, et cetera and so forth.

On the other hand, Science has figured out a relatively systematic, crack-shot way to sift through all the bad or ineffective models and to pick out the really good ones.  Furthermore, the effects of all these successful, scientifically developed and tested good-regulator models are everywhere and accumulating.  We see their effects in our cell-phones and antibiotics; our eye-makeups and skin creams; our nuclear power plants; the options we have available for conceiving children; our desktop computers; our automobiles; our genetically enhanced vegetables, fruits and meat; our communications satellites; and even in the memory-foam pillows we rest our heads on while we sleep. Because of this state of affairs, although most of us won’t become scientists, anyone participating in this civilization of ours really should have access to an adequate science education.

So, even though it is not really deducible from the Good-Regulator Theorem as such, it appears to make a lot of sense to specify that the expertise needed for the design, construction and use of models should be at the very least a scientific form of expertise.

And we don’t have to stop there either, although once again we will have to step beyond the direct implications of the Good-Regulator Theorem.  That special branch of the Cultural Universe that owes its genesis to the passionate imaginations of Humanity’s poets, painters, sculptors, actors, dancers and musicians is a virtual Eldorado of intelligence and insight into the design, construction and use of models.  It couldn’t possibly be prudent to exclude the accumulated wisdom of Humanity’s artistic traditions from that category of expertise.  So, it’s probably safe to assume that it should be both scientific and artistic.

(At least.)

And in our quest to clarify what we understand as the Good-Regulator Theorem’s only obvious endorsement, let us certainly not forget that Humanity has been trying to figure all of this out for many, many thousands of years, and also that some 5000 years ago it finally figured out how to take notes on what it was learning. The accumulation of all those written notes over the past 5000 years or so is what we mean by the historical record.  And even though the Good-Regulator Theorem doesn’t actually say that we should do so, it just seems sort of obvious that it would be a good idea to try to use all of those notes to make our efforts to design, construct and use models as easy as possible.  In other words, in addition to being a scientific and artistic form of expertise, it’s probably a safe bet that it should also be grounded in a knowledge of history.

Again: at least.

By now I hope you can see where we are going with all of this.

Summary: The Good-Regulator Theorem tells us that to get what we want out of life in the simplest manner possible, we have to become highly predictable, which implies that we have to acquire expertise.  Although no expertise is excluded from this exhortation, at least one type of expertise appears to be endorsed by it as especially useful and important: the expertise needed for the design, construction and use of models, where these models are specifically to be representations of the systems we wish to regulate and where they are also to be constructed out of the resources we have available to regulate those systems.  Next, it seems a safe bet that this expertise ought to be at the very least of a sort that is scientific, artistic and grounded in a knowledge of history, although these latter attributes are not directly inferable from the Good-Regulator Theorem.  Finally, as it pertains to our current discussion, this expertise is most specifically that needed to use our own brains and bodies to represent our respective life-systems.

 

Although this endorsement by the Good-Regulator Theorem of expertise in general, and more specifically of modeling or representational expertise, is fairly specific, it still leaves many unanswered questions for which we, who carry the responsibility of designing our own life-system regulators, simply must find answers.  Fortunately, when we lift our heads up from our careful study of the Good-Regulator Theorem, we can see that we are surrounded by such answers, as will be shown in Part 4.

Back to top

Part 4: Modelophilia!

In Part 3 we examined the question “which mapping?” from the perspective of expertise: specifically, the expertise we need to use our own brains and bodies to represent our life-systems.  In this section we will return to this question and attempt to answer it from a different perspective, one that we might call the “modelophilia perspective” (to be explained shortly).

We began the discussion in Part 3 from the fairly high level of your own life-system. But that life-system is actually comprised of a huge number of smaller subsystems that include, for example, your family, your friends, your dog or cat, your laptop computer, the shoes you wear, the games you play or even that pile of dirty dishes in the sink.  All of these subsystems are systems in their own right.  They are all comprised of elements that form a whole and can change states (exhibit behaviors).  Furthermore, they are all governed by the Good-Regulator Theorem, which means that whenever you optimally regulate them in the simplest manner possible, you become a model of them in the sense that your behaviors are their behaviors as seen through some mapping.  In fact, the particular mapping that you use to regulate your life-system as a whole is actually comprised of all of the smaller mappings that you use to regulate these smaller systems.

Again, none of this is to say that in order to regulate the pile of dirty dishes in your sink you must somehow make yourself look like a pile of dirty dishes. What it means, in terms of expertise, is that whenever you recognize that there is a pile of dirty dishes in the sink, you know what to do about them and you have the competence to do it.  To the extent that you actually demonstrate that competence, every time there are dirty dishes in the sink, you turn yourself into a model or representation of the dirty dishes in the sense that the dirty dishes become a reliable predictor of your behavior.

Now, I’d like to point out that I did not actually mention anything about washing dishes in that example.  All I said is that you would recognize the dirty dishes, know what needed to be done and have the competence to do it.  I deliberately did not say anything about washing dishes because that implies a particular way to map one of your behaviors to the behavior of the dish-system you are trying to regulate, and I want to emphasize the point that the Good-Regulator Theorem makes no comment on such details, one way or another.  We certainly cannot use the Good-Regulator Theorem to argue that somebody simply must wash the dirty dishes!  The theorem only says that some sort of mapping must be used in order to optimally regulate the dish-system in the simplest way possible. Whether we actually wash the dishes, throw them in the trash or mail them home to mom is an open question.  This “open question,” of course, is none other than our question “which mapping?”

Naturally, it would seem a little preposterous if we actually did throw the dishes in the trash just because they were sitting in the sink smeared with food.  And I’m sure all of our mothers would find it outrageous if they began to receive from us weekly packages of dirty dishes.  For various reasons it just seems utterly obvious that there really is just one particular mapping to be used in this situation: the one that maps the dirty dishes in the sink to our own ability to wash them (or at least, to put them in the dishwasher).

But why is it so obvious?  Well, for one thing, it’s what everybody else does.  Nobody throws their dirty dishes in the trash (unless they are paper dishes, in which case it would be just as preposterous to wash them).  And we can hope that nobody mails their dishes home to mom.  Everybody just washes their dishes.  It’s just what we do.  If everybody threw their dishes in the trash, well, then maybe we would too, but nobody does that, and so neither do we.

Of course, it’s not really that simple and arbitrary.  There are some good reasons that we wash our dishes.  It’s less expensive than just tossing them in the trash and our moms would surely disown us if we mailed them home.  But if you think back to when you first started doing dishes, I’m pretty sure you’ll see that it never even crossed your mind that there might be some alternatives to the whole dirty-dish problem.  Back then, you almost certainly just started doing dishes because, well, that’s just what everybody else was doing.  Your mom or dad probably just asked you to wash the dishes and your dishwashing career took off from there.  Although when we think about it we can see that there are good reasons to continue washing dishes, we almost certainly didn’t start washing them for those reasons.  We just washed them because, well, they were dirty and that’s what one does with dirty dishes.

The point I hope to illustrate with this example is that although the question “which mapping?” can be a tough one to answer, it doesn’t always have to be so tough.  In fact, for the vast majority of the mappings that we actually need, it can be downright easy to figure out an excellent, if not the best, answer: just look around and see what everybody else is doing.

The Human Cultural Universe is teeming with vivid, full-color, live-action demonstrations and examples of simple and ingeniously effective answers to the question “which mapping?”  For just about every system you could ever wish to regulate, you can find a good to great to downright excellent way to map your behaviors to the behaviors of that system and transform your brain and body into a good-regulator model of that system.  You can find this mapping just by looking around to see what everybody else does in that same situation.  Did your life-system just execute some behavior that made your dog sick?  Try mapping to that behavior your own take-your-dog-to-the-vet behavior.  How do we know this is a good mapping to try?  Because that’s what everybody does!  Did your life-system just execute a behavior that resulted in your friend giving you a birthday gift?  Try mapping that behavior to your own say-thank-you behavior.  Again, we can be pretty sure that this is a good mapping to use because it is pretty much what everybody else does too in that same situation. But before you try to get creative by taking your healthy dog to the vet as a response to receiving a gift from your friend, you might want to look around to see if anybody else does that sort of thing.  If you don’t see anybody else doing it, then either you are a true innovator or else it might just be a bad idea.  In that particular case you will surely waste money on an unnecessary visit to the vet and your friend will probably feel like the gift wasn’t appreciated.

The Human Cultural Universe is a vast repository of the accumulated wisdom of our species.  You have spent your entire life studying its treasures and absorbing its wisdom into your own brain and body and you will continue to do so for the rest of your life. This process is such a normal part of everyday life that it is easy to miss it and become like the fish that can’t see the water it swims in, but it is always there.  Now, just as an easy way for a fish to see the water would be for it to jump up out of that water, an easy way for us to see the influence of the Cultural Universe on our lives is to refuse one of its recommendations, such as we did in the last paragraph when we took the healthy dog to the vet as a response to receiving a birthday gift.

I want to emphasize here that we really could do this sort of thing, if we chose to. It might be bizarre, but as bizarre as it might be, it would still be possible.  On the other hand, we can contrast this sort of situation with one, for example, in which an ice cube bursts into flames as a response to our touching it with a lit match.  Now, clearly, that would also be bizarre, but in a completely different way.  In the first case the decision to respond to the gift by taking the healthy dog to the vet is bizarre because it violates a system of culturally endorsed mappings between certain life-system behaviors and our own behaviors.  That’s not to say that these culturally endorsed mappings don’t also make sense. In fact, a primary reason they are endorsed by the culture is because they do make sense.  But no matter how much sense they make, it is still always possible for us to act against them.  On the other hand, we do not have this sort of freedom when it comes to setting an ice cube on fire.  That sort of thing is bizarre for a completely different reason: because it violates the laws of chemistry.

Although we certainly do have the physical freedom to decline any or all of the mappings offered to us by Culture, because these mappings are often sensible, it is pretty much the definition of Stupidity or perhaps Insanity to do this sort of thing on any sort of regular basis.  On the other hand, as suggested above, under certain highly restricted circumstances, these sorts of cultural norm violations can reveal themselves to be improvements and eventually come to be endorsed by the Cultural Universe, at which point we look back on them as ingenious innovations.  In fact, the entire Cultural Universe can be conceptualized as an enormous museum displaying every mapping that was once a violation of some cultural norm but eventually came to be seen as an ingenious innovation.

There are at least two important techniques that the curators of this museum use to display these mappings (these so-called “curators” are just us: you, me and the rest of Humanity).  The oldest technique is without a doubt just to train some human being to behave in accord with a given mapping and then to let that human being roam around illustrating it for anybody else who wants to learn how to use that mapping.  Today we refer to these people as “role models”, although the role of role-model existed long before we called it that and certainly pre-dates the existence of our own species.  Some well-known non-human examples of such observational learning include birds that have to learn their species-specific song dialects from other birds and honey bees that learn how to locate new sources of nectar from a special “dance” that is performed by the hive member that found it.[18]

The second technique, which so far appears to be a purely human sort of behavior, is to create some sort of external representation of the mapping (a physical artefact of some kind) that explicitly shows us how to map our own behaviors onto the behaviors of the systems we wish to regulate in order to bring about the outcomes we desire. These representational artefacts are what we have been referring to throughout this tutorial as models.  We have also been calling them representations.[19]

Here we should remember that what we have been calling a “good-regulator model” is actually a specific type of model.  Not every model is a true “good-regulator model”. The definition of a good-regulator model is that it is a model of a system that also optimally regulates that system, and does so in the simplest way possible.  Although many models might represent a system in one way or another, they might easily do so without actually regulating that system (optimally, simply or otherwise).

Let’s also recall that the term model, as we are using it here, does not simply refer to model trains or architectural models or road maps or business models.  It refers to representations in general: i.e. all models great and small, conceptual or concrete, whether they are made out of balsa wood, plastic, paper, wire, cotton balls or neural firing patterns (e.g. memories and mental models).  Understood in this way, it is obvious that these models are extremely important to us humans.  For one thing, they are literally everywhere.  It’s important to understand this point.  In order to make it as clear as we can, here is just a sampling of what we are referring to with the word model:

·        Any list is a type of model: a to-do list, guest list, book index, travel itinerary, list of ingredients, this list of models, etc.

·        A city street map is a type of model.

·        A reflection in a mirror is a type of model.

·        Any spoken or written sentence is a model of the real-world events or objects that form the topic of that sentence.

·        A restaurant menu is a model of the food the restaurant prepares and sells.

·        An accounting register is a model of a company’s financial activity.

·        A set of instructions for a game, such as Chess, is a model of that game.

·        A photograph is a type of model.

·        Your annual tax return is a model of your income over the year.

·        An audio or video recording is a model of the actual sounds or images used to make the recording.

·        A job description is a model of an employee’s role and responsibilities in a company.

·        A memory in your brain is a mental-model of some experience you lived.

·        A piece of sheet music is a model of a given piece of music.

·        A cooking recipe is a model of a given dish.

·        A library’s catalog is a model of the library’s books.

·        A template, such as a rubber stamp or a stencil, is a model of some pattern, form, block of text, etc.

·        A business plan is a model of a business.

·        A census is a model of a given population.

·        A project manager’s work-plan is a model of the tasks to be accomplished throughout a project.

·        A representative sample is a model of the substance or population that provided the sample.

·        An abstract symbol (e.g. a red cross, the word pencil) is a model of the actual thing, idea or institution represented by that symbol (e.g. the Red Cross organization, an actual pencil).

·        An ethical rule, such as “be kind to strangers” or “always tell the truth” is a (mental) model of some ideal behavior.

·        Many children’s toys are models: cars, boats, airplanes, dolls, puppets, stuffed animals, houses, kitchen appliances, assembled puzzles, game pieces (e.g. Monopoly, Battleship, etc.).  Note that many children’s toys are used for making models: blocks, Legos, Lincoln Logs, erector sets, scale model kits, etc.

·        A university chemistry textbook is a model of the basic chemistry knowledge to be learned by a chemistry student.

·        A song written, for example, in the key of C major, is a model of the same song transposed, for example, to the key of F major.

·        A written constitution is a model of an organization, such as a state, a club or an educational institution.

·        A sculpture is a model of the artist’s idea for that sculpture.

·        A quantitative measure of some attribute of a thing, (e.g. its length, weight, density, etc.) is a model of that attribute.

·        Your reputation  i.e. the ideas, evaluations, memories, etc. that others have of you in their heads  is a (mental) model of you.  Your reputation can take on more durable forms as well, for example: your career résumé, your credit report, your academic transcript, or your profile in an online social network (Facebook.com, Classmates.com, etc.)

·        A “friendly hacker” of the kind hired by organizations in order to test their computer network security systems is a model of a real hacker of the kind who tries to break into such systems in order to steal data.

·        A history book is a model of a sequence of historical events.

·        A key is a model of a lock’s keyhole and internal mechanism.

·        A legal contract is a model of the behavior of those bound by the contract.

·        An understanding or an explanation of some thing or phenomenon (e.g. a mechanic’s understanding of the way a combustion engine works or a physicist’s explanation of lightning) is a (mental) model of the actual thing or phenomenon.

·        A system of classification (e.g. the periodic table of the elements, or the spectrum of colors) is a model of the classified items.

·        A (school, company, home) fire-drill is a model of the events that ought to occur during an actual fire in order to ensure the safety of the participants in the drill during an actual fire.

·        A teacher’s course syllabus is a model of the course he or she will teach.

·        A scientific theory or mathematical theorem, such as Einstein’s famous E = mc² or Darwin’s theory of evolution by natural selection, is a model of some aspect of the way the real world works.

 

I’ll stop there, but I’d like to point out that this list could just go on and on.  The above list (a type of model) is only a sampling (another type of model) of all of the models we humans use; it is only intended to give you a rough idea (i.e. a mental model) of the astonishing ubiquity of models throughout the human Cultural Universe.  We are literally surrounded by models.  They are everywhere, and we seem to use them in nearly everything we do.  They form a solid cornerstone of human civilization and a fundamental element of the human habitat, much like our air, water and food supplies.  We are constantly making and using them.  We start off playing with them as children (dolls, toy trucks, etc.) and then we grow up and use them in just about everything we do, from grocery shopping to constructing office buildings.  In analogy to the terms biosphere and biophilia, we might even say (if you can excuse my mixing of Latin and Greek roots) that we humans live within a modelosphere, and that we are modelophilic, i.e. in love with models.[20]  This is why I want to call our current perspective on the question “which mapping?” the “modelophilia perspective”: one attribute that all of these models have in common is that each is a crucial piece of some culturally endorsed (“best loved”) answer to that question.

To clarify this point, let’s begin by observing that every model is really only a model because there is at least one observer who can recognize it as such.  (Although nothing says that this observer must be human, let’s just assume that it is human to simplify the pronouns.)  What this means is that there is at least one person who can encounter the model (see it, hear it, touch it, etc.) and recognize that it represents something else.  Note that without this observer to recognize the relationship between the model and what it represents, the relationship itself just disappears.  The fact of that relationship is a crucial piece of information, and that information must be stored somewhere.  Although it might also be stored in the model itself, this is not really necessary to the model’s status as a model.  On the other hand, if that piece of information is not stored in at least somebody’s brain, so that when that person encounters the model he or she thinks something along the lines of “oh, yes, that lump of car-shaped plastic is a model of a car”, then the model loses its status as a model and is reduced to just being whatever else it happens to be (a lump of plastic, for example).

Now, this association between a model and the thing it represents is a mapping.  Furthermore, although in principle this mapping is completely arbitrary, in actual practice it is anything but, and is almost always endorsed by the Cultural Universe and grounded in any number of really good reasons.  For example, although you could, in principle at least, use a photograph of your house to represent, say, my house, or an automobile, or even the state of California, it would be anything from a little to very weird to do that sort of thing.  For the most part, we tend to use particular sorts of things to represent other particular sorts of things, and these tendencies are all very well known and endorsed by the Cultural Universe: a photo of a given person is used to represent that particular person, a map of Detroit is used to represent Detroit (and not, say, New York City), a menu from one restaurant is used to represent that restaurant and not, say, the Library of Congress.

Of course, these are all tendencies and every now and then we might use a given model to represent something other than what it is usually used to represent (as when a photo of your beloved prize-winning Irish Setter named Shakespeare shows up in a textbook on dogs and is used to represent that particular breed), but even these exceptions tend to be endorsed by the Cultural Universe and are so because they are fundamentally sensible.

These sorts of culturally endorsed, non-arbitrary mappings that link models to what they represent gain much of their cultural endorsement by virtue of one simple attribute: they work.  Now, the specific details regarding what these models actually work for are as variable as the models themselves, but from a very high level we can say that they all help us regulate some sort of system in order to obtain certain stable sets of valued outcomes.  Notice that I said that “they help us” to regulate.  For the most part these models don’t do any actual regulating (some might, but most don’t).  The act of regulation implies some sort of behavior, and most of these models make far too few state changes to be effective as true regulators.  Regulation is what we do, with the assistance of these relatively static models.  We will examine this in more detail shortly.

Another observation we can make is that just as the Cultural Universe tends to endorse certain particular mappings between models and what they represent, it also endorses the use of these models in particular situations, in particular ways and in order to produce particular types of outcomes. In other words, although we are certainly free to use a grocery list in a hardware store, nobody actually does that.  Also, given that we do use the list in a grocery store, we are still free to just go grab the items and start juggling them, or maybe use them to break the storefront windows, but nobody does that either.  And finally, although we might conceivably attempt to fix our leaky roof by taking the list to the store, purchasing the items represented on the list and bringing them home, again, nobody actually does that. Yes, the reason nobody does those things, although they certainly could, is that they just don’t work.  On the other hand, the situation, manner and outcomes that do work well with a grocery list just happen to be those that are also endorsed by the Cultural Universe.

Now, earlier we examined the importance of an observer to establishing that any given model is, in fact, a model.  But this observer is important for another reason as well: it is also the observer’s responsibility to know all of this additional information regarding the culturally endorsed situation, manner and outcomes that are associated with a given model.  Not that the model can’t or shouldn’t also contain this kind of information, but if it doesn’t it can still be a model, and even if it does, the observer still has to have this information up in his or her brain in order to actually use the model in the culturally endorsed situation, in the culturally endorsed way, and to produce the culturally endorsed outcomes.  The point of this analysis is to see that every artefactual model (grocery list, restaurant menu, photograph, instruction booklet, etc.) is actually just one component of a larger and more elaborate model consisting of both the artefact plus the observer’s mental representation of all of that additional information regarding what the artefact represents, the situation it should be used for, the manner in which to use it, and the outcomes it produces.  It is this entire, much larger, culturally endorsed combination of artefact plus mental representation that we finally end up using to map our behaviors to the behaviors of the systems we wish to regulate, thus transforming ourselves into models of those systems.  Because we use these artefact-plus-mental models to guide or control our behavior, we might call them control-models.

Note the distinction that we are making here. Let’s go back to the grocery list. A grocery list is a model, but it is not what we have been calling a “good-regulator model”.  The good-regulator model is what the human being becomes with the help of the combination of the grocery list and the mental representation needed to use it properly.  That combination of a grocery list and the associated mental representation is what we are calling a control-model.    

This relationship between a control-model and a good-regulator model is illustrated in the Animation Panel and you should take a moment to flip over and check it out.  Remember that once you press the “Adapt Rumbo To Simbo” button Rumbo will become a good-regulator model of Simbo and in the orange panel that appears to the right of the Animation Panel (labeled “Rumbo as a model of Simbo”) you will see displayed Rumbo’s control-model.  This control-model is just a lookup table that shows which behavior Rumbo should execute in response to a given Simbo behavior so that Zoombo does what Rumbo wants.  Rumbo could use this control-model to control its behavior in the same way that we use a grocery list to control our own behavior. 
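To see just how small such a control-model really is, here is a minimal sketch in Python of a lookup-table regulator in the spirit of the Animation Panel. The behavior names are invented for illustration; the actual panel uses its own labels.

```python
# A minimal sketch of a lookup-table control-model, in the spirit of the
# "Rumbo as a model of Simbo" panel.  The behavior names below are
# hypothetical placeholders, not the labels used by the Animation Panel.

# The control-model: for each behavior the regulated system (Simbo) can
# exhibit, the table prescribes exactly one regulator (Rumbo) response.
control_model = {
    "simbo_behavior_1": "rumbo_response_A",
    "simbo_behavior_2": "rumbo_response_B",
    "simbo_behavior_3": "rumbo_response_A",
}

def regulate(simbo_behavior: str) -> str:
    """Look up the prescribed response.  Because the same input always
    produces the same output, the regulator's behavior is a deterministic
    mapping of the system's behavior, which is what the Good-Regulator
    Theorem requires of an optimal, maximally simple regulator."""
    return control_model[simbo_behavior]

# Simbo's behavior now reliably predicts Rumbo's: Rumbo has "become a
# model" of Simbo through this mapping.
print(regulate("simbo_behavior_2"))  # rumbo_response_B
```

Note that the table itself does no regulating; like the grocery list, it only becomes part of a good-regulator model when an agent consults it and acts on it every time.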

We began this recent discussion with my claim that these sorts of artefactual models (grocery lists, restaurant menus, instruction books, photographs, road maps, representative samples, etc.) are all important pieces of culturally endorsed answers to the “which mapping?” question, and now we can see what that means.  First of all, each of these artefactual models is just a piece of a much larger control-model that is comprised of the given artefact along with a mental representation that enables the observer to make sense of and use that artefact to transform him- or herself into a good-regulator model.  Furthermore, these control-models are endorsed by the Cultural Universe in the sense that if we look around the world, it seems that everybody is using these models in the same way.  Of course, this uniformity is no arbitrary coincidence.  The culture endorses these control-models largely because they have been proven to work.  The conclusion to be drawn from this discussion is that whenever we discover a system we wish to regulate and thus find ourselves confronted by the “which mapping?” question, one relatively easy way to answer it is to look around to see what artefactual models other people are using to regulate the same system we wish to regulate.  The artefact itself makes a handy flag that is easy to spot. Once we identify the artefact, we can set about acquiring the expertise we need to use it.  This expertise is quite simply the mental representation part of the control-model, along with whatever motor skills we might need in order to actually use that control-model.

One final observation that we can make here is that everything just said about these artefactual models can be said equally well about artefacts in general.  Let’s recognize that artefactual models (schematic diagrams, recipes, CPR practice dummies, etc.) are first and foremost artefacts: things that we make and use for one reason or another.  Furthermore, no artefact is really complete, in the sense that they all require the user to have a more or less elaborate mental representation of what that artefact is, the situations in which it is used and the way to go about actually using it in order to bring about some valued and well-known outcome. As an example, although an extraterrestrial blob-shaped intelligence might not know it just from looking at one, we humans can certainly recognize that a chair is for sitting and not throwing, know that the whole “chair-situation” is mapped onto our leg-bending behaviors and not, say, our spitting behaviors, and we also have the competence to balance ourselves properly as we lower ourselves onto the chair and produce the valued outcome of “resting our weary bones”. Furthermore, these particular combinations of the artefact plus the associated mental representations are all heavily endorsed by the Cultural Universe: if you don’t know what a toothbrush is for, just ask anyone and they will tell you everything you need to know.  Finally, this combination of the mental representation plus the artefact composes a full-fledged control-model that we humans use to map our own behavioral repertoire onto that of some real world system (situation) in order to bring about a valued outcome.  The upshot here is that the difference between an artefactual model and an artefact in general is really just one of degree.  These so-called artefactual models are mainly characterized by both the quantity and the specificity of the information that they contain regarding the answers they give to the “which mapping?” question.
That is, they tend to contain a lot more information than the typical artefact.  But all artefacts, be they recipes or automobiles, road maps or coffee cups, restaurant menus or earrings, are important pieces of some culturally endorsed, more or less successful answer to the “which mapping?” question.  Thus, the “modelosphere” is really much, much larger than we first imagined it to be.  This modelosphere is nothing more or less than the whole Cultural Universe.

 

Summary: another way to answer the “which mapping?” question is to pay close attention to the recommendations made by the Cultural Universe.  These recommendations can be seen in the wealth of models (role-models, artefactual models), as well as artefacts in general, which fill our day-to-day lives.  Of course, once we identify an appropriate model or artefact, we have to set about acquiring all of the expertise that is associated with it and which is required for its effective use.  In the case of grocery shopping we mainly just need to learn to read and write (no small task in itself).  But if we wish to regulate, say, the health of human cardiovascular systems, we had better be prepared for the years of intense education that it takes to become a cardiologist.

Back to top

Conclusion: The Good-Regulator Theorem and a stable, sustainable society

Congratulations!  This completes the Three Amibos Good-Regulator Tutorial.  You now have a much more enriched understanding of what this important law from the System Sciences tells us about the way the world works, and especially what it has to do with us regular folk who don’t work for NASA.  To paraphrase, the theorem tells us that in order to optimally regulate our life-systems in the simplest manner possible, we must become models of those systems, in the sense that our own behaviors must come to represent the behaviors of those systems through mappings.  This, in turn, implies that when our life-systems confront us with any given situation, we must always “do the same thing”, because if we don’t, then we will have to endure (enjoy?) either the surprises or else the unnecessary complications brought about by exercising our free will.

We have also seen that this sort of maximally simple and optimal life-system regulation requires that we acquire expertise.  Although no form of expertise is excluded (a fact which can make us very enthusiastic about learning everything and anything), we have seen that the Good-Regulator Theorem makes a special endorsement of the relatively specific type of expertise that we need to design, build and use models of the systems we wish to regulate, where it is understood that we will build those models out of the same resources we have available to regulate those systems.  Of course, as this relates to our respective life-systems, this means that we require the expertise needed to transform ourselves into good regulator models of those systems.  And although the Good-Regulator Theorem does not make any special comment at all about Science, Art or History in particular, we have also observed that it would probably be a good idea for us to reap the benefits of these kinds of cultural resources and further specify that the modeling expertise we seek should be scientific, artistic and grounded in historical knowledge.

Finally, we have also seen that a great way to identify good ways to map our behaviors onto the behaviors of our life-systems is to just tap into the wisdom of the Cultural Universe and see how others are able to regulate the systems we wish to regulate.  This can be done in at least two ways: either we can identify a role-model (i.e. another person who already knows about such a good mapping), or we can identify some artefactual model (or even just an ordinary artefact) that is commonly used to help someone regulate the system we wish to regulate.  Once we have identified a specific model or artefact, we can grab our kayaks and paddles and plunge into the great River of Curiosity for the ride of a lifetime.  That is, we can set about the task of acquiring the associated expertise: the ability to recognize situations for what they are, the knowledge of which of our behaviors best map onto those situations, and the competence to execute those behaviors in order to bring about our most preferred outcomes.

And this brings up one last question that we would be wise to consider: which outcomes should we prefer?

With this question we come to the final limits of this tutorial (i.e. “the end”).  This is a question that we all have to answer for ourselves as individuals.  But so as not to cop out entirely on the matter, I would like to propose at least one, very broad and general sort of outcome that will hopefully appeal to everybody who could ever hope to pursue any sort of goal at all: the creation of a stable and sustainable society.

Exactly how to go about creating such a society is a question that will ultimately have to be worked out by all of us, and almost certainly through the vehicle of the liberal democracies that have become increasingly popular over the last two hundred years.  But the Good-Regulator Theorem can surely weigh in on the matter by showing us that whatever else the details entail, at the very least such a stable society will involve a massive amount of “doing the same thing in a given situation.”  If we take a moment to try and visualize what that means, it lifts the curtain on an image that may very well shock you, for in such a world there will surely be very little innovation.

Think about that for a moment: very little that is new or surprising.  The vast majority of what everyone will be doing in that world will have to be, by the Good-Regulator Theorem, quite machine-like and repetitive.  To the extent that it isn’t, there will be a risk of instability (or unnecessary complication).

Now, to those of us who have become accustomed to an almost daily dose of “revolutionary technological breakthroughs”, this might seem to imply that the price we will have to pay for such stability is nothing short of a “Global Pandemic of Boredom”.  It would appear that every moment of our lives will have to be reduced to habit and routine.  Perhaps we fear the outlawing of surprise birthday parties.

But I think such fears are hardly justified, if for no other reason than that however close Humanity comes to achieving a perfectly stable society, it will still be far from perfect and there will always be room for at least some innovation.  There are just so many more ways for things to go wrong than right that it seems extremely unlikely that we will ever exhaust our options in this regard.  Furthermore, it also seems pretty apparent that some kinds of innovation carry a much lower risk to social stability than others.  Innovative Weapons of Mass Destruction would be at one end of this spectrum and innovative Chess openings at the other.  It seems to me that such diverse fields as education, basic research, games and the arts would be especially fruitful in terms of creating a stable society, providing virtually endless opportunities for innovation with a relatively low threat to stability.

Not only should we not fear such stability, but because the society that achieves it will most likely be extremely complex, we can anticipate the enticing vision of a world replete with highly trained experts, a world that will require massive amounts of education to maintain.  The citizens of that world will have to be much like peace-time soldiers who are constantly training their bodies and minds to be in top shape in order to fulfill the complex roles they play in that world.  They will no doubt spend a great deal of their time engaged in educational game-playing and the arts.  That will surely be a world for people who love to learn, and that is pretty much how they will spend their lives: learning.

I say, let’s get started!

 

 

Back to top

 



[1] Conant, Roger C. and Ashby, W. Ross, “Every Good Regulator of a System Must Be a Model of That System”, International Journal of Systems Science, 1970, Vol. 1, No. 2, pp. 89-97.

[2] Of course, by calling you a “system regulator” I do not mean to imply that this is all that you are.  This is really just a shorthand way to describe an amazing ability that you have.  One of the attributes that makes us human beings so special is our phenomenal ability to jump into just about any environment and figure out some way to thrive there, and it is our amazing capacity to regulate such a wide variety of systems that makes this adaptation possible.

[3] In his classic work The Principles of Psychology, philosopher William James famously described an infant’s experience of the world as “one great blooming, buzzing confusion”.  Cited in the Stanford Encyclopedia of Philosophy at http://plato.stanford.edu/entries/james/.

[4] I wrote “looks like” because I am using this expression in a figurative sense. Although, as the theorem establishes, a good regulator must be a model of the system it regulates, this certainly does not mean that it must literally look like that system, except in a figurative sense.  This will be fully explained later on in the tutorial.

[5] http://en.wikipedia.org/wiki/System

[6] Although the consequence of the interaction is defined by the Game Matrix, the exact details of just how this consequence comes about are omitted.  This might strike you as a little unusual.  Perhaps you are curious as to how, for example, Rumbo’s R(8) behavior could somehow combine with Simbo’s S(4) behavior to produce Zoombo’s Z(43) behavior.  Although these kinds of questions are certainly important in general, they are, in fact, irrelevant to the current discussion.  The only thing we need to know here is the brute fact that there is some sort of “black box” mechanism by which R(8) combines with S(4) to produce Z(43), and this is exactly what the Game Matrix specifies.

[7] What I am calling here Surprise (really expected surprise, explained shortly) is known more formally as the Shannon Entropy.  In the discrete case it is defined for a given probability distribution P as the expected value of the function -log2(p), taken over that distribution.  Claude Shannon’s well-known discussion of the “Shannon Entropy Function” can be found in Shannon, C.E., “A Mathematical Theory of Communication”, The Bell System Technical Journal, Vol. 27, pp. 379-423, 623-656, July, October, 1948.  For a more recent and concise discussion, see the Wikipedia entry at http://en.wikipedia.org/wiki/Information_theory.  If you wish to follow Conant and Ashby’s proof of their theorem, a pedagogically sound warm-up would be chapter 1 of Ash, Robert B., Information Theory, 1990 (1965), Dover Publications, Mineola, NY, especially exercise 1.6 on page 26 of that book.  Also, according to Wikipedia, the idea of using the word Surprise to refer to the Shannon Entropy can be attributed to Myron Tribus: http://en.wikipedia.org/wiki/Self-information.
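For readers who like to see such definitions in running code, here is a minimal sketch of the Surprise and expected-surprise (Shannon Entropy) calculations just described.  This is my own illustration, not code from Shannon or from Conant and Ashby, and the function names are my own:

```python
import math

def surprise(p):
    """Surprise (self-information) of an event with probability p, in bits.

    Grows without limit as p approaches zero."""
    return -math.log2(p)

def expected_surprise(distribution):
    """Shannon Entropy: the expected value of the surprise function,
    taken over a discrete probability distribution."""
    return sum(p * surprise(p) for p in distribution if p > 0)

# A fair coin carries one bit of expected surprise per toss.
print(expected_surprise([0.5, 0.5]))   # 1.0

# A heavily biased coin is, on average, much less surprising.
print(expected_surprise([0.99, 0.01]))
```

Note that a distribution concentrated entirely on one outcome has zero expected surprise: a system that always “does the same thing” never surprises its regulator.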

[8] For a given event E that occurs with probability p, the quantity -log2(p) is known more formally as the Self-Information of E.  For more on this, see, for example, http://en.wikipedia.org/wiki/Self-information.

[9] I am speaking figuratively here.  More precisely, I should say that as the probability approaches zero, the value of the Surprise function is unbounded or that it increases without limit.  You don’t need to take a Calculus course to understand this, but if you ever have then you may have winced when I said that the Surprise was infinite.

[10] The interested reader could consult Sheldon Ross’s treatment of this topic in his A First Course In Probability.

[11] http://en.wikipedia.org/wiki/Expected_value

[12] http://en.wiktionary.org/wiki/model

[13] There is a subtlety of language here which, though useful for mathematicians, is largely irrelevant to our current discussion.  The idea of representation involves an association between two things, say A and B, and the first six examples use the word model to refer to just one of them, as in the sentence “B is a model”, where the phrase “of A” is implied.  On the other hand, the seventh example appears to be using the word model to refer to the actual association (function) that links A and B together.  Strictly speaking, then, with respect to the seventh example, it would actually be incorrect to say that “B is a model”, since the model is actually the association (function) that links A and B.  Although this distinction is no doubt important in the context of Mathematical Logic, I believe it is one we can safely ignore in the current context.

[14] If you would like to read the original paper, please review footnote 7 for suggestions about how to make this easier.

[15] In their paper, Conant and Ashby did not specifically refer to this preference ranking, but a corollary to their theorem shows that it is implicit to their argument. Although the proof of this corollary is beyond the scope of the present tutorial, the interested reader can find it (and other results) in the unpublished manuscript “Every Good Key Must Be A Model Of The Lock It Opens: The Conant And Ashby Theorem Revisited”. To receive a free copy, write to me at danielscholten@aim.com.

[16] For examples, see the articles at http://www.usatoday.com/news/nation/2006-02-26-lotteryluck_x.htm and http://www.professorbeyer.com/Articles/Lottery.htm.

[17] More specifically, if the guy who gave us the box had also told us that there were $500,000 worth of diamonds in the box, and if we counted the rocks and determined that there were 1000 rocks in the box, then until we could distinguish between a diamond and an ordinary rock we could rationally assign a value of $500 per rock.  Now, you might like to think that you would never actually pay $500 for a rock that only might be a diamond, but this is exactly what we all do when we buy an insurance policy. For most of us, these insurance policies turn out to be worth little more than the paper they are printed on, which we usually consider to be a good thing; but for a select and unfortunate few they turn out to be valuable diamonds.
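The $500-per-rock valuation in this footnote is just an expected-value calculation under total uncertainty.  A tiny sketch (my own rendering, using the footnote’s numbers) makes the arithmetic explicit:

```python
# Footnote 17's reasoning as an expected-value calculation.
total_diamond_value = 500_000  # dollars' worth of diamonds claimed to be in the box
rock_count = 1000              # rocks counted in the box

# Until we can tell a diamond from an ordinary rock, every rock carries
# the same expected value: the total spread evenly over all the rocks.
expected_value_per_rock = total_diamond_value / rock_count
print(expected_value_per_rock)  # 500.0
```

The same logic underlies the insurance-policy comparison: we pay a fixed price for an uncertain payoff whose expected value, not its most likely value, justifies the purchase.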

[18] The classic in-depth treatment of observational learning in humans is given in Bandura, Albert, Social Foundations of Thought and Action: A Social Cognitive Theory, 1986, Englewood Cliffs, NJ: Prentice-Hall.  The learning by birds of their regional song dialects is reviewed, along with many other examples of animal social learning, in Zentall, Thomas R., “Imitation: definitions, evidence, and mechanisms”, Animal Cognition, 2006, Vol. 9, pp. 335-353.  The bee dance is discussed at length in Gould, James L., “The Dance Language Controversy”, The Quarterly Review of Biology, 1976, Vol. 51, No. 2, pp. 211-244.

[19] Cognitive Scientist Donald A. Norman calls these Cognitive Artefacts.  For an accessible discussion, see Norman, Donald A., Things That Make Us Smart: Defending Human Attributes in the Age of the Machine, 1993, Basic Books, New York.

[20] Another term that is relevant here is semiosphere, attributed to Estonian semiotician Juri Lotman, which means “culture as a system of signs” (pg. 39 of Danesi, Marcel, Messages, Signs, and Meanings: A Basic Textbook in Semiotics and Communication Theory, 2004, Canadian Scholars’ Press, Inc., Toronto, Ontario).