5OI: Third Order Ignorance
Without process, there is no way out

Third Order Ignorance (3OI)—Lack of Process:
I do not know of a suitably effective way to find out that I don’t know that I don’t know something.

A Third Order Ignorance process is one that tells us we have 2OI.  In a practical sense, 3OI is always coupled with 2OI.  Indeed, by definition, there is no reason to acquire and use a 3OI process except to relieve 2OI [1].

When coupled with 2OI, 3OI prevents forward movement.  True 3OI processes do not provide an answer, but they do (hopefully) provide a localized question, which is, of course, the exposure of ignorance [2].

3OI also sets up a paradox: without knowing what you are looking for, how would you know what process to use?  How does a scientist create an experiment to prove (or disprove) something she is not looking for?  True, there are serendipitous actions where you are looking for something else and stumble upon an unsuspected revelation.  But these are not intentional actions and processes; they are accidental.

A true 3OI process often employs a discipline that forces us to look at our lack of knowledge.  An example from software development:

Build a Model: Graphical Design Methods
For a number of years, I taught software engineers the mechanics of real-time systems development methods.  Most of these involved graphing out data transformations, their inputs and transformed outputs, and the state-based behavior that controls the activation and deactivation of system functions.  Here is a (greatly simplified) example taken from a cellular switch test system I worked on:

These are four levels of a "data flow diagram" that purports to describe the functions in a cellular switch test system.  Note that the central function ("Test Cell Switch") is repeated.  In actual operation, the highest-level (Context) "Test Cell Switch" consists of the three functions shown at the DFD 1 level.  That level's "Test Cell Switch" consists of the three functions at the DFD 1.1 level, and so on.

In the real examples from which this was taken, the labels were not so clearly replicated.  The key point here is that the developers did not actually know what the Test Cell Switch does; they had not actually figured this out.

This was clearly evident from the lack of specificity in the function's description.  The team was tasked with creating this model, which they dutifully did.  What they did not do, for various reasons, was acquire the knowledge necessary for this to be the correct model: the model that contained the correct knowledge [3].

So the act of creating the model demonstrated that they had not acquired the knowledge of what it means to Test a Cell Switch.  This exposure of ignorance qualifies the model creation as a 3OI process.
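
To make the point concrete, here is a minimal sketch in Python (all names are hypothetical, not taken from the actual project) of how merely building such a model can expose ignorance: any function whose "decomposition" simply repeats itself is a function nobody has figured out yet.

    # Minimal sketch of the simplified DFD hierarchy described above.
    # All names are hypothetical; the point is that the model itself
    # can reveal what the modelers do not yet know.
    from dataclasses import dataclass, field

    @dataclass
    class Function:
        name: str
        subfunctions: list["Function"] = field(default_factory=list)

    def undefined_functions(fn: Function) -> list[str]:
        """Flag any function whose 'decomposition' merely repeats it --
        the modeling equivalent of 'we have not figured this out yet'."""
        flagged = []
        if any(sub.name == fn.name for sub in fn.subfunctions):
            flagged.append(fn.name)
        for sub in fn.subfunctions:
            flagged.extend(undefined_functions(sub))
        return flagged

    # Context level: "Test Cell Switch" decomposes into three functions,
    # one of which is "Test Cell Switch" again, and so on down the levels.
    model = Function("Test Cell Switch", [
        Function("Set Up Test"),
        Function("Test Cell Switch", [
            Function("Select Test Case"),
            Function("Test Cell Switch"),  # still undefined at DFD 1.1
            Function("Record Results"),
        ]),
        Function("Report Results"),
    ])

    print(undefined_functions(model))  # ['Test Cell Switch', 'Test Cell Switch']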

Build a Model: Admiralty Models
In the days of sailing ships, the British Royal Navy would have marine architects build a scale model of a proposed ship before construction began.  These were called Admiralty Models, and building them helped the navy visualize the finished ship, ask pertinent questions about it, and identify required changes.

Heuristics: Software Testing
Another example from software development is testing.  When we test software we are looking for two things:

- Does the system do the things it's supposed to do?
- Does the system do things it's not supposed to do?

The first of these is testing for 0OI—did we have the "correct" knowledge when we built the system and did we incorporate it all into the system in the correct way?  This is to satisfy the provable criterion of the definition of 0OI.

The second is testing for 2OI—is there anything else happening (or not happening) in the system that we don't know about? [4]
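
As a rough illustration (the function and tests below are hypothetical, not from any real system), a 0OI test asserts knowledge we claim to have, while a 2OI probe throws unexpected input at the system and watches for anything we did not know about:

    # Hypothetical example: testing a small function two ways.
    import random

    def parse_port(text: str) -> int:
        """Toy function under test: parse a TCP port number."""
        value = int(text)
        if not 0 <= value <= 65535:
            raise ValueError("port out of range")
        return value

    # Testing for 0OI: prove the knowledge we claim to have.
    assert parse_port("8080") == 8080
    assert parse_port("0") == 0
    assert parse_port("65535") == 65535

    # Probing for 2OI: feed in inputs we never designed for and see
    # whether anything happens that we did not know about.
    random.seed(1)
    for _ in range(1000):
        text = "".join(random.choice("0123456789 -+x") for _ in range(5))
        try:
            parse_port(text)
        except ValueError:
            pass  # a failure mode we already know about
        # Any other exception, or a nonsensical success, is something
        # we did not know we did not know.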

Software testing usually employs a set of heuristics, for example:

- test at, and just past, the boundary values where a predicate changes the system's behavior;
- exercise both outcomes of every conditional decision;
- concentrate tests on the most complex parts of the system.

None of these tests are guaranteed to expose an error; indeed, there may be no errors at all.  But experience has shown that errors in software—directly related to the developers' lack of understanding—congregate at the points in systems where complex and predicate logic operates.  Therefore, directing tests to these points is most likely to expose this lack of understanding. 
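
For instance, the boundary-value heuristic (sketched below with a hypothetical function) directs test cases at the exact points where predicate logic changes the system's behavior:

    # Hypothetical example of the boundary-value heuristic: test at the
    # exact points where the predicate logic changes behavior.

    def shipping_cost(weight_kg: float) -> float:
        """Toy function with predicate logic at 2 kg and 20 kg."""
        if weight_kg <= 2:
            return 5.0
        elif weight_kg <= 20:
            return 12.0
        else:
            return 30.0

    # Tests congregate at the boundaries, where misunderstandings
    # (e.g. '<' written instead of '<=') tend to hide.
    assert shipping_cost(2.0) == 5.0     # on the first boundary
    assert shipping_cost(2.01) == 12.0   # just past it
    assert shipping_cost(20.0) == 12.0   # on the second boundary
    assert shipping_cost(20.01) == 30.0  # just past it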

Software testing, especially the "...things it's not supposed to do..." portion, is a well-defined 3OI process whose job is to expose and localize our lack of knowledge.  Essentially, it exposes 2OI and assists in converting 2OI into 1OI.

FOOTNOTES

[1] We could argue that randomly attacking a source of ignorance might be a process that will (eventually) result in knowledge.  Hitting piano keys arbitrarily might finally end up with the student pianist learning the Moonlight Sonata.  But this fails the practical "...suitably effective..." criterion.  In software testing, which is where some of these ideas originated, a way to uncover locations of ignorance in a built system (as evidenced by the exposure of software bugs) is to simply ship the system to the customer.  Experience has shown that this will, indeed, tell us that there is something we did not know we did not know.  But like randomly banging piano keys, it is not a suitably effective process.  Especially for the customer.

[2] This presents a challenge in applying disciplined 3OI processes—rather than providing knowledge, they expose ignorance.  This is a major challenge for most people; most of us like to use processes that tell us how smart we are, not ones that explicitly show how little we know.  This has been a perennial problem with the software testing process, which does expose ignorance.  Historically, companies and individuals have sometimes shied away from good, effective testing, since it tends to shine a bright spotlight on the limitations of other parts of the software development process and, indeed, on the limitations of the software developers themselves.

[3] This modeling activity also exposed the challenge that always accompanies a 3OI process.  The team was expecting that building the model would give them the answer.  It did not.  It cannot.  Such modeling never creates the answer.  Correctly used, attempting to build the model shows you whether you have the answer or not.  The more the team applied the process, the more it told them how little they knew about what they were trying to build—which the team found quite disheartening.  This kind of modeling is often a good 3OI process, but the process does not contain the answer—the answer has to come from elsewhere.  Once that knowledge is acquired (from elsewhere), it can be incorporated into the model and used for communication and later building.
This also shows that the model is not the knowledge—it is the container for the knowledge.  A model can be a model even if it does not contain the correct knowledge, but it cannot be a useful one.
Interestingly, one could argue that the original complex model from which I created this much simpler version was actually much less honest.  The labels were not so clearly replicated; indeed, the label complexities actually hid from the casual reader the fact that the team had not defined, and did not actually know, what "Test Cell Switch" entailed.  So the fact that this much simpler model makes this lack of knowledge clear means that this model is actually, and ironically, more useful.

[4] Note, in general, we do not test for 1OI.  The reason is simple: if we know that there is something not working correctly in the system, we wouldn't test for it; we don't need to because we already know.  What we would do is first correct whatever is not working and then run a 0OI test to prove that we have corrected it.
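
In code terms, the sequence might look like this (a hypothetical sketch): the known (1OI) defect is corrected first, and a 0OI test then proves the correction and guards against regression:

    # Hypothetical sketch of footnote [4]: we don't test for a known (1OI)
    # defect; we fix it, then run a 0OI test to prove the fix.

    def average(values: list[float]) -> float:
        # Known 1OI defect was: ZeroDivisionError on an empty list.
        # The fix comes first...
        if not values:
            return 0.0
        return sum(values) / len(values)

    # ...then the 0OI test proves the correction and guards against
    # the defect reappearing.
    assert average([]) == 0.0
    assert average([2.0, 4.0]) == 3.0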