The Hypothesis of Reflexive Learning
Learning learning

"Learning" is both the noun and the verb we use when we talk about the acquisition of knowledge.

In its noun form “learning” is equivalent to “knowledge” and learning (as a verb) is the process we use to acquire it. So learning is how we attain learning.

The English language can be confusing at times, but learning is always closely related to learning.


The Hypothesis of Reflexive Learning

In its original form, this hypothesis referenced software and/or systems development and was called The Reflexive Creation of Systems and Processes [1]. It has been rewritten here for the more general case of learning. The two parts of the hypothesis are intrinsically and reflexively referential, and so hint at the presence of paradox. But then, as we have already seen, paradox tends to afflict most considerations of knowledge (or learning).

The two statements in the Hypothesis of Reflexive Learning are:

  1. The only way to acquire correct knowledge (learning as a noun) is through an effective knowledge acquisition process (learning as a verb).

  2. The only way to acquire an effective knowledge acquisition process (learning as a verb) is through the acquisition of correct knowledge (learning as a noun).

These statements are reflexively referential (between statements 1 and 2) and internally self-referential (within each statement). The only logical definition of an effective learning process is that it allows (guides, facilitates, …) the acquisition of correct knowledge. A most reasonable criterion for the definition of correct knowledge is that it was obtained through the agency of an effective process, a process that would necessarily include some procedure for assessing correctness.
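The circular structure can be made concrete in a few lines of code. The sketch below is purely illustrative (the function names and the acquired attribute are invented for this example, not taken from the hypothesis itself): each test is defined only in terms of the other, so evaluating either one simply bounces back and forth and never bottoms out, which is the paradox in executable form.

    # Illustrative sketch only: the names here are invented, not from the text.
    # Statement 1: knowledge is correct only if it came from an effective process.
    def is_correct(knowledge, process):
        return is_effective(process) and knowledge in process.acquired

    # Statement 2: a process is effective only if the knowledge it yields is
    # correct, which sends us straight back to is_correct().
    def is_effective(process):
        return all(is_correct(k, process) for k in process.acquired)

    # Calling either function recurses without end; nothing outside the pair
    # ever grounds the definitions.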

It might seem to the reader that these conjectures are obvious: they are simply defining two things (knowledge and process) in terms of each other so, yeah, of course they are reflexive. This noun-verb duality occurs in most things: the process of reading cannot be separated from the thing being read and vice versa. However, there are very practical considerations relating to this issue, as an example might illustrate:

Software Development and Data Flow Diagrams

We’ve already introduced data flow diagrams (DFDs) when talking about Third Order Ignorance [2]. They were a very popular charting technique in software development in the 1970s and 1980s. They have largely fallen out of favor and have been superseded by other diagramming techniques. The charts were used to (a) assist in the analysis of requirements for computer systems and (b) help to document these requirements.

The figure here shows the primary elements of the diagram (a code sketch of these elements follows the list):

  1. External entity: usually a square or rectangle. This shows sources and sinks of data that are external to the system.

  2. Data flows: a directional arrow with a defining label. This shows the “origin” (where generated) and “destination” (where used) of the data.

  3. Process or Transform: usually a circle, oval, or round-cornered rectangle with a defining label. This shows the presence of a “process” that purportedly “transforms” the input data into some output form.

  4. Data store: usually denoted by parallel horizontal lines or a rectangle with parallel bars (as shown in this figure) with a defining label. This indicates the presence of a physical store that preserves data, represented by the incoming arrow and label, for some period of time.
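To make the four elements concrete, here is a minimal sketch in Python of how they might be represented as plain data structures. The class and field names are assumptions made for this illustration; they are not part of any DFD notation or tool.

    from dataclasses import dataclass

    @dataclass
    class ExternalEntity:       # square/rectangle: a source or sink outside the system
        name: str

    @dataclass
    class Transform:            # circle/oval: a labeled "process" that transforms data
        number: str             # e.g. "2.1"
        label: str              # e.g. "Verify Student Name"

    @dataclass
    class DataStore:            # parallel bars: data preserved for some period of time
        label: str

    @dataclass
    class DataFlow:             # labeled, directed arrow: where data is generated and used
        label: str
        source: object          # an ExternalEntity, Transform, or DataStore
        destination: object

    # A fragment of the student/book example used later in this section:
    student  = ExternalEntity("Student")
    verify   = Transform("2.1", "Verify Student Name")
    store    = DataStore("Student Data")
    name_in  = DataFlow("Student Name", source=student, destination=verify)
    name_out = DataFlow("Student Name", source=verify, destination=store)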

This is all well and good, and DFDs were extensively used, particularly by IT folks. The idea behind diagramming this information is that it (hopefully) provides a consistent structure for the definition of software requirements, ideally independent of any particular implementation approach. In doing so, it purports to structure and assist the process of learning (acquiring knowledge of) the system’s requirements. But the problems and limitations of this as a process are manifold:

Problems with Analysis Using DFDs

a. Why "Flow"?
While an analogy of fluid through a pipe was sometimes used to explain how DFDs function, the metaphor was not very appropriate because, in general, data does not flow. In the real world data just kind of sits there until something goes to get it. The standard defense against this was that the diagram shows the origin and use of data, not that data actually moves... but then why call it a “flow” chart?

b. Not a Program
The transform was often called a “process,” which sounds rather like “program,” and this led programmers to use the program as their primary metaphor for understanding it. The supposed behavior of a transformational requirement and the actual behavior of a computer program are quite different.

c. Granularity of Transforms
The data “transformations” identified by the “processes” were often highly sensitive to granularity, particularly in IT systems. A record-level data item might be considered to be transformed by restructuring the data in the record. However, at a lower level, within the record restructuring, data elements were simply being moved around and not transformed at all. This produced an odd result in a rigorous hierarchical analysis, where a “transform” at a higher level simply disappeared into nothing (i.e., not a transformation) when decomposed into its parts. The amount of actual data transformation in such systems (by, say, multiplying data values together) was often quite sparse. A lot of IT systems simply took data in, performed some simple logic tests, shifted data elements around, and then stored them. So the modeling of data transforms was actually neither useful nor effective in understanding the problem to be solved.
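The granularity point is easy to see in code. This is a hypothetical sketch (the record fields are invented for illustration): at the record level the function looks like a transform, but decomposed into its parts it is nothing more than movement of data elements, while a genuine transformation of values is comparatively rare.

    # Hypothetical record layout, invented for illustration. At the record
    # level this looks like a "transform"; decomposed, every line is just a
    # move of a data element.
    def restructure_record(record: dict) -> dict:
        return {
            "id":        record["student_id"],    # moved, not transformed
            "full_name": record["student_name"],  # moved, not transformed
            "selection": record["book_code"],     # moved, not transformed
        }

    # Genuine transformation of data values was comparatively rare:
    def extend_price(quantity: int, unit_price: float) -> float:
        return quantity * unit_price              # values actually combined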

d. Transforms that are not Transforms
Some of the data transformations were not even really transforming the data. The diagram above shows one such. The 2.1 Verify Student Name process takes in Student Name, which it then passes on intact to the Student Data data store. So there is no transformation of Student Name [3] and, by the “rules” of DFD, if nothing’s happening, if there’s no actual data transformation, it shouldn’t be on the DFD. This same process also takes in Student Id and outputs something to 2.2 Record Book Selection. This “something” data flow is unlabeled (which is a no-no in DFD convention), but what could it be?

e. Stateless Model Modeling Stateful Systems
The DFD is stateless. While it purports to show sequence, indicated by the data flow arrow direction, it does not show why or when such data flows might be “active” and when the transforms may occur. DFD advocates argued that it is simply a list of things that need to be done (data processes/transforms) and what they need to do it (data and data stores). But then again, why “data flow”? A hint at the lack of stateful behavior is shown by 2.1 Verify Student Name. The “transform” is presumably checking the student’s name against a list of students’ names. If there’s a match, then other functions are activated: a verified student can select books and an unverified student cannot. It appears that 2.1 is converting the student name into a Yes/No signal, which “activates” (allows) other functions. But this is not a data transformation. The Student Name isn’t being converted into a binary. In reality, the intersection of Student Name and the Student Data data store is being tested and used to create a state-transition signal that “authorizes” or initiates the 2.2 Record Book Selection process. The output of 2.1 Verify Student Name is a binary state signal, which is not recognized by the supposedly stateless DFD model. This is, of course, why the author of this particular example was unable to label it.
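A hypothetical sketch of this fragment (the identifiers and data are invented; this is not the original example's logic) makes the point explicit: Student Name passes through untouched, and the only thing 2.1 actually produces is a yes/no signal that authorizes 2.2.

    # Hypothetical sketch of 2.1 Verify Student Name gating 2.2 Record Book Selection.
    student_data = {"S1001": "Ada Lovelace"}   # stands in for the Student Data store

    def verify_student_name(student_id: str, student_name: str) -> bool:
        # Student Name is not transformed; it is compared against the store.
        # The result is a binary state signal, which a stateless DFD has no
        # way to represent (hence the unlabeled flow).
        return student_data.get(student_id) == student_name

    def record_book_selection(student_id: str, book_code: str) -> None:
        print(f"recorded {book_code} for {student_id}")

    # The signal does not flow onward as data; it authorizes a state transition:
    if verify_student_name("S1001", "Ada Lovelace"):
        record_book_selection("S1001", "B042")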

f. The Problem of Persistence
The data stores are supposed to be “persistent” while the data flows are “transient.” [4] But this is not defined: in the real world, even transient data exists for some period of time. Again, DFD advocates insisted that data flow persistence is not an issue, that the DFD reflects “requirements” irrespective of any implementation approach, and that the transient flows are simply a where-generated/where-used convention. Reasonable assertions, but again, why “flow”? Anyway, both of these terms really connote computer program behavior (how it works) rather than requirements (what it needs to do). Programmer after programmer fell into the trap of considering these transforms to be programs and coding them directly. And why did they do that? It was the way they had trained themselves to think.

There are other limitations of DFDs as an analytical device, and they were coupled with a “top-down” (high-level context to detail) analysis approach that also had serious limitations in describing what might be required of a computer system and greatly hampered good cohesive design [5].

The primary defects in this modeling approach were (c. and d.) the nature of transforms (a telecom system that simply moves data would have a naïve DFD consisting of a single arrow) and (d. and e.) the absence of stateful behavior. These limitations led to a lot of poorly defined systems where the true requirements were simply not well obtained and not well understood.

...Which Means, What?

Ok, that was a really long-winded way of showing that a poor process may result in an answer, but not usually a good or correct or complete one—in fact, it often leads to the wrong answer.

FOOTNOTES

[1] Phillip G. Armour, The Laws of Software Process, p. 11.

[2] I am mostly using this modeling technique to point out limitations. I have nothing against it in practice, except that it was often used to model behavior where the constraints of the model did not match the actual requirements behavior of the system.

[3] The "standard" defense against this point was that the input Student Name is unverified whereas the output Student Name is verified. It's a reasonable point, though by DFD rules the labels should be different. However, there is still no substantial change in the data values.

[4] The defense of persistent/transient data was that persistent data “survives” the termination of the application. But DFDs are not supposed to be about “applications”; they are supposed to define requirements. This was another trap that led programmers to use the program metaphor when considering systems analysis using this model.

[5] Any strict “top-down” design tends to be divergent. As the analytical process works its way down from the top, the elements identified at each level tend to diverge from each other. Some way into this journey, elemental processes (which might be functionally identical in different parts of the system) will be separated and will tend to be identified as different.