How Do We Learn Things?

Structure and restructure

We have seen that when humans learn something, they build what can be thought of as “knowledge structures” in the brain.  As I asserted earlier, these “structures” are mostly not hard-wired.  That is, learning something does not set up explicit neural pathways, each devoted to a specific piece of learned knowledge.  Neural pathways are shared in all manner of ways to help retain all manner of different knowledge.  Even so, the label “structure” is useful in understanding the nature of thought and its associated knowledge.


This structure is not like the structure of, say, a building.  A better metaphor is the “structure” of a radio signal, with its carrier and the signal modulated onto it.
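To make the metaphor a little more concrete, here is a minimal sketch in Python (NumPy assumed available; the frequencies are arbitrary choices for illustration) of an amplitude-modulated signal.  The carrier is the same whatever is being sent; the information lives entirely in how the carrier is shaped by the message.

    import numpy as np

    # One second of time, finely sampled.
    t = np.linspace(0.0, 1.0, 10_000)

    carrier = np.cos(2 * np.pi * 1000 * t)      # 1 kHz carrier: the shared substrate
    message = 0.5 * np.sin(2 * np.pi * 5 * t)   # 5 Hz message: the "content"

    # Amplitude modulation: the same carrier, reshaped by the message riding on it.
    am_signal = (1.0 + message) * carrier

The point of the metaphor is just this: nothing about the carrier by itself is the message, yet without the carrier there is nothing for the message to ride on.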

In general, much of the “structure” is established by three things:

There is evidence that the first type of structuring, using signal patterns, is the primary “knowledge storage” medium.   It is also likely that “similarities” between signal patterns are where our context classification models come from.  Without such signal coordination, we would have to learn a different word and usage for every, say, item of furniture we come across.  By building “similarity” patterns and structures we learn to distinguish between, say, tables and chairs.  This allows us to group all such items under a common classification in which all elements share certain characteristics and to which we can usefully assign the label “furniture”.  These characteristics are the similarities between the signal patterns and, of course, the similarities are themselves thought patterns.  It is also likely that other signal types are responsible for the establishment and retrieval of these patterns and the identification of these similarities [1].
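As a loose software illustration only (not a claim about neural mechanisms), the Python sketch below groups items whose sets of features overlap strongly enough, much as tables and chairs end up under the common label “furniture”.  The feature lists and the similarity threshold are invented for the example.

    # Each item is described by a "pattern" of features (invented for illustration).
    items = {
        "table": {"legs", "household", "flat surface", "holds objects"},
        "chair": {"legs", "household", "supports a person"},
        "sofa":  {"household", "supports a person", "cushioned"},
        "oak":   {"trunk", "leaves", "grows outdoors"},
        "maple": {"trunk", "leaves", "grows outdoors"},
    }

    def similarity(a, b):
        """Jaccard similarity: shared features as a fraction of all features."""
        return len(a & b) / len(a | b)

    # Greedily place each item into the first group containing something similar enough.
    groups = []
    for name, features in items.items():
        for group in groups:
            if any(similarity(features, items[member]) >= 0.3 for member in group):
                group.append(name)
                break
        else:
            groups.append([name])

    print(groups)  # [['table', 'chair', 'sofa'], ['oak', 'maple']]

The grouping falls out of nothing but pattern overlap; a label such as “furniture” can then be attached to the first group as a whole rather than learned separately for every item in it.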

While neurons and synapses don’t statically store knowledge, there is no doubt that certain areas of the brain selectively perform certain functions.  For instance, the visual cortex largely processes visual images [2], while most higher-order reasoning and decision-making occurs in the prefrontal cortex.


FOOTNOTES


[1] These “retrieval” and “classification” signal types represent the process element of knowledge mentioned earlier.  And, of course, there are (indeed must be) retrieval and classification signal types that act on other retrieval and classification signal types: processes that identify, manage, and operate other processes.  That these “higher order” processes may include the management of themselves further evidences the self-referential nature of knowledge.


[2] Interestingly, when people are attempting to “visualize” something, they often close their eyes or stare upwards in an unfocused way.  They are essentially trying to look at nothing.  This may be an attempt to “free up” processing in the visual cortex by, well, looking at nothing.  The processing bandwidth freed up by this action may well assist an internal visualization of whatever problem or solution is being considered.  Also interestingly, much the same approach was once used in early desktop computers, which were programmed to “steal cycles” from the PC’s video card when it was not being heavily used and apply them to other computations.