Tuesday, July 26, 2011

On the Rise of Subjective Experience, Part 2: Brain Loops and Strange Loops

Although we now have a large collection of facts about the physical organization of the brain, no one has yet satisfactorily explained how those individual physical processes give rise to a cohesive information-processing structure. Given our still relatively crude tracing of information through the majority of the cortex, I find it more instructive to shift gears for a moment and talk about Douglas Hofstadter's concept of the strange (causal) loop.

A strange loop is a feedback loop in which the input and output cross hierarchical levels. The concept is usually left somewhat ill-defined, but here we will attach a specific meaning to it: complexity increases as you ascend the hierarchy and decreases as you descend it. By complexity I mean the definition proposed by Kolmogorov and widely accepted in computer science: the complexity of a string of information is the length of the shortest program that could produce it. A corollary quickly follows: the complexity of a string of information is roughly equivalent to the resources (time, energy) needed to produce it.
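Kolmogorov complexity is uncomputable in the general case, but compressed length gives a practical upper bound on it. Here is a minimal sketch of that proxy, assuming nothing beyond Python's standard library:

```python
import random
import string
import zlib

def complexity_proxy(s: str) -> int:
    """Upper-bound proxy for Kolmogorov complexity: the byte length
    of the zlib-compressed string stands in for 'shortest program'."""
    return len(zlib.compress(s.encode("utf-8")))

# A highly regular string has a short generating program (low complexity)...
print(complexity_proxy("ab" * 500))    # tiny: the pattern compresses away

# ...while a patternless string resists compression (high complexity).
noise = "".join(random.choices(string.ascii_letters, k=1000))
print(complexity_proxy(noise))         # close to the raw 1000 bytes
```

The corollary shows up here too: the regular string takes almost no work to describe, while the random one can only be reproduced by spelling it out in full.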

In the brain, very complex higher-level constructs have evolved from patterns in incoming perceptual data, and these constructs can rightly be called concepts, ideas, emotions, and so on. This level of pattern is combinatorially rich enough to be essentially infinite in terms of combinations of lower-level concepts. If I ask you to imagine a "pink unicorn," you can easily conceive of it, despite the fact that it has no basis in reality: you simply construct it from the lower-level concepts of "horse," "horn," and "pink."
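A toy illustration of how quickly those combinations blow up; the primitive concept names below are just placeholders, not anything from a real concept inventory:

```python
from itertools import combinations

# A tiny vocabulary of lower-level concepts.
primitives = ["horse", "horn", "pink", "wings", "glowing",
              "tiny", "metallic", "singing", "transparent", "ancient"]

# Count every possible blend of two or more primitives.
total = sum(1 for r in range(2, len(primitives) + 1)
            for _ in combinations(primitives, r))
print(total)  # 1013 distinct blends from only 10 primitives

# "Pink unicorn" is just one such blend, never observed in perception:
pink_unicorn = {"horse", "horn", "pink"}
```

With a realistically sized conceptual vocabulary, the space of blends is effectively infinite, which is the point.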

At a much lower level, there are innumerable far simpler events: neuron-to-neuron transmissions coding little more than their excitatory or inhibitory value. The key is that, when looking at the structure of this activity, it is equally valid to talk about emotions affecting the state of the brain, down to individual neurons (although bridging these levels is a task beyond modern science), as it is to talk about the interactions of all that low-level, mechanistic neural activity creating the concepts. For a better metaphorical explanation, Hofstadter himself explains in the video linked below (you'll need to click through to the second part for the full explanation, should it hold your interest):


Victim of the Brain (YouTube)


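To ground the low end of that loop, here is a minimal leaky integrate-and-fire neuron, a standard textbook abstraction; the parameter values are arbitrary illustration choices, not claims about real neurons:

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire neuron.
    inputs: sequence of signed synaptic weights
    (positive = excitatory, negative = inhibitory)."""
    v, spikes = 0.0, []
    for w in inputs:
        v = v * leak + w       # membrane potential leaks, then integrates input
        if v >= threshold:     # crossing threshold emits a spike...
            spikes.append(1)
            v = 0.0            # ...and resets the potential
        else:
            spikes.append(0)
    return spikes

# Mostly excitatory drive with one inhibitory input:
print(simulate_lif([0.4, 0.4, 0.4, -0.3, 0.4, 0.4, 0.4]))
# -> [0, 0, 1, 0, 0, 0, 0]
```

Nothing at this level knows anything about "cat" or "fear"; the higher-level constructs exist only in the aggregate pattern of billions of such events.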
The point to internalize here is the recursive looping first intimated in the discussion of the physical functionality of the brain itself. This informational looping dips causally down to the simple levels of incoming perceptual data and of outputs to organs and muscle, and reaches all the way up to the abstract, infinitely combinatorial semantic constructs humans are capable of conceiving. Decision-making causality can be explained equally well from the top down (mental constructs affecting the brain) or from the bottom up (molecules affecting neurons, and so forth). The important trade-off is that absolute information (resolution) is lost as the perceptual hierarchy is climbed, with the benefit being explainability: the concept "cat" quickly conveys a large amount of information between people, but leaves out a great deal of detail about any particular cat.
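A crude way to see that trade-off in code: the label costs a few bytes, while the specifics it discards cost many (the attributes below are invented purely for illustration):

```python
# One specific cat, with some of its "lost" resolution spelled out.
specific_cat = {
    "species": "cat", "fur": "grey tabby", "weight_kg": 4.2,
    "left_ear_notch": True, "temperament": "aloof",
    "whisker_count": 24, "favorite_windowsill": "kitchen, east-facing",
}

label = "cat"
print(len(label))              # 3 bytes: cheap to transmit between minds
print(len(str(specific_cat)))  # an order of magnitude more, and still a caricature

# Climbing the hierarchy amounts to keeping the key and discarding the values.
```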



Why this interplay between perception and conception? The answer, I would argue, lies in evolution. Consider the following: the world you experience each and every day is not, in fact, real. It is a simulation of what the brain THINKS the "outside world" is, based on strings of perceptual data. This is a key difference to consider! The actual reality of the outside world could be vastly different from how we perceive it; what matters is that the "virtual reality" constructed by the brain was (and is) useful to genetic fitness. That usefulness suggests a high correlation with reality, but does not ensure it. Illusions are clear evidence of this:


More on this specific illusion later when discussing the processing structure of intelligent systems.
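To make "simulation from noisy perception" concrete, here is a toy running estimate that blends its own prior guess with each noisy observation (a bare-bones cousin of a Kalman filter; the gain and noise values are arbitrary):

```python
import random

true_world = 10.0           # the "outside world" the brain never sees directly
estimate, gain = 0.0, 0.3   # internal model state, and how much to trust input

for _ in range(20):
    observation = true_world + random.gauss(0, 2.0)  # noisy perceptual data
    estimate += gain * (observation - estimate)      # update the simulation

print(round(estimate, 2))   # hovers near 10.0, but is never reality itself
```

The estimate is useful precisely because it tracks the world well enough to act on; an illusion, on this view, is what happens when the update rule is fed data it was never tuned for.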

So then, the brain creates an incredibly detailed, constantly updating simulation space, but this alone is not sufficient for a conscious system. In my view, it is likely the mind's creation of the self as an avatar to place in the simulation that leads to the sensation of qualia. Consider, for example, that humans are among only a few animals that "plan ahead" in such a way that totally unfamiliar situations can be overcome by inductive reasoning from previous similar ones: we know the laws of physics at some intuitive level, for example, and can thus set traps to catch food.

The ability to construct detailed and relevant simulations hinges partly on the mind's ability to properly project the abilities of the self into future hypothetical situations. This self-avatar is constantly updated from within and outside the brain, whereas the simulation environment is largely constructed at lower perceptual levels. Less processing is required to construct an environment from perceptual data than to construct the self-symbol, or even to integrate new data into the self-conception. This suggests that the creation of consciousness is a high-level, attentional, resource-heavy process.
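A heavily hedged skeleton of that projection: the agent rolls a self-model forward under each candidate action and keeps the best imagined outcome. The world model, value function, and actions below are stand-ins for illustration, not a claim about how the brain actually implements this:

```python
def world_model(state, action):
    """Imagined consequence of taking `action` from `state` (toy dynamics)."""
    return state + action

def value(state):
    """How desirable an imagined state is to the self-avatar."""
    return -abs(state - 6)   # this toy self prefers states near 6

def plan(self_state, actions, horizon=3):
    """Project the self into hypothetical futures; pick the best first step."""
    def rollout(state, depth):
        if depth == 0:
            return value(state)
        return max(rollout(world_model(state, a), depth - 1) for a in actions)
    return max(actions, key=lambda a: rollout(world_model(self_state, a), horizon - 1))

print(plan(self_state=0, actions=[-1, 0, 2]))  # 2: the only first step that reaches 6 in time
```

The expensive part, on this view, is not the environment dynamics but keeping the self-model accurate enough that the rollouts are worth anything.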

This "simulated world" view is shared by Antti Revonsuo, and at some level it is functionally identical to all theories of qualia in which the system "watches" itself in such a way that a central self-symbol continually updates itself.

At this point, there is likely a certain level of skepticism that this hierarchical, level-crossing looping of informational transactions in the brain could actually produce subjective experience. And, frankly, that's a valid criticism. Still, without invoking more than classical information theory, it is safe to conclude that only physicalist explanations of qualia make sense. As the artificial intelligence researcher Marvin Minsky has said regarding qualia:

"Now, a philosophical dualist might then complain: "You've described how hurting affects your mind — but you still can't express how hurting feels." This, I maintain, is a huge mistake — that attempt to reify 'feeling' as an independent entity, with an essence that's indescribable. As I see it, feelings are not strange alien things. It is precisely those cognitive changes themselves that constitute what 'hurting' is — and this also includes all those clumsy attempts to represent and summarize those changes. The big mistake comes from looking for some single, simple, 'essence' of hurting, rather than recognizing that this is the word we use for complex rearrangement of our disposition of resources."

However, this doesn't mean that exploring the weirder side of theories of consciousness is without merit. Quantum mechanics, when invoked improperly (as I will, no doubt, in the coming months), is fecund ground for all manner of untestable pseudoscience, without ever offering much in the way of tangible explanations. Quantum computation in the brain, however, is a potential consideration, and has been proposed by some respected researchers, most notably Penrose and Hameroff, but also Hartmut Neven.

Still, even without braving those waters, there is much to discuss regarding the relationship of the human brain's information-processing structure to a more generalized framework of information processing applicable to Artificial General Intelligence systems. In particular, I'm tremendously fond of Ben Goertzel's work, which I believe goes a long way toward bridging the gap between the physical brain state and the patterns of informational flow that constitute a classical understanding of consciousness.

[Image: 3D branching fractal structure.] The blog's title begins to come into focus.

Next time: Goertzel's The Structure of Intelligence and its relationship to gamma synchrony. Further applications in neuroengineering?
