Your Personal State Machine

I’ve barely begun and I’ve already made quite the error in that I’ve told a couple of friends that I intended to start this project. So, to those friends in particular: abandon hope all ye who enter; there be monsters here. (Everyone else: take notes.) Quite frankly, I’m a little apprehensive about continuing now that I’ve done that—but on the other hand, I’m quite fond of telling life kindly, yet firmly, to royally fuck off—and that includes any anxiety that comes along with it. (You’ll soon find, if you haven’t already, that this is my go-to method for dealing with this overblown chemical reaction we call life.)

Let’s begin. This particular “musing” (because “blog” just seems too… blah…g) regards the nature of the human mind, insofar as we can classify it, and perhaps some speculation on the impacts that classification might have in the future, especially as it pertains to abstraction of emergent phenomena. As one might guess from this particular note, this post ought to be read from the perspective of a naturalist. And, to avoid the risk of utterly confusing anyone who has never heard the term, I’ll happily clarify: a naturalist (in this context) is merely an individual who believes that all things can be ascribed to some law of nature at some level of abstraction. In other words, naturalists do not believe in the supernatural. As a further point of clarification, however: naturalists do not necessarily discount strange phenomena—take magic, for example: if such a thing existed, a naturalist would believe that it could be explained through proper application of natural law.

“Any sufficiently advanced technology is indistinguishable from magic.” ~ Arthur C. Clarke

Priors

Often naturalists rely on evidence and scientific consensus to come to a conclusion regarding the nature of the universe—blind faith is not a part of the equation. And it is with this mindset that I approach my own views regarding the mind as an emergent phenomenon of the brain. This also means, however, that there are many assumptions that will need to be tackled at a later time… those might include, for example, the reasonable assurance of the lack of an omniscient deity and the supernatural in general, the necessity of evidence and the cop-out that is agnosticism, and of course the correctness of evolution and the general silliness of Boltzmann universes. In other words, dear reader, the passage that I’m writing will do you no good whatsoever if these assumptions are not adopted, even if only temporarily and for the sake of argument. We will certainly talk about all of the above, but as I said, it will be necessary to look for it in some other post when I get around to it.

So we have a starting point, and that is that the body is constructed through the proper application of proteins by autonomous biological systems without intervention by some foreign omniscient entity, and that evidence is necessary when submitting a claim as to the so-called “location” or nature of the mind. Good. From these axioms we can rule out other theories that have been put forth in the past—the notion of a soul, for example (insofar as one would normally describe it—as some other entity distinct from yet coupled to the body in some fashion), or the similar notion of the mind acting as some independent agent simply puppeting the body. Again, the scope of this particular musing requires these assumptions to be viewed as priors—justifications are out of scope and will be covered later. If you’ve made it this far without jumping ship, thank you for bearing with my bluntness in the matter. I will say that I had to start somewhere, and well, go hard or go home? (Now that those disclaimers are made known, we continue — again, at the reader’s risk.)

Usefulness

One thing that I really want to touch on before truly continuing is the concept of usefulness, which will likely be referenced throughout these philosophical notes. In particular, human language emerged precisely because it was useful for carrying out tasks and for the purposes of social engagement—but, as with most emergent phenomena, its ability to truly describe abstract concepts was never a design consideration (largely because language isn’t explicitly designed at all). To be clearer, language did not come about for the purposes of describing the cosmological universe or its mathematical representations (if it had, humans would likely be speaking something akin to Lincos). Rather, it came about to describe the emergent universe that we as humans (hello there, fellow human) experience every day. Therefore, layers of abstraction are absolutely necessary to achieve a level of comprehension sufficient (or nearly sufficient) for proper communication.

Take the concept of free will, for example. There exists a flavor of compatibilism in which free will is spoken of as if it exists for the purposes of furthering socioeconomic discourse (when assigning intent to a criminal offender, for example), even though those same people would assert that the brain—and by extension, the mind—obeys the whims of natural law. And I tend to subscribe to this sort of thought as well, as even in a deterministic environment folks still believe and feel as if they have agency (and this is far more important to language than underlying truths). And, to be clear, this holds true for nondeterministic environments as well—though that is a topic for another time.

In the notes that follow (both in this particular post and in others) I will talk about emergence and usefulness, but here I want to draw a distinction between the two. The former is ontological in nature and regards some phenomenon that exists as an abstraction of a more fundamental part of physics, whereas the latter is an epistemological distinction that simply makes discourse easier. Take, for example, the notion of a human society. Clearly (given our priors) we can describe society as an emergent phenomenon of the human mind and of communicative practices, among other things. But this isn’t particularly useful when discussing laws, procedures, or social norms—and therefore we have the concept of society to which we can relate directly. And I tend to think that free will and the mind can be spoken of in a similar sense.

State Machines and Consciousness

I’ve always been fascinated with toy state machines, especially when they interact as cellular automata. You may be familiar with the most famous of these, Conway’s Game of Life, but I’m partial to others like Wireworld or even more complicated implementations such as r74n’s Sandboxels or Minecraft. It seems to me that, given the physical nature of the brain, it could be none other than a fairly large state machine. Granted, as the body continues to grow and change over time, the set of possible states evolves, but the mind is a manifestation of that state nevertheless.
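
For the curious, here is a minimal sketch (in Python, nothing fancy) of the kind of toy I mean: one update step of Conway’s Game of Life on a tiny grid. The grid is the entire state, and each subsequent state falls out of mechanically applying one local rule everywhere; no extra ingredient is required.

```python
# One update step of Conway's Game of Life on a small, non-wrapping grid.
# The grid is the entire "state"; the next state follows purely from
# applying the same local rule to every cell.

def step(grid):
    """Return the next generation of a 2D grid of 0s (dead) and 1s (alive)."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbors, ignoring cells off the edge.
            neighbors = sum(
                grid[rr][cc]
                for rr in range(r - 1, r + 2)
                for cc in range(c - 1, c + 2)
                if (rr, cc) != (r, c) and 0 <= rr < rows and 0 <= cc < cols
            )
            # A live cell survives with 2 or 3 neighbors;
            # a dead cell is born with exactly 3.
            if grid[r][c] == 1:
                nxt[r][c] = 1 if neighbors in (2, 3) else 0
            else:
                nxt[r][c] = 1 if neighbors == 3 else 0
    return nxt

# A "blinker": three live cells oscillate between a horizontal and vertical bar.
world = [
    [0, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
]
for _ in range(3):
    for row in world:
        print("".join("#" if cell else "." for cell in row))
    print()
    world = step(world)
```

Run it and you’ll watch the classic “blinker” flip back and forth; the point is only that rich behavior can sit on top of a dead-simple state-update rule.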

In fact, a very real implementation of this exists in the form of the OpenWorm project, where scientists and engineers are working to implement a virtual Caenorhabditis elegans nematode. Of course, this is a far cry from the implementation of a human brain, but it’s a solid step in the right direction for this sort of technology. As such, it’s clear to me that there are no other “things” (outside the brain and that which it directly manifests) that make up the mind itself. And there are serious experiments and real-world cases that support this. One of the best-known cases is that of Phineas Gage, who experienced a serious set of neurological and emotional changes after a tamping iron was driven through his left frontal lobe. Not to mention that modern psychiatry is alive and well, routinely altering minds by chemically altering brains.
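
To be painfully clear, the following is not how OpenWorm models anything; it’s only a cartoon (in Python, with made-up weights) of what “a nervous system as a state machine” might look like at the very crudest level: each neuron holds an activation, and the next state is a pure function of the current state and a fixed wiring.

```python
# A cartoon of "a nervous system as a state machine" -- emphatically NOT how
# OpenWorm models C. elegans, just the crudest possible illustration.
# Each neuron holds an activation level; the next state is a pure function
# of the current state, the current stimulus, and a fixed synaptic wiring.

import math

# Hypothetical three-neuron loop: weights[i][j] is the synapse from neuron j to neuron i.
weights = [
    [0.0, 0.9, -0.4],
    [0.7, 0.0, 0.3],
    [-0.5, 0.8, 0.0],
]

def step(state, inputs):
    """One discrete update: weighted sum of activity plus input, squashed into (0, 1)."""
    nxt = []
    for i, row in enumerate(weights):
        total = inputs[i] + sum(w * s for w, s in zip(row, state))
        nxt.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid squash
    return nxt

state = [0.0, 0.0, 0.0]
for t in range(5):
    # A brief "stimulus" to neuron 0 on the first two ticks, then nothing.
    stimulus = [1.0 if t < 2 else 0.0, 0.0, 0.0]
    state = step(state, stimulus)
    print(t, [round(s, 3) for s in state])
```

The real project is vastly more detailed, of course, but the shape of the claim is the same: state in, rule applied, state out.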

To deviate slightly, there is some discourse, particularly when it comes to future possibilities of so-called “uploads” (where theoretically one’s mind could be uploaded into a computer for perpetual existence), wherein experts in the field discuss whether or not the rest of the body (outside the brain) is necessary for the proper execution of the state that represents the mind. Psychologist and neuroscientist Lisa Aziz-Zadeh, for example, is known in part for noting that consciousness relies on more than the chemical reactions found in the brain, and in fact necessarily relies on signals generated and sent by the body. (There is a decent episode of Sean Carroll’s Mindscape podcast in which she elaborates further on this notion, the transcript of which can be viewed here; as a side note, I would in general recommend that particular podcast if you’re someone who enjoys hearing from experts on a multitude of topics.) Yet these claims seem to assert less that the mind and consciousness can’t be simulated through silicon technologies, and more that our current experience as human beings—likes, dislikes, desires, pain—relies on some representation of the body. This merely implies that if humans could be simulated, then that experience may be radically different from what those of us in the “real world” experience, and if humans can ever choose to be “uploaded” (and we’re carefully treading on the line of science fiction here), the state machine that describes our mind would necessarily be modified accordingly. I don’t think it’s all that unlikely that any individual “uploaded” in this hypothetical would in fact experience a sort of stagnant ennui, rather than the joys associated with being a quasi-god in your own universe. Without corporeal needs, human instinct likely ceases—(insert some terrible anecdote here regarding the human need for suffering to act as a sort of control for comparison against joyful human experiences).

Good. So at this point I’ll break off from that particular tangent and return, for the time being, to reality. I don’t mind entertaining well-founded hypotheticals but I also want to take care not to go too far down the rabbit hole.

Scientific and Manifest Images

It would be irresponsible to speculate on the nature of the mind and of consciousness at large without mentioning the Hard Problem of consciousness and the distinction between the scientific and manifest images of the world, which, succinctly, allude to the failure of language (as mentioned previously) to properly situate the human experience within scientific terminology. I do think this is the very problem that leads so many individuals to philosophical suicide. In essence, although most serious neuroscientists would agree on classifying consciousness as an emergent phenomenon of the brain, our current inability to describe human experience in a satisfying manner (e.g. “why am I in this particular body, seeing through these eyes, hearing through these ears?”) shows where we fall short in terms of communicative ability.

And yet, the current inability to solve an equation should not lead one to toss mathematics out altogether. Humanity has not yet given up on P vs NP, for example. Likewise, the jump to a supernatural mind requires tossing out a large swath of evidence and taking a leap of faith into, by definition, unexplainable waters of unfathomable depth.

Wrap Up

This particular musing serves as part of the foundation for a larger philosophy that I hope to publish at some point, mania permitting. As mentioned, there are several assumptions herein that I’ll touch on later, and implications of these claims that I’ll also visit. But my hope is that this establishes how I will be treating the mind in future discourse.

lordinateur.xyz