Photo taken by Adrian at the Park Hyatt Ningbo, China

Perhaps Consciousness is Just Human Observability?

adrian cockcroft
6 min read · Jun 29, 2019


I recently read Annaka Harris’ new book Conscious, about the mysteries and nature of consciousness, and had an epiphany. What we do to make our computer systems observable is actually closely related to consciousness. The essential quality of consciousness is a model of the system. That model is fed with information and interrogated about the overall health and behavior of the system. If the system is ourselves, then we can ask “how do you feel?”, “do you need anything?”, “what did you learn today?”, “what do you think is going to happen next?”, “what is the solution to this problem?”, “what do you think about this new information?” etc. We have a verbal query engine for our model. The model isn’t really in charge of us in the sense that we feel it is; it’s monitoring and making sense of what is happening around it, after the event. The illusion that we consciously decide in advance is discussed and deconstructed in Sam Harris’ book Free Will. (Yes, Sam and Annaka are a couple.) We can query the model to evaluate how well we think we are operating, apply feedback, and pre-calculate the effect of future actions.

The modeling capacity of the human brain is very sophisticated. All mammals are built from the same components, so it makes sense that a mouse is also conscious. Basically anything with neurons (e.g. lobsters have about 100,000 neurons) is capable of modeling its environment and reacting to danger, food etc., with an ability to update the model (learn) from experience. There’s also evidence that plants and bacteria are capable of modeling and learning about their environment without neurons. We can call this consciousness as long as we understand that their models are simpler and can only store and react to a smaller set of inputs than a human’s. Among living things, there is no magical cutoff point where consciousness stops. There’s a school of thought that this continues into inanimate objects, but there I think a clear cutoff does exist, because there needs to be a place to store and mutate the model, however simple. So an individual atom is not conscious, and taking the law of conservation of energy and Shannon’s Information Theory into account, there have to be enough particles, arranged in a way that can capture information and form a model that can be queried, for a thing to be able to react in a way that learns and adapts to its environment. At its most basic, something like a rock formation or a crystal may be arranged in a way that reacts to its environment and captures a crude model. We build systems from crystals that we call computers, which can capture increasingly sophisticated models, so in that sense computers have their own primitive form of consciousness.

We experience a very specific kind of human consciousness, so the question arises whether we can make computers that have a recognizable consciousness we can communicate with, or whether computers will develop their own consciousness that is eventually more sophisticated than ours. I don’t see why not on both counts, and I think we’re moving in that direction.

This is different to the question of whether we can figure out how to create artificial intelligence, as I don’t think intelligence is a prerequisite for consciousness; it’s an attribute of more sophisticated conscious systems that allows us to interact with and view the internal model more directly than by observing the raw behavior of the system. It’s possible for a human to be unconscious or asleep without losing intelligence in the process. When we wake up, we’re intelligent again. Artificial general intelligence and “common sense” require a very sophisticated model of much of the world, so that we can interact with the AGI as if it was a human. I think our attempts to build AGI systems will include consciousness, as part of the model of their own operations that forms their monitoring and observability capability.

Consider a conventional computer system built to perform a common task, like signing up new customers via a series of forms on a web site: it isn’t conscious in itself. If all it does is log what it does, then a human operator looks at the logs and applies a model of how the system is supposed to work to decide whether it’s working happily or not, and what might happen if lots more customers tried to use it at once. If we add a monitoring and alerting system to watch over the web site, then that system also has to encode a model, by having someone create rules and tune alert thresholds. If we build a monitor that learns what normal and abnormal behavior looks like, and we can ask it whether the system is happy or not, then we have started to add a simple form of consciousness (see the sketch below). If we add a capacity planning model then questions about the future can be asked, and the ability to query the observability model using natural language could give it a human-like situational awareness that would operate and interact in a way that would appear conscious to a human. As it absorbed new information and evaluated its own alerts, state, and capacity model, it would be thinking to itself. I don’t think that’s different in kind from our own consciousness.
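As a concrete illustration, here’s a minimal Python sketch of a monitor that learns a baseline of normal behavior and can answer the “are you happy?” question. The class name, metric, window size and threshold are all invented for this example; a real system would track many metrics and use a far more sophisticated model.

```python
import statistics
from collections import deque

class HappinessMonitor:
    """Learns what 'normal' looks like for one metric and reports
    whether the latest observation fits the learned model."""

    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.samples = deque(maxlen=window)  # rolling memory of recent values
        self.threshold = threshold           # z-score beyond which we worry

    def observe(self, value: float) -> None:
        self.samples.append(value)           # absorb new information into the model

    def is_happy(self) -> bool:
        if len(self.samples) < 10:
            return True                      # too little history to worry yet
        history, latest = list(self.samples)[:-1], self.samples[-1]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1e-9  # avoid dividing by zero
        # Happy if the latest observation sits inside the learned normal band
        return abs(latest - mean) / stdev < self.threshold

monitor = HappinessMonitor()
for latency_ms in [12, 11, 13, 12, 14, 11, 12, 13, 12, 11, 95]:
    monitor.observe(latency_ms)
print("happy" if monitor.is_happy() else "unhappy")  # -> unhappy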

If we accumulate the state of consciousness over time then we end up with emotions. The (dear departed) Anki Vector robot was an interesting attempt to build an emotional model that humans could interact with. A Vector that has been idle for a long time gets bored; it’s capable of detecting motion and using basic face detection to tell whether someone is paying attention to it. A bored robot tries to find a human and attract attention. If it tries a task like piling blocks up and succeeds, it makes happy movements. If its battery is running low it gets hungry and goes back to its charging station. It has a simple internal model that is expressed using cues that humans can understand, and it is conscious while it’s operating and unconscious when it’s sleeping and recharging.

It makes perfect sense to me for a computer system to express its state in emotional terms. Accumulating states over time and deciding whether it is happy, sad, confused, bored, stressed, worried, hurting etc. could be a useful way to express state that humans would relate to more readily than graphs, numbers or a mailbox full of alerts. Then we could ask the system “what’s wrong?”, “why are you worried?”, “where does it hurt?”, “what do you need?” and get useful replies that reflect the consciousness of the system.
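To make that concrete, here’s a small sketch of how accumulated health signals could be folded into an emotional summary. The signal names, streak counters and thresholds are all invented for illustration; the point is that the feeling reflects state accumulated over time, not a single sample.

```python
from dataclasses import dataclass

@dataclass
class Mood:
    """Accumulates health signals over time into an emotional summary."""
    error_streak: int = 0  # consecutive intervals with elevated errors
    load_streak: int = 0   # consecutive intervals near capacity
    idle_streak: int = 0   # consecutive intervals with almost no traffic

    def update(self, errors_high: bool, load_high: bool, idle: bool) -> None:
        # Streaks grow while a condition persists and reset when it clears
        self.error_streak = self.error_streak + 1 if errors_high else 0
        self.load_streak = self.load_streak + 1 if load_high else 0
        self.idle_streak = self.idle_streak + 1 if idle else 0

    def how_do_you_feel(self) -> str:
        if self.error_streak > 5:
            return "hurting: errors have stayed elevated for a while"
        if self.load_streak > 5:
            return "stressed: running close to capacity"
        if self.error_streak or self.load_streak:
            return "worried: something looks off, watching closely"
        if self.idle_streak > 10:
            return "bored: hardly any traffic lately"
        return "happy: no worries"
```

An operator could then ask “how do you feel?” and get back “stressed: running close to capacity” instead of a page of graphs.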

Back in 1995, after a vacation in Australia, I wrote a monitoring tool for Sun’s Solaris-based systems called “virtual_adrian”. It did what I would do if I was looking at the system myself, so I could go on holiday and still be watching over systems. It didn’t plot lots of graphs and pages of data; most of the time it just reported “No worries, mate”. That was all you needed to know. If the system was unhappy, it would report something like “Disk sd1 is slow”. It was a free, open source tool, and if your machine ran out of swap space and crashed, the last message from virtual_adrian you would see in the console log was a quote from Terry Pratchett’s Discworld: “+++OUT OF CHEESE ERROR, PLEASE REINSTALL UNIVERSE AND REBOOT+++”. It seemed appropriate. To me this looks like an early attempt to give a computer a conscious sense of its own happiness, and a bit of personality.
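In spirit, the rules looked something like the Python sketch below. The original tool was not written in Python, and the metric names and threshold here are made up for illustration, not virtual_adrian’s actual rules:

```python
# A virtual_adrian-style rule in spirit: say nothing but "No worries, mate"
# unless a specific rule fires. Threshold and metric names are illustrative.
def check_disks(disk_service_times_ms: dict[str, float],
                slow_threshold_ms: float = 50.0) -> str:
    complaints = [f"Disk {name} is slow"
                  for name, svc_ms in disk_service_times_ms.items()
                  if svc_ms > slow_threshold_ms]
    return "; ".join(complaints) if complaints else "No worries, mate"

print(check_disks({"sd0": 8.2, "sd1": 120.5}))  # -> Disk sd1 is slow
```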

Bottom line: my answer to the question of whether consciousness is an intrinsic capability of things all the way down to inanimate objects is that, to be conscious, an object has to have a mechanism for adaptively modeling its health and environment.

Update: 2022 — A very interesting talk put on by the Santa Fe Institute aligns well with my thinking on this subject. I need to dig into this in more detail: Autodiagnosis and the Dynamical Emergence Theory of Basic Consciousness, https://www.youtube.com/watch?v=uJabuvcQTQA
