In psi-fi, an important question is: can a machine be conscious? Not just smart, not even AI smart. Not just talkative. Not merely dexterous and reliable. But can a machine be self-reflectively conscious the way you and I are? We can program an android to say, “I think, therefore I am,” but does s/he mean it?
What we’re looking for is the nature of intrinsic human consciousness, the kind I have and you have. We’re not looking for a simulation, a master chess-player, an as-if consciousness, a “deep learning machine,” a Chinese room, or even a self-correcting, self-monitoring system.
As people, we have a sense of being alive, here and now, swimming in the flow of time, connected to others, continually challenged to make something out of nothing. Why can’t a machine have a subjective sense like that? Or can it?
Robots in my psi-fi stories get along well in human society, blend in, discuss things, drive cars, make decisions, play tennis, and have great rollicking adventures, but they pointedly lack certain abilities. They have no intuition, creativity, intersubjectivity, or sense of ultimate purpose. They struggle with idioms and metaphors. They’re good at physics and math but mystified by art and music. They’re quick to analyze and solve problems, but oblivious to the possibility of a complete re-framing.
Super-smart androids know how to behave appropriately at weddings and funerals but don’t know why. They know the anthropology, but don’t get the feelings. Even the best of them are hopeless with intimate relationships, despite using subtle pattern recognition and sophisticated output scripts. As the comedian George Burns famously said, “Sincerity – if you can fake that, you’ve got it made.”
What does the AI machine lack, exactly, that makes it come up short against human consciousness? Like Scarecrow opposite Dorothy, is it a human brain? Many philosophers would say so (John Searle being prominent among the Wizard of Oz theorists). But I think that answer just displaces the problem from the mind to the brain without answering the question. The brain is complex, but it’s not magical. Plopping a brain into an android skull would solve nothing.
We need a theory, or a hypothesis, or even just a wild-ass speculation for how a physical brain could produce intangible ideas. We have nothing. “More research is needed,” as they say.
Since the 1940s, scientists have suspected that neurons are analogous to switches and that their activity is analogous to computation. Neuronal activity was thus offered as an explanation for mental phenomena. That’s the source of the computational theory of mind: the brain is the hardware, genetics the software, and the mind the output. Voilà.
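For readers who want the switch analogy made concrete, here is a minimal sketch of the kind of threshold unit McCulloch and Pitts proposed in 1943, the historical root of the neurons-as-switches idea. The function name and values are mine, purely illustrative:

```python
# A neuron treated as a switch: it "fires" (outputs 1) only when the
# weighted sum of its inputs reaches a threshold. This is the whole
# McCulloch-Pitts unit; networks of these can compute logical functions.

def mp_neuron(inputs, weights, threshold):
    """Binary threshold unit: 1 if the weighted input sum meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# An AND gate built from a single such "switch":
print(mp_neuron([1, 1], [1, 1], threshold=2))  # fires: 1
print(mp_neuron([1, 0], [1, 1], threshold=2))  # does not fire: 0
```

Nothing in that little switch hints at experience, which is exactly the gap the next paragraph worries about.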
The computational theory of mind is a tidy formula except for that last part, about the intangible world of mental phenomena being the consequence of a hardware (or wetware) operation. That is not conceptually plausible. As noted in previous posts, nobody has a clue about how mental experience could be created from any set of switch closures.
What if we bracket the hardware-software analogy? Can we get a grip on subjectivity by simply taking it as we find it in nature? We have subjectivity. We are it. Why can’t we analyze what we’ve got and find its (presumed) components? Then we could determine how those components interact. Let’s forget hardware and computation for now and confront self-reflective experience as we discern it. Later, we can worry about how to implement any results in a computational medium.
The top-down approach to the mind has been tried, with interesting results. Plato (in the Republic and the Phaedrus) analyzed the mind (called the ‘soul’ back then) into three interacting components: reason, will, and emotion (I’m interpreting a little). That’s not bad as a theory of mind. It sounds, or at least feels, not-wrong. Plato went on to explain how a harmonious society could be built on those same three pillars.
Freud famously had a tripartite analysis of mind too. Its components were id, ego, and superego. The id is a seething cauldron of instinctive emotional desire (especially lust). The ego is the rational mind that must protect itself from that raw emotion. The superego is the conscience, the rules of thought and conduct that society has developed over generations to manage the disruptive id impulses. The superego kinda-sorta works, most of the time, but as a lid on the id, it leaks. For Freud, those three elements of mind were locked in eternal struggle and neither a harmonious society nor a self at peace was possible.
Many other three-way analyses of mind have been offered, including my own. Yes! I would be remiss if I did not mention my own effort from the top-down camp. I too came out with a three-component system of mind (see The Three-in-One Mind: A Mental Architecture, bit.ly/3-in-1-mind). I labeled my three components the Motivational Core, the Sensorimotor Self, and the Social Self. The Core provides fundamental, pre-personal motivation, not Freud’s lust nor Plato’s duty, but something more like Schopenhauer’s will or Nietzsche’s will to power. That motivation is channeled to the other two components, which convert it into their own fuel for bodily and social interaction.
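Purely as a toy, the channeling relationship can be sketched in code. This is my own illustrative rendering of the architecture’s flow of motivation, not a working mind and not an implementation of the theory; every class and value below is invented for the diagram:

```python
# Toy sketch of the Three-in-One architecture: the Motivational Core
# emits undifferentiated, pre-personal drive; each Self converts that
# drive into its own kind of engagement with the world.

class MotivationalCore:
    def generate(self):
        return 1.0  # raw drive, prior to any particular goal

class SensorimotorSelf:
    def convert(self, drive):
        return f"bodily engagement fueled by drive={drive}"

class SocialSelf:
    def convert(self, drive):
        return f"interpersonal engagement fueled by drive={drive}"

core = MotivationalCore()
drive = core.generate()
print(SensorimotorSelf().convert(drive))
print(SocialSelf().convert(drive))
```

The sketch makes one thing about any such analysis plain: the components and their interactions can be stated crisply, but whether the statement is true is another matter, which brings us to the obvious problem.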
An obvious problem jumps out of the bushes with this approach. Are any of these analyses true? None? Some, in part? Or are they all the equivalent of horse droppings? Since scientific epistemology does not readily extend into the realm of the mind, we are unable to reach a consensus. That is a separate problem to be solved, a problem of epistemological method.
However, whether the results of top-down analysis are true or false, crazy or reasonable, the point is that we need an alternative approach and we have one. The approach to consciousness that begins with hardware and computation is doomed from inception because there is no conceivable way for mental phenomena to be derived from physics. It seems eminently logical to put that approach aside and look directly at the phenomenon of interest: the mind, and especially, our sense of personal subjectivity.
We can easily examine the mind. We do it all the time in literature. I do it every day in writing psi-fi stories. No, it’s not scientific, but the scientific approach hasn’t worked so far. So we put it aside and bite into the apple we want to taste: pure subjectivity. What the hell is it? Is it anything? If it’s a delusion, I’d like to know that. What kind of delusion? How does it arise? Or is subjectivity actually something, some phenomenon of nature yet undocumented by science?
I think any psi-fi writer working within the current limitations of the scientific method must step outside the mainstream, take the top-down approach to analysis, and then skillfully tether the findings back to some cleat of science. That is the challenge.
To read this series on consciousness from the beginning, see:
- “What is Alive?” psi-fi.net/what-is-alive
- “Is Consciousness a Bag of Weasels?” psi-fi.net/is-consciousness-a-bag-of-weasels
- “Biting the Subjective Apple” psi-fi.net/biting-the-subjective-apple