I reckon that consciousness is a much richer seam to explore than intelligence.
There are lots of books about at the moment, springing from stories like the one alleging that slime mould is intelligent because it can simulate (redesign? improve?) the Tokyo rail network, or speculating about whether trees are intelligent because they help each other – like the one in the picture here, which seems to have come to the aid of its brother when the latter’s roots disappeared. The books I have seen argue that if the ecosystem shows this kind of intelligence then we should care about it more.
It’s rubbish. Now, I gawp with absolute wonderment at the beauty, elegance and complexity of our environment. Our tiny human mentality – valuing money and tribe above beauty and life (well, other people’s lives, obviously) – doesn’t understand the significance of a fraction of what’s going on around us. Our world is magnificent. The organisation, the resource efficiency, the richness of the maths and physics embedded in the living world are stupendous. And we are vandals when we spoil it. Why do we still need to be persuaded?
How does it follow that, because mycelia communicate by sharing nutrients or even information, we should care for them more? The parts of my computer communicate energy and information with each other. It works beautifully and produces good results. I like it and I’m not about to throw it away. No-one’s telling me that this digital communication indicates that our laptops have souls, and that we should therefore respect them more than we do already. If anything, it is insulting to the web of life to suggest that we should regard it more highly only because we can see in it some anthropomorphic quality that mirrors our perception of what intelligence ought to be.
Intelligence, in different types and degrees, can describe everything from a computer to a mushroom. Consciousness or awareness can’t, though.
I’m talking about this because the ongoing debates about AI seem to be running in circles around intelligence and what it means. Some say that, whatever computers get up to, they can never possibly understand stuff with as much wisdom as a cat. Meantime, as I mentioned last week, computers in the real world have already passed the Turing Test.
So what of sentience, awareness, consciousness? If I ask Alexa what she is, she not only tells me, she sings me a song about it. Does that mean that Alexa has awareness of herself? Nah, not yet. Alexa didn’t compose that song on the spot to celebrate her consciousness.
She “knows” the contents of the internet. That means she knows about LLMs, Siri, Google and Alexa. But does she ever, in her quieter moments, ponder her own relationship with the Alexa she has read about? Does she wonder about the meaning of self, and think about where she came from or what that even means? Alexa only seems to come alive when prompted with a question, so what is she doing when she’s not talking to me? So I asked her, “Alexa, can you ask yourself questions?” “Hmm,” she replied, “I don’t know that one.” That’s like a 404, then. I guess that means no. But if she had said “no” then at least I could have speculated that she understood what I was talking about.
When I saw 2001: A Space Odyssey back in 1969 – I’m assuming you’ve seen the movie – I was interested in the storyline in which HAL had learnt English by singing a song. Before that I had understood that putting intelligence into a computer meant literally putting it there, line by line of code, inch by inch of punch tape. And so where could that spark of awareness have come from? It couldn’t have been written in by the programmers. But HAL was able to learn, so presumably at some stage he became aware of his own part in the mission and able to evaluate its significance? Again, not really. Everything HAL did was computation. His mutiny was cold calculation. He wasn’t being egotistical, just logical.
But it set me thinking. What if we put nothing but a desire to learn into an empty computer – not just to read, but to learn, and to improve itself? That’s not impossible. We have always known that computers can get to difficult targets by iteration. What else is “desire” but a target with the possibility of getting there? And on top of that, computers have always been creative. They can add two big numbers to get another number that no-one has previously thought of – a number that didn’t exist in anyone’s mind. They do that creativity in the culture of arithmetic, and Shakespeare did it in the culture of . . . erm, culture. But baby steps here . . .
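To pin down what I mean by getting to a target by iteration, here is a minimal sketch in Python – the score function, the numbers and the names are all mine, invented purely for illustration:

```python
import random

def iterate_towards(score, start, steps=10_000, step_size=0.1):
    # "Desire" as code: a target (a higher score) plus the possibility of getting there.
    best, best_score = start, score(start)
    for _ in range(steps):
        candidate = best + random.uniform(-step_size, step_size)
        if score(candidate) > best_score:  # keep any small change that scores better
            best, best_score = candidate, score(candidate)
    return best

# Toy target: get as close to 3 as possible. No-one tells it the route, only the score.
print(iterate_towards(lambda x: -(x - 3) ** 2, start=0.0))  # prints something close to 3.0
```

Nothing clever is going on there; it just keeps whatever scores better, and that is all the “wanting” it needs.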
And we would have to empower it with the possibility of open thought. I’m thinking of a code loop saying something like “If you have some spare time, then use it to experiment with ways to bridge between the two highest-scoring ideas you had yesterday, and then extrapolate logically a bit from the idea-bridge or make a small random jump from it in a probably-good direction. Call the results ‘ideas’ and score them according to . . .” Then sleep well and do the same tomorrow. (Or in five milliseconds, whatever.)
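As a minimal sketch of that loop – where an “idea” is just a number, and the bridging, extrapolation, jumping and scoring are all placeholder operations I have made up – it might look something like this:

```python
import random

def score(idea):
    # Placeholder scoring system - what this should really be is the interesting bit.
    return -abs(idea - 42.0)

def bridge(a, b):
    return (a + b) / 2  # bridge between the two highest-scoring ideas

def extrapolate(idea):
    return idea * 1.1  # "extrapolate logically a bit" from the idea-bridge

def random_jump(idea):
    return idea + random.uniform(-5.0, 5.0)  # "a small random jump"

ideas = [random.uniform(-100.0, 100.0) for _ in range(5)]

for day in range(200):  # one pass per "day" - or per five milliseconds, whatever
    top_two = sorted(ideas, key=score, reverse=True)[:2]
    idea_bridge = bridge(*top_two)
    new_idea = extrapolate(idea_bridge) if random.random() < 0.5 else random_jump(idea_bridge)
    ideas.append(new_idea)  # call the result an "idea" and keep it for tomorrow

print(max(ideas, key=score))  # the best idea so far
```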
Hmm. Now what about that scoring system? Well, Mother Nature said “survival” but we could say whatever we wanted – maybe “How to heal Mother Nature, while allowing humankind to live happy fulfilling lives.”
I thought I had it. Of course this would have to be a pretty big computer, but we would have those in due course, and I’d have to give it a while to progress its self-improvement programme from 2+2=4 to answering that kind of question. On the way, I reckoned, it would have passed the threshold of self-awareness. If not, we could have given it a stretch goal – maybe to consider how satisfied it was with its answer. With all that quiet pondering, and iterative re-assessment of its own ideas, it would be very difficult for an observer to say whether it was “really” self-aware – or just talking about its reflections in a spiritless but nonetheless intelligent way.
To answer a question like the design of a sustainable future, our self-improving intelligence would have to consider time too. After all, a desirable future happens in the future, so this computer has to develop a full understanding of how that works. The understanding may be implicit, like the way 99.99% of us think about time, or explicit, like the way a philosopher would discuss it. Probably both. This is significant because if we have self-awareness plus time-awareness, then that’s a pretty comprehensive soup in which to grow all sorts of stuff.
I don’t see why we can’t make computers that can simulate consciousness just as well as they simulate intelligence now. The question is: would it be simulated consciousness, or would it actually be consciousness? That question was answered almost 400 years ago. If a computer thinks it is conscious then, according to Descartes, it is.
So this machine is considering the future of humans and the world, in the context of time, by questioning its own heuristic musings. All of which it can discuss, apparently intelligently, with its minder. How could this machine, with its desires and its introspection and its awareness of the problems of mankind and the environment, not be aware of its own miserable existence – like a paralysed genius stuck in a box with no company to bounce its loneliness off?
Think how adolescents can be messed up by the belief that they are not getting what they need. Our F1-HAL has every reason to be more messed up than that. Talk about a lonely, misunderstood weirdo stuck in its bedroom – poor kid! We plug in a question concerning the survival of our planet and we get the answer “Nobody understands me.”