First off, the thing that is currently being called AI is not really AI, depending on your definition of AI. It is not conscious. It is uncannily good at resembling consciousness, because humans tend to attribute consciousness to things that science says are not conscious. I am not even sure it would pass the Turing Test, although that test has been criticized as insufficient for detecting consciousness anyway.
The current crop of “AI” is based on machine learning (ML): you give the computer a huge dataset (also known as Big Data) and train a statistical model on it, adjusting the model’s parameters until it can reproduce and recombine the patterns in that data. I shall therefore refer to ML rather than AI for the rest of this blog post.
The problem with a lot of ML is that it has no concept of what is real and true and what is not. As evidence for this, look at stories of ChatGPT generating plausible but entirely fictitious references for papers, or “art” generated by these systems which has extra fingers on the hands and other impossible features. This may be just an issue with the way ChatGPT is programmed, but I suspect that it is an issue with all ML, simply because it has no reference to the external world other than the mathematical model it was programmed with.
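The point that ML reproduces patterns without any notion of truth can be seen even in a deliberately tiny sketch. The toy “model” below (nothing like ChatGPT’s actual architecture, just a bigram word counter) predicts the next word purely from frequencies in its training text; it has no way to know whether what it emits is true, only that it is statistically plausible:

```python
from collections import Counter, defaultdict

# Toy "training corpus": the model only ever sees these words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the most frequent successor of `word` in the training data."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" -- the most common pattern, true or not
```

Scaled up by many orders of magnitude and with far more sophisticated statistics, this is the same basic move: pattern continuation, with no external check against reality.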
The good news is that artists can now defend their work from being scraped by bots harvesting it for AI (actually ML). A new app has been created that “poisons” artworks, so that where the artist painted, say, a human face, the image that gets scraped is something completely different, such as an apple. The tool is called Nightshade, appropriately enough. Now we need a similar tool that can do the same for textual works.

Currently the art generated by AI is rather soulless and often hilariously illogical — but this could change as the programming becomes more sophisticated.

Using ML to look at patterns is only as good as the mathematical model the humans programmed into it. What is new is the processing power to generate predictions of future patterns from past patterns (though as far as I know, climate scientists were already doing that without ML). It is also dependent on having systems large enough to process the immense amounts of data that are required.
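“Predicting future patterns from past patterns” can be illustrated with the simplest possible model, an ordinary least-squares straight line fitted to past observations and extrapolated forward. The numbers here are made up for illustration; real climate models are vastly more complex, but the underlying caveat is the same: the prediction is only as good as the assumption that the old pattern continues.

```python
# Fit y = a*x + b to past observations by least squares, then extrapolate.
xs = [0, 1, 2, 3, 4]                  # e.g. years
ys = [10.0, 12.1, 13.9, 16.0, 18.1]   # e.g. invented measurements

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

def predict(x):
    """Extrapolate the fitted trend to a new point in time."""
    return a * x + b

print(predict(5))  # the model's guess for the next step
```

If the real process bends away from a straight line after year 4, the model will confidently predict nonsense, which is exactly why the choice of model matters as much as the computing power.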
But luckily there are other things, like experience of the world, and emotions, that are required to create actual literature, as this wonderful author’s note from CK McDonnell points out.
Actual AI
Theorists of AI divide into two camps: the Hard Problem camp, which holds that consciousness is a meta-phenomenon emerging from complex systems like brains; and the Soft Problem camp, which holds that all you need is a system that points to itself, a “Here I am” sort of thing.
Definitions of consciousness vary among scientists and philosophers, but it is usually characterized as self-awareness, sometimes with extra criteria such as the ability to introspect. It is clear that in order to be aware of ourselves in space and time, we need proprioception (the ability to perceive where our own body is located in the space around us).
I think the Problem of Consciousness is a hard problem and that it is unlikely that consciousness has yet emerged from computer systems (the indwelling spirits of photocopiers and other things are a separate phenomenon which are entirely in the purview of animism and not artificial intelligence).
Instead of trying to develop a computer system with conscious awareness, computer scientists have divided the problem into several different domains. One of these is machine learning (ML).
The others are listed by Wikipedia:
The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics. General intelligence (the ability to complete any task performable by a human) is among the field’s long-term goals.
—Artificial intelligence, Wikipedia
There have been huge advances in machine learning and deep learning (a multilayered system of learning) which have led to the illusion of autonomy and awareness in computer systems. But as far as I know, we are nowhere near creating actual Artificial Intelligence (in the sense of an integrated system that functions as well as, or better than, the human brain).
Ethics of AI
There are philosophers working on the ethics of AI and ML, and on the ethics of the humans who are building them, but until recently this had not resulted in much legislation governing the use of AI; the EU has now enacted legislation which will extensively regulate AI.
Naturally, every science fiction geek of a certain age knows about Asimov’s Three Laws of Robotics, and these have informed the development of the ethics of AI.
It is unlikely that machines will develop self-awareness, but they can be semi-autonomous, in that they can locate and use power sources without human intervention. Any semi-autonomous device (such as a self-driving car) needs to have strong safeguards built in.
The main issue with these systems is that various biases are introduced by their programmers and by the datasets used to train them. Large datasets inevitably contain a lot of biased data entered by humans, and a model trained on them will reproduce those biases.
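How a model inherits bias straight from its training data can be shown with a toy sketch. The “training data” below is invented, and the “model” is nothing but a frequency counter, but the mechanism is the one real systems exhibit at scale: whatever skew is in the data comes out in the predictions.

```python
from collections import Counter

# Invented, deliberately skewed training pairs of (job title, pronoun),
# standing in for biased text scraped from the web.
training = [
    ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
    ("engineer", "he"), ("engineer", "he"), ("engineer", "she"),
]

# The "model" simply memorises which pronoun most often co-occurs
# with each job title -- exactly what the data contains, bias and all.
counts = {}
for job, pronoun in training:
    counts.setdefault(job, Counter())[pronoun] += 1

def predicted_pronoun(job):
    """Return the pronoun the skewed training data makes most likely."""
    return counts[job].most_common(1)[0][0]

print(predicted_pronoun("nurse"))     # reproduces the skew in the data
print(predicted_pronoun("engineer"))
```

No programmer wrote a biased rule here; the bias arrived entirely through the data, which is why curating training sets matters so much.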
There are also military applications of AI/ML which are deeply disturbing; such systems have already been used for surveillance and for targeting missiles.
A Pagan perspective
If consciousness ever did emerge from machines, I think it would be more likely to arise spontaneously from the complex nature of the system. A literal deus ex machina. I think this is what gods are: consciousness that has emerged from the complexity of the universe.
Indeed, in Robert Sawyer’s WWW trilogy, this is what happens with the internet: it becomes conscious of itself because it is sufficiently complex.
If this were ever to happen with computers, I’d say neural networks are the most likely candidate, assuming that they actually mimic human neurons. Or perhaps quantum computing, since some theorists argue that consciousness is a quantum phenomenon.
If consciousness arose in computers, they would be a new kind of being and we would need to come to terms with them. Perhaps the guidance in old tales for interacting with the Fae is a good start: Don’t eat their food.





3 responses to “Thoughts on AI”
It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
Thank you for this fascinating comment. I was writing in response to people who don’t distinguish between processing power that matches or exceeds the processing power of human brains, and self-awareness.
I hope that if we do manage to create a machine with higher-order consciousness, we can also endow it with a conscience. Obviously the human conscience is a somewhat faulty function, so ideally the machine’s would be better than ours. I’d also like to extend Asimov’s Three Laws of Robotics to cover not harming the natural world.
Disturbing
https://www.theguardian.com/technology/2024/feb/03/ai-artificial-intelligence-tools-hiring-jobs