As artificial intelligence (AI) continues to evolve, it is expected to have a significant impact on how we live and work. Integrating AI into society is a complex undertaking that requires a clear understanding of both its potential benefits and its risks.
I have always believed that artificial intelligence (AI) is a Promethean technology that can be used for good or evil. While we should welcome its positive uses, its negative impacts could prove too dangerous for humanity to control or endure; international treaties and controls should therefore be put in place to regulate the technology. With the arrival of ChatGPT and the promise of artificial general intelligence (AGI) and, possibly, artificial superintelligent (ASI) systems, I am even more convinced that internationally agreed sets of controls and laws, guiding the behaviour of such systems and hardwired into them, should be implemented as soon as possible.
I started studying artificial intelligence (AI) in 1983 when I enrolled in the master’s degree programme in cognition, computing and psychology at the University of Warwick in England. The course was all about AI – how to build AI systems by studying humans to determine what makes them intelligent and then applying that knowledge to develop intelligence-based systems. Since then, I have kept track of the development of artificial intelligence and its uses in various industries. The knowledge and understanding of what makes humans intelligent have been applied to developing AI systems capable of performing tasks previously thought to be unique to human intelligence. It is crucial to consider the possible disadvantages of these intelligence-based systems as we continue to push the limits of AI. That is why it is important to develop internationally agreed-upon sets of controls and laws to govern the development and use of AI or intelligence-based systems.
Intelligence-based systems, also known as cognitive systems, imitate human intelligence and execute activities that typically require human intelligence – such as interpreting spoken language, identifying objects in images, making judgments and learning from prior experience. When such systems are built, those who believe in the strong AI hypothesis describe them as having ‘human intelligence’ or ‘intentionality’. On the other hand, those who believe in the weak AI hypothesis view intelligence-based systems as tools or technologies used to automate or enhance specific jobs rather than as ends in themselves. They see AI as a tool for better decision-making, increased productivity and the provision of new insights and information. To those who believe in the strong hypothesis, however, if a system can display all the cognitive traits of people, it should be considered human.
During my studies, I came to believe that humans are very creative and capable of building any of the systems mentioned above over time. However, I did not believe in the strong hypothesis; rather, I believed – and still do – in the weak hypothesis of AI, as did people like Professor John Searle, who is widely known for his ‘Chinese Room’ thought experiment.
In his thought experiment, Prof. Searle imagines a person in an enclosed room who does not speak Chinese but is given a rulebook for manipulating Chinese symbols. When Chinese messages are passed into the room, the person matches the symbols against the rules and passes back the prescribed Chinese responses. To a Chinese speaker outside the room, it appears that whoever is inside speaks Chinese fluently. However, the person in the room does not understand Chinese; he or she is simply manipulating symbols according to a set of rules. Searle argues that this thought experiment illustrates that a machine or algorithm can simulate an understanding of a natural language without actually understanding it. He further argues that this is true of AI systems in general: they can display traits associated with human intelligence without possessing genuine understanding or consciousness. He suggests that the true objective of AI should be to produce machines that can perform specialised tasks efficiently, rather than to try to build machines that can fully comprehend and have consciousness like humans.
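Searle’s point can be made concrete with a few lines of code. The sketch below (the phrases and rulebook entries are hypothetical examples, not from Searle) “converses” in Chinese purely by looking up symbols in a table, exactly as the person in the room follows the rulebook – at no point does the program interpret what the symbols mean.

```python
# A minimal sketch of the Chinese Room: rule-based symbol lookup
# with no understanding. The rulebook entries are illustrative only.

RULEBOOK = {
    "你好": "你好！",              # a greeting answered with a greeting
    "你会说中文吗？": "会。",       # "Do you speak Chinese?" -> "Yes."
    "今天天气怎么样？": "很好。",    # "How is the weather today?" -> "Fine."
}

def chinese_room(symbols: str) -> str:
    """Return the response the rulebook prescribes for the input symbols.

    The function never interprets the symbols; it only matches them
    against the table, mirroring the person in Searle's room.
    """
    # Fallback reply ("Sorry, I don't understand.") for unknown symbols.
    return RULEBOOK.get(symbols, "对不起，我不明白。")
```

To the outside observer, the function appears to answer in fluent Chinese; inside, there is only pattern matching – which is the distinction Searle draws between simulating understanding and having it.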
John Searle’s arguments for the weak AI hypothesis have been widely studied and remain influential in the field of AI. The philosopher Hubert Dreyfus voiced related scepticism about machine understanding, while others, such as the cognitive scientist Daniel Dennett, have prominently criticised Searle’s argument. The thought experiment has indeed drawn many questions and objections: some argue that it is not a fair representation of how AI systems work and that it does not take cognizance of the complexity of the human mind and consciousness. I believe that Prof. Searle is right; AI systems are merely powerful symbol manipulators. However, the fact that they are merely manipulating symbols does not detract from the fact that humans can judge their capabilities as they like and, so, can easily declare them to be human or even super-human.
Motivations of the AI Intelligentsia
The motivations of the AI intelligentsia – the researchers, engineers and scientists who work in the field of AI – include scientific curiosity, technical challenge, commercial opportunity, social impact, national security and ethical considerations. They are playing a significant role in shaping the future of humanity by advancing AI technology and developing artificial general intelligence (AGI) systems that have the potential to revolutionise many aspects of daily life.
The AI intelligentsia is taking humanity to a new epoch – an age of intelligence-based systems that will co-exist with humans at home, in workplaces and in social environments, or that could eventually rule over humans. It is a new dawn, a brave new world about to be revealed, as we redefine the origins and nature of Homo sapiens and move to a new stage of human evolution, in which humans evolve into cybernetically and genetically engineered ‘Homo Deus’, as proposed or discussed by thinkers such as Yuval Noah Harari, Dan Brown, Michio Kaku and Ray Kurzweil.
Kurzweil believes that AGI will surpass human intelligence in a wide range of tasks and could be used to solve problems like curing diseases, terraforming other planets and overcoming death. His anticipated new book, The Singularity Is Nearer, will expand on his earlier ideas and predictions about AI. He has previously stated that he believes computers will pass the Turing test – which measures a machine’s ability to exhibit intelligent behaviour comparable to, or indistinguishable from, that of a human – by 2029. He has also predicted that by 2045 AGI will be achieved and will be able to improve itself at an exponential rate, leading to a rapid acceleration of technological progress and, perhaps, to the AGI becoming an artificial superintelligent (ASI) system.
Harari, for his part, suggests in his book Homo Deus: A Brief History of Tomorrow that advanced brain-computer interfaces (BCIs) could be a step towards the development of artificial general intelligence (AGI). He claims that, by directly connecting human brains to computers and other machines, we can improve our cognitive abilities and eventually create AGI systems that match or even exceed human intelligence. It is important to remember that AGI remains speculative and that the notion that BCIs would be a step toward its development is purely theoretical. There are different approaches to creating AGI, and the relationship between BCIs and AGI is not yet well understood. However, the idea that BCIs could significantly improve human cognitive abilities is intriguing and warrants further investigation.
In contrast, Martin Ford argues in his book The Rise of the Robots: Technology and the Threat of a Jobless Future that the development of AGI could profoundly disrupt the job market, because machines could take over many tasks currently performed by humans. In particular, he asserts that low-skilled and repetitive jobs will suffer significantly from widespread automation and the rise of AGI. He also suggests that AGI could deepen inequality, as the people who own and control these technologies become increasingly wealthy and powerful while a growing number of people become unemployed and are left behind.
He believes society should start preparing for these changes now by investing in education and training programmes and enacting new policies to help people adjust to a rapidly changing job market. It is worth noting that this, too, is a speculative topic and that opinions differ widely on the potential impacts of AGI on the job market and the economy as a whole. Some experts believe that AGI could create new job opportunities and boost economic growth, while others believe it will lead to job displacement and widen the gap between the rich and the poor.
Woherem, a highly respected industry professional and alumnus of Harvard Business School, wrote in from Abuja, Nigeria.