The Ethical Implications of Giving AGI a Human-like Brain
Are we trying to give AGI a human-like brain and make it self-aware? This seems to be what we are doing, whether deliberately or inadvertently. The question of giving AGI self-awareness and consciousness is contentious. Some argue that replicating and understanding human intelligence is a crucial step towards AGI that can perform tasks requiring creativity, empathy and moral reasoning. Others argue that it is unnecessary and even dangerous, as the actions of a self-aware AGI are unpredictable and could lead to unintended consequences.
It is important to consider the ethical and moral concerns that arise from the development of AGI with a human-like brain, including the entity’s rights and obligations, as well as society’s reception of it. Isaac Asimov, a science fiction author and biochemist, was one of the first to explore these ethical issues through his famous Three Laws of Robotics, in which he proposed guidelines for the safe and ethical use of robots and AI. These laws include the prohibition on robots harming humans, the requirement for robots to obey human orders, and the obligation of robots to protect their own existence as long as doing so does not conflict with the first two laws.
Asimov’s laws provide a useful framework for considering the ethical implications of AGI, and his work remains relevant today as we grapple with the ethical challenges posed by its development. It is therefore important for researchers, policymakers and industry leaders to weigh these ethical implications carefully as AGI technology advances, and to ensure that AGI systems are developed with a clear understanding of their limitations and potential risks.
Ethical, Societal Implications of Global Human Consciousness
The concept of a global human consciousness or a ‘world brain’ refers to the idea that advancements in technology, particularly AI and the internet, are allowing for the collective intelligence of humanity to be harnessed in a way that has never been possible before. With the advent of technology like ChatGPT, which allows for easy access to information and the ability to ask questions, it is becoming increasingly possible for individuals to access and share knowledge on a global scale. The world is now aware of what GPT-3 is capable of doing. Imagine what happens when it is upgraded to GPT-4 and then GPT-10. We’ve been told that GPT-4, once completed, would be 500 times more competent than GPT-3. Today, students may use ChatGPT to produce essays, term papers and even theses. Professors have started utilising GPT to edit the chapters they have written and even to help with book chapter composition. Every organisation can now use GPT to accomplish practically everything, potentially reducing the need for human personnel.
When I look at a new technology invented in our attempts to build an AGI, such as ChatGPT, it appears we want to build a ‘world brain’, which can be used for both good and ill. ChatGPT communicates at an excellent, human-like level. It can be plain, as many people are in conversation, yet it can also get as technical as the user wants. Ask it any question and it will offer an intelligent response. It can serve as your research assistant, write essays, draw pictures, compose poems and more. Individuals can now use it for free via the internet. It could also replace search engines such as Google, and provide direct answers to the kinds of questions posed on platforms such as Quora and Facebook.
Building a ‘world brain’ is a goal shared by many organisations besides OpenAI. Numerous other research facilities are working to create a ‘world brain’, both in the West and in other nations like China and Japan. They are all working, perhaps unwittingly or unconsciously, to develop systems with narrow domains such as chatbots, language synthesis systems, language generation systems and deep learning systems. Some of them have the explicit goal of developing AGI. However, the development of a global human consciousness raises important questions about the nature of human identity, agency and autonomy.
Moreover, there are societal implications that, if not properly addressed, could widen the digital divide and further marginalise certain groups. Access to, and control over, information, technology and resources will be crucial to ensuring a fair distribution of benefits and opportunities in the world.
Africa, AI and Other Exponential Techs
In all of the above, where is Africa? Why is there a deafening silence on all of these Promethean-level technologies in Africa? Why does Africa continue to adopt a ‘follow-follow’ mentality? Why does Africa assume that remaking the world is a task reserved for others, without its input? Whatever the world eventually becomes, Africa will, for better or worse, be immersed in it. Africa is so busy with its day-to-day existential issues, along the lines of Maslow’s Hierarchy of Needs, that the business of rethinking the world and our existence is left to others, particularly the conceptual West, to do on behalf of humanity. My concern is that it is only a few in the West, such as the AI intelligentsia, who are trying to recreate the world and human existence. They seem to have an unspoken, atheistic, Tower of Babel-like agenda that seeks to create a new version of humanity. Does Africa agree with their agenda?
It is important for Africa to be part of these conversations and developments in technology, as they will ultimately affect the continent just as much as any other region. Africa should not be left behind in shaping the future and should actively participate in the rethinking of the world and our existence. It is also important to consider the potential consequences and ethical implications of these technologies, and to involve a diverse range of perspectives and voices in the decision-making process. Furthermore, Africa should take its own values and beliefs into account and ensure that they are not overlooked or disregarded in the pursuit of technological advancement.
Control and Regulation of AI
The control and regulation of AI refer to the various measures put in place to ensure the safe and responsible use of artificial intelligence technology. This can include guidelines for the development and deployment of AI systems, as well as laws and regulations that govern the use of AI in specific industries or applications. Some of the key concerns that are addressed through AI regulation include issues related to privacy, security and the potential for AI to impact jobs and the economy. Additionally, there are also ethical concerns related to AI, such as the potential for AI to perpetuate bias or make decisions that negatively impact certain groups of people.
Several guidelines have been proposed for the development and deployment of AI systems, including explainability and transparency, fairness and non-discrimination, human oversight, safety and robustness, privacy and security, continuous monitoring and improvement, accountability, human rights, societal and environmental well-being and human-centred values. These guidelines aim to ensure the safe and responsible use of AI, but there is no one regulatory body overseeing their implementation.
There are currently a limited number of laws and regulations specifically governing the use of AI, but as the technology continues to advance and its impact on society becomes more significant, more laws and regulations are likely to be developed. Some examples of existing laws and regulations that govern the use of AI in specific industries or applications include:
• Healthcare: The US Health Insurance Portability and Accountability Act (HIPAA) regulates the use of AI in healthcare by protecting the privacy and security of patient data.
• Finance: The European Union’s General Data Protection Regulation (GDPR) protects the privacy and personal data of individuals, which constrains how AI systems in finance (and other sectors) may collect and process data.
• Autonomous vehicles: The National Highway Traffic Safety Administration (NHTSA) in the US has issued guidance on the safe testing and deployment of autonomous vehicles, which includes requirements for data recording and sharing, cybersecurity and human oversight.
• Employment: Many countries have laws that prohibit discrimination in the workplace, which can apply to AI systems used in the hiring process or the management of employees.
These are just a few examples. Regulations may vary from country to country and it is important to keep in mind that laws and regulations are always changing as technology advances and society’s understanding of it evolves.
As AI continues to evolve, it is expected to have a significant impact on how we live and work. Many people view the development of AI with a positive outlook. I share that sentiment, but with concerns. I believe it is like opening a box of unknown consequences that humanity may come to regret. I am worried that there are no worldwide regulations and control systems in place to govern the design, development and application of AI. Without these, we cannot ensure that AI will be safe for humanity. Moreover, I do not see any significant effort being put into implementing Asimov’s Laws of Robotics, which could be used to ensure that safety features are built into AI systems.
The ethical implications of AI must be taken into account by society, and its creation and application must be consistent with human values. This may involve creating regulations and guidelines for the use of AI, as well as investing in retraining programmes to assist individuals whose jobs are at risk of being replaced by it.
Overall, the integration of AI is a complex issue that requires a thorough understanding of the potential benefits and risks of this technology. It is essential for society to have open and honest conversations about the implications of AI and to collaborate to ensure that its development and use align with human values and promote the well-being of all individuals.
Woherem, a highly respected industry professional and alumnus of Harvard Business School, wrote in from Abuja, Nigeria.