Artificial Intelligence, UN & Global Safety 


In October 2023, President Joe Biden signed an Executive Order intended to ensure that artificial intelligence (AI) is developed safely and remains accessible to all of humanity as dependence on the technology grows. The initiative came amid wider unease about the power of large technology companies, unease later underscored by the United States government’s antitrust case against Apple over alleged monopolistic practices. The order set the stage for a global conversation on AI safety and accessibility.

Building upon this, a coalition of 123 nations, led by the United States, brought a draft resolution before the UN General Assembly on March 21, 2024. The proposal received overwhelming support and was adopted by consensus, with all 193 member states endorsing it without a vote. U.S. Vice President Kamala Harris and National Security Advisor Jake Sullivan hailed the landmark resolution as historic, emphasising its role in establishing principles for the safe use of AI.

Secretary of State Antony Blinken lauded the resolution as a groundbreaking effort, marking the first global approach to AI development and usage. It signalled a collective endeavour to ensure that AI benefits all nations, upholds human rights, and prioritises safety, security, and trustworthiness. This endorsement by the UN General Assembly lent significant weight to international efforts aimed at regulating AI.

While the International Telecommunication Union (ITU) had previously been instrumental in promoting standards and regulations for AI development, the UN resolution lent additional support to those endeavours. By aligning with ITU positions, the resolution further solidified the global commitment to fostering responsible AI innovation.

The UN General Assembly’s endorsement of the AI resolution marked a pivotal moment in global efforts to harness the potential of AI while safeguarding against its risks. It underscored the importance of international cooperation in shaping the future of AI for the benefit of humanity.

In a statement emphasising that AI must serve the public interest, Harris stressed the need to adopt and advance the technology in ways that protect people from harm while ensuring everyone can share in its benefits.

“AI must be in the public interest – it must be adopted and advanced in a way that protects everyone from potential harm and ensures everyone can enjoy its benefits,” she said.

Echoing this sentiment during the General Assembly meeting last September, President Biden affirmed the United States’ commitment to collaborate globally in harnessing AI for positive outcomes while mitigating profound risks to citizens.

That commitment took concrete shape on October 30, 2023, when President Biden signed the Executive Order, laying the groundwork for the initiative unveiled on Thursday, March 21, 2024. The initiative, part of the Biden-Harris administration’s comprehensive approach to responsible innovation, builds upon prior efforts, including voluntary commitments secured from 15 major corporations to prioritise the safe and secure development of AI technologies.

Central to the Executive Order is the mandate for developers of the most powerful AI systems to share safety test results and other critical information with the U.S. government. Under the Defense Production Act, companies developing foundation models that pose serious risks to national security or public health and safety must notify federal authorities and share the results of red-team safety tests, helping to ensure that AI systems are safe and trustworthy before their public release.

Furthermore, the Order tasks relevant agencies with establishing stringent standards, tools and tests to uphold the safety and security of AI systems. The National Institute of Standards and Technology (NIST) will spearhead the development of rigorous testing protocols, while the Department of Homeland Security will oversee their implementation in critical infrastructure sectors, bolstered by the creation of the AI Safety and Security Board. Concurrently, the Departments of Energy and Homeland Security will address AI-related threats to critical infrastructure and cybersecurity risks, marking unprecedented governmental efforts in advancing AI safety measures.

Moreover, recognising the potential misuse of AI in engineering hazardous biological materials, the Executive Order directs the formulation of robust screening standards for biological synthesis. Agencies funding life-science projects will incorporate these standards into federal funding criteria, incentivising comprehensive screening practices to manage risks exacerbated by AI technology.

Through these multifaceted actions, the government aims to foster an AI landscape that prioritises public welfare, security and trust, setting a global precedent for responsible AI innovation.

Protecting Americans from the risks associated with AI-enabled fraud and deception is a crucial step in safeguarding digital integrity. To achieve this, the executive order calls for the establishment of robust standards and best practices aimed at detecting AI-generated content and verifying the authenticity of official communications. The Department of Commerce is tasked with developing comprehensive guidance for content authentication and watermarking, ensuring clear labelling of AI-generated content. This initiative aims to instil confidence in Americans regarding the authenticity of government communications, setting a precedent for both the private sector and governments worldwide.

Moreover, the executive order emphasises the importance of bolstering cybersecurity through advanced AI capabilities. It calls for the development of an advanced cybersecurity programme focused on leveraging AI tools to identify and rectify vulnerabilities in critical software systems. Building upon the Biden-Harris administration’s AI Cyber Challenge, this programme seeks to harness AI’s transformative potential in enhancing the security of software and networks.

In addition to domestic efforts, the executive order mandates the creation of a National Security Memorandum to guide further actions concerning AI and security. This memorandum, to be developed by the National Security Council and White House Chief of Staff, aims to ensure the safe, ethical and effective use of AI by the U.S. military and intelligence community. It also directs measures to counter adversaries’ military utilisation of AI, thus maintaining strategic advantage and security.

While the executive order primarily focuses on U.S. initiatives, its impact extends globally, as evidenced by the adoption of a resolution by the U.N. General Assembly. The resolution serves as a guiding framework for all nations in navigating the complexities of AI governance. U.S. Ambassador Linda Thomas-Greenfield highlighted the significance of global consensus, emphasising the need to govern AI collectively rather than allowing it to dictate outcomes.

The collaborative nature of the resolution’s development underscores the shared recognition of AI’s profound implications across nations. Ambassador Thomas-Greenfield emphasised the rapid pace of technological advancement and its implications, particularly for developing countries. She stressed the importance of providing assurances regarding access to AI technologies, thereby fostering confidence and enabling global participation in shaping its governance.

Following the resolution’s adoption, representatives from various nations expressed enthusiastic support, echoing the sentiment that it marks a significant milestone in multilateral cooperation. Thomas-Greenfield reiterated the resolution’s importance as an initial step, acknowledging its role in laying the foundation for future governance frameworks.

“In a moment in which the world is seen to be agreeing on little, perhaps the most quietly radical aspect of this resolution is the wide consensus forged in the name of advancing progress,” Thomas-Greenfield told the assembly just before the vote.

“The United Nations and artificial intelligence are contemporaries, both born in the years following the Second World War,” she said. “The two have grown and evolved in parallel. Today, as the U.N. and AI finally intersect, we have the opportunity and the responsibility to choose as one united global community to govern this technology rather than let it govern us.”

The executive order and the subsequent U.N. resolution represent concerted efforts to address the challenges and opportunities presented by AI on both national and global scales. By prioritising cybersecurity, promoting transparency and fostering international collaboration, these initiatives aim to harness the potential of AI while mitigating associated risks, thereby shaping a more secure and inclusive digital future.

The ITU is spearheading a visionary approach to integrating AI into many aspects of our lives, foreseeing a future in which AI technologies wield significant influence. AI’s capacity to execute tasks with precision, learn and make informed decisions is evolving rapidly, challenging previous assumptions about the limits of automation.

ITU predicts that AI innovation will play a pivotal role in advancing the United Nations’ Sustainable Development Goals (SDGs), leveraging the vast amounts of data generated across multiple domains such as sentiment analysis, healthcare, commerce and communication. To facilitate this vision, ITU aims to provide a neutral platform for stakeholders from government, industry, and academia to collaboratively explore emerging AI capabilities, advocating for technical standardisation and policy guidance.

As Nigeria marks the first anniversary of its current administration, global attention will turn to Geneva, where technology leaders will gather to discuss AI governance. AI Governance Day, scheduled for May 29, will convene representatives from governments, businesses, academia, civil society and UN agencies to translate dialogue on AI governance into actionable strategies.

Following this, from May 30 to 31, the AI for Good Global Summit 2024 will unite global leaders and innovators in AI with the humanitarian community in Geneva. The summit aims to harness AI technology to foster sustainable development. Showcasing innovations in generative AI, robotics and brain-machine interfaces, the event will emphasise the technology’s role in addressing critical challenges such as climate action, accessibility, healthcare and disaster response.

Distinguished speakers, including prominent figures in AI, will explore recent advancements and deliberate on strategies to ensure that AI contributes positively to humanity. ITU Secretary-General Doreen Bogdan-Martin has emphasised the importance of the AI for Good Global Summit as a catalyst for inclusive growth and sustainable progress, and as a driving force towards achieving the UN Sustainable Development Goals by 2030.

“ITU’s yearly AI for Good Global Summit brings together a diverse set of voices to look at the latest AI developments and find ways to ensure this technology remains a force for good, driving inclusive growth and sustainable and equitable progress for all,” said Bogdan-Martin. “This summit and our year-round AI for Good platform are powerful tools for accelerating progress in our race to achieve the UN Sustainable Development Goals by 2030.”

ITU’s strategic initiatives underscore the transformative potential of AI in shaping the future of society. By fostering collaboration and dialogue among diverse stakeholders, ITU aims to steer AI development towards serving the common good, driving progress towards sustainable development goals, and ensuring equitable outcomes for all.

Sonny Aragba-Akpore