Standardising Regulations For Safe, Ethical Artificial Intelligence


In the ever-evolving landscape of technology, the role of artificial intelligence (AI) has become increasingly prominent, promising transformative benefits across various sectors. However, with this surge in AI capabilities, concerns about standardisation, safety and ethical regulations have taken centre stage. This article delves into the global efforts to address these concerns, with a particular focus on recent developments in the United States, the United Kingdom and the upcoming AI for Good Global Summit in Geneva.

The International Telecommunication Union (ITU) set the stage for the conversation on AI standardisation, safety and regulatory processes with its Artificial Intelligence for Good Global Summit held in Geneva, Switzerland, on July 6-7, 2023. This summit aimed to bring together key stakeholders, including representatives from governments, companies, academia, civil society and UN agencies, to chart a course for the responsible development and deployment of AI.

Following the Geneva summit, similar gatherings worldwide, including one in Dubai, UAE, have contributed to shaping the global direction for AI standardisation. On May 29, 2024, as Nigeria marks its first year under a new administration, global technology leaders are expected to convene once again in Geneva for AI Governance Day. This event, as outlined in ITU documents, seeks to transform dialogues on AI governance into impactful actions, emphasising collaboration across diverse sectors.

These international initiatives underscore the global recognition of the need for cohesive, ethical and universally accepted guidelines to govern the development and deployment of AI technologies.

On October 30, 2023, U.S. President Joe Biden signed an Executive Order (EO) introducing comprehensive measures to ensure the safety, security and trustworthiness of AI systems. Under the Defense Production Act, the EO mandates that developers of powerful AI systems share safety test results and critical information with the U.S. government.

One of the pivotal aspects of the EO is the emphasis on developing standards, tools and tests to guarantee the safety of AI systems. The National Institute of Standards and Technology (NIST) will play a key role in setting rigorous standards for extensive red-team testing before the public release of AI systems. The Department of Homeland Security will apply these standards to critical infrastructure sectors and an AI Safety and Security Board will be established to oversee these measures.

The EO also addresses the risks associated with using AI to engineer dangerous biological materials. It calls for the development of robust standards for biological synthesis screening to mitigate potential risks exacerbated by AI.

Furthermore, the EO aims to protect against AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content. The Department of Commerce will develop guidance on content authentication and watermarking, ensuring that AI-generated content is clearly labelled. This initiative sets a precedent for government-led efforts to combat deceptive practices enabled by AI.

The overarching goal of these measures is to harness AI’s potential while ensuring safety, security and ethical use. The comprehensive approach adopted by the U.S. government represents one of the most significant actions taken globally to advance the field of AI safety.

In the United Kingdom, efforts to address AI challenges extend beyond regulatory measures. The establishment of the Office for Artificial Intelligence, now part of the AI Policy Directorate in the Department for Science, Innovation and Technology, reflects the UK’s commitment to fostering a conducive environment for AI development.

Moreover, on November 15, 2023, the UK government announced £17 million in funding for scholarships on AI and data science conversion courses. This funding aims to support underrepresented groups, including women, black individuals, people with disabilities and those from disadvantaged socioeconomic backgrounds, in entering the AI industry.

Companies are encouraged to contribute to the funding, creating additional scholarships for AI and data science conversion courses. The focus on diversity and inclusivity in the tech industry is crucial for building a workforce that reflects a wide range of perspectives and experiences.

This initiative aligns with the broader goal of strengthening the UK’s position as a global leader in Artificial Intelligence. By providing scholarships for AI and data science conversion courses, the government seeks to accelerate the entry of underrepresented groups into the AI job market, thereby enriching the talent pool and driving innovation.

From May 30 to 31, 2024, the AI for Good Global Summit will take place in Geneva, Switzerland, bringing together global leaders, innovators and the humanitarian community. This summit serves as a platform to explore how new AI technologies can drive sustainable development across various domains.

The event will showcase innovations in generative AI, robotics and brain-machine interfaces, highlighting their potential to accelerate progress in climate action, accessibility, health and disaster response. The summit will feature discussions by AI luminaries, masterclasses covering a range of AI topics and exhibitions showcasing cutting-edge AI demos.

ITU’s AI for Good Global Summit, co-convened with 40 partner UN agencies, plays a vital role in identifying practical applications of AI to achieve the UN Sustainable Development Goals. The discussions at the summit foster international cooperation, innovation and equitable access to new technologies.

In addition to talks by AI thought leaders, the summit will host machine learning masterclasses covering topics from deepfakes and climate change to brain-machine interfaces. Start-ups, young innovators and creatives will have the opportunity to showcase their ideas at the AI for Good Innovation Factory Grand Finale, Robotics for Good Youth Challenge and Canvas of the Future art contest.

The exhibition space will feature cutting-edge demos, including AI for accessibility, collective drone swarms, bio-inspired rescue robots, a RoboCup robot football tournament, performance-boosting exoskeletons and AI-inspired art. Notably, the exhibition will include demos of brain-machine interfaces, a groundbreaking AI advancement with the potential to open new frontiers in neurotechnology.

A press conference on brain-machine interfaces will provide insights into technologies enabling mind-controlled movement and communication for persons with disabilities. This highlights the intersection of AI and neurotechnology, showcasing how AI advancements can positively impact human performance, mental health and overall well-being.

The global initiatives in the United States, the United Kingdom and the international community emphasise the multifaceted approach needed to address the challenges and opportunities presented by AI. As the world navigates the complex terrain of AI development, these concerted efforts aim to establish a framework that ensures the responsible, ethical and safe use of AI technologies for the benefit of humanity. The AI for Good Global Summit stands as a testament to the collaborative spirit driving progress in AI, with the hope of shaping a future where technology serves as a force for good.

Sonny Aragba-Akpore