Governments are striving to implement safeguards for artificial intelligence, but challenges and hesitations hinder international consensus on key priorities and risks.

In November 2023, the United Kingdom released the Bletchley Declaration, in which 28 countries, including the United States and China, along with the European Union, committed to enhanced worldwide collaboration on AI safety.

Further efforts to frame AI safety regulations followed in May at the second Global AI Summit, where the U.K. and the Republic of Korea secured commitments from 16 leading global AI technology firms to a set of safety protocols that build on those earlier pledges.

“The Declaration achieves crucial objectives of the summit by setting up a collective understanding and accountability concerning the dangers and prospects, and advancing a path for international cooperation on cutting-edge AI safety and research, especially through intensified scientific partnerships,” the United Kingdom said in a statement accompanying the declaration.

The European Union’s AI Act, adopted in May, marks the first significant global regulation of AI. It carries enforcement powers and penalties, including fines of up to $38 million or 7% of annual worldwide revenue for entities that violate the regulation.

In response, a bipartisan group of U.S. senators urged Congress to allocate $32 billion in emergency funding for AI, releasing a report that stresses the need for the U.S. to harness AI’s benefits while mitigating its risks.

“It’s crucial for governments to engage with AI, especially regarding national security concerns. Capitalizing on AI’s potential while addressing its risks demands significant government effort and financial investment,” Joseph Thacker, a lead AI engineer and security expert at SaaS security firm AppOmni, told TechNewsWorld.

AI safety is becoming increasingly critical. Thacker noted that nearly all software, including AI services, is now delivered as software-as-a-service (SaaS), so the security and reliability of these SaaS systems are paramount.

“We need robust security measures for SaaS applications. Investing in SaaS security should be a top priority for any company developing or deploying AI,” he offered.

Existing SaaS vendors are incorporating AI into everything they offer, which increases the risk, and government agencies should take that into account, he maintained.

Thacker wants the U.S. government to adopt a quicker and more intentional stance in addressing the lack of AI safety standards. Nonetheless, he commended the dedication of 16 leading AI companies to prioritize the safety and responsible implementation of advanced AI models.

“This demonstrates a growing recognition of the risks associated with AI and a readiness to address these risks. However, the real test will come in how effectively these companies adhere to their promises and the transparency of their safety protocols,” he said.

Still, his praise fell short in two key areas: he saw no mention of consequences or of aligning incentives, both of which he called extremely important.

According to Thacker, requiring AI companies to publish safety frameworks shows accountability, which will provide insight into the quality and depth of their testing. Transparency will allow for public scrutiny.

“It may also force knowledge sharing and the development of best practices across the industry,” he observed.

Thacker also wants quicker legislative action in this space. However, he thinks significant movement will be challenging for the U.S. government in the near future, given how slowly U.S. officials usually move.

“A bipartisan group coming together to make these recommendations will hopefully kickstart a lot of conversations,” he said.

The Global AI Summit was a great step forward in safeguarding AI’s evolution, agreed Melissa Ruzzi, director of artificial intelligence at AppOmni. Regulations are key.

“But before we can even think about setting regulations, a lot more exploration needs to be done,” she told TechNewsWorld.

This is why it is so crucial for companies in the AI industry to join AI safety initiatives voluntarily, she added.

“Setting thresholds and objective measures is the first challenge to be explored. I don’t think we are ready to set those yet for the AI field as a whole,” said Ruzzi.

It will take more investigation and data to determine what these might be. Ruzzi added that one of the biggest challenges is for AI regulations to keep pace with technological developments without hindering them.

According to David Brauchler, principal security consultant at NCC Group, governments should consider looking into definitions of harm as a starting point in setting AI guidelines.

As AI technology becomes more commonplace, risk classification may shift away from basing an AI system’s risk on its training computational capacity, a standard that was part of the recent U.S. executive order.

Instead, the focus might move to the potential for harm an AI system could cause within its operational environment, a concern that recent legislative efforts already reflect, he noted.

“For instance, an AI mechanism managing traffic signals should include significantly more safety features compared to a shopping bot, regardless of the latter’s higher training computational demands,” Brauchler explained to TechNewsWorld.

Currently, there’s no definitive guide on where regulation of AI creation and deployment should focus. Governments should emphasize the actual effects on individuals when these technologies are deployed, rather than trying to foresee the long-term trajectory of such a swiftly evolving field, he remarked.

Should imminent threats arise from AI technologies, authorities can act once those risks are verified; legislating preemptively against hypothetical issues would be largely speculative, Brauchler said.

“However, by focusing on developing legislation that aims to prevent harm to individuals, we bypass the need to foresee the specific evolutionary paths AI might take,” he commented.

Thacker believes managing AI requires a delicate balance between regulation and oversight: the goal is to avoid quashing innovation with overly aggressive rules without depending excessively on corporate self-regulation.

“A regulatory approach that is minimally invasive but backed by strong oversight might be the best solution. It’s crucial that governments establish and enforce rules that encourage ethical innovation,” he explained.

Thacker also draws parallels between current efforts to regulate AI and the history of nuclear arms control, noting that supremacy in AI could give some nations considerable economic and military leverage.

“This encourages countries to quickly advance their AI technologies. Nevertheless, international collaboration on AI safety is more achievable than it was for nuclear arms, owing to the extensive network effects facilitated by the internet and social media,” he noted.

