
OpenAI announced on Wednesday that it has disrupted more than 20 operations and deceptive networks around the world that attempted to exploit its platform for harmful purposes since the beginning of the year.

This malicious activity ranged from debugging malware to crafting articles for various websites, generating social media biographies, and fabricating AI-generated profile images for fraudulent accounts on X.

“While malicious actors are continuously adapting and testing our models, we have not found substantial evidence indicating they have made significant advances in their capacity to create notably new malware or establish widespread audiences,” stated the AI company.

Additionally, OpenAI reported that it countered efforts generating social media content associated with elections in the United States, Rwanda, and to a lesser extent, India and the European Union. None of these networks succeeded in gaining viral traction or maintaining active audiences.

This includes activity carried out by an Israeli commercial entity known as STOIC (also referred to as Zero Zeno), which generated social media commentary about the Indian elections, as Meta and OpenAI previously highlighted this May.

OpenAI outlined several cyber operations, including:

  • SweetSpecter, a suspected China-based adversary, which used OpenAI’s services for large language model (LLM)-informed reconnaissance, vulnerability research, scripting assistance, anomaly detection evasion, and development. The group has also been reported conducting unsuccessful spear-phishing attacks against OpenAI employees in an attempt to deliver the SugarGh0st RAT.
  • Cyber Av3ngers, a group linked to the Iranian Islamic Revolutionary Guard Corps (IRGC), which used the models for research into programmable logic controllers.
  • Storm-0817, an Iranian threat actor, which used the models to debug Android malware capable of harvesting sensitive data, build tooling to scrape Instagram profiles via Selenium, and translate LinkedIn profiles into Persian.

Furthermore, the company stated that it took measures to disable several clusters, including an influence operation named A2Z and Stop News, whose accounts generated English- and French-language content intended for publication on various websites and social media outlets.

“[Stop News] was notably prolific in deploying imagery,” remarked researchers Ben Nimmo and Michael Flossman. “Many of its web articles and tweets were paired with images produced using DALL·E, typically in a cartoonish style featuring vibrant color schemes or dramatic tones to capture attention.”

Two additional networks identified by OpenAI, Bet Bot and Corrupt Comment, were found to be using its API to generate conversations with users on X and to fabricate comments that were subsequently posted on the platform.

The announcement comes nearly two months after OpenAI took action against a set of accounts linked to an Iranian covert influence operation named Storm-2035, which used ChatGPT to generate content on various subjects, including the upcoming U.S. presidential election.

“Threat actors typically leveraged our models to carry out tasks in an intermediary phase of activity—after acquiring fundamental tools such as internet access, email accounts, and social media profiles, but before launching ‘finished’ products, like social media posts or deploying malware via a myriad of distribution channels,” Nimmo and Flossman noted.

According to a recent report from cybersecurity firm Sophos, generative AI could also be misused to spread tailored misinformation through microtargeted emails.

This involves using AI models to create political campaign websites, generate AI-crafted personas across the political spectrum, and compose email messages tailored to individuals based on specific campaign topics, enabling a new level of automation that can propagate misinformation at scale.

“This allows users to create anything from innocuous campaign material to deliberate misinformation and malevolent threats with minimal adjustments,” noted researchers Ben Gelman and Adarsh Kyadige.

“It’s possible to link any genuine political movement or candidate with supporting any policy, even if there’s no agreement. This kind of intentional misinformation can lead individuals to support a candidate they do not actually favor or cause them to resist one they previously endorsed.”
