By Teresa Pace

OpenAI and Microsoft unite in efforts to thwart cyberattacks associated with state actors.

OpenAI, the creator of the AI chatbot ChatGPT, collaborated with its principal investor, Microsoft, to thwart five cyberattacks orchestrated by various malicious entities. While the two companies successfully disrupted these incidents, they acknowledged the difficulty of preventing every cyberattack.

In a report unveiled on Wednesday, Microsoft disclosed its surveillance of hacking groups associated with Russian military intelligence, Iran’s Revolutionary Guard, and the governments of China and North Korea, all of which have been exploring the utilization of AI large language models (LLMs) in their hacking endeavors.

LLMs leverage extensive text datasets to craft human-like responses. OpenAI disclosed that the five cyberattacks were traced back to two Chinese-linked groups, Charcoal Typhoon and Salmon Typhoon; Iran was implicated through Crimson Sandstorm, North Korea through Emerald Sleet, and Russia through Forest Blizzard. According to OpenAI, the groups attempted to leverage GPT-4 for various purposes, including researching companies and cybersecurity tools, debugging code, generating scripts, conducting phishing campaigns, translating technical papers, evading malware detection, and studying satellite communication and radar technology. Upon detection, the accounts associated with these activities were promptly deactivated.

The revelation coincided with OpenAI's implementation of a blanket ban on state-backed hacking groups exploiting AI technologies. In response to the surge of AI-generated deepfakes and scams following ChatGPT's launch, policymakers intensified their scrutiny of generative AI developers. In June 2023, OpenAI announced a $1 million cybersecurity grant program aimed at enhancing and assessing the efficacy of AI-driven cybersecurity technologies.

Despite OpenAI's cybersecurity efforts and the implementation of safeguards to prevent ChatGPT from generating harmful or inappropriate content, hackers have managed to circumvent these measures and manipulate the chatbot to produce such material.

Over 200 entities, including OpenAI, Microsoft, Anthropic, and Google, recently collaborated with the Biden Administration to establish the AI Safety Institute and the United States AI Safety Institute Consortium (AISIC). These initiatives stem from President Joe Biden’s executive order on AI safety issued in late October 2023, which seeks to promote the responsible development of AI, combat AI-generated deepfakes, and address cybersecurity challenges.


