Cybersecurity study reveals data security lapses among AI leaders

An agent at the operational center of the French National Cybersecurity Agency (ANSSI) checks data on a computer in Paris. AFP via Getty Images

As companies continue to experiment with and adopt generative artificial intelligence technology, data security should be an essential consideration in the tool or partner selection process. This is particularly true in the sports industry, where internal processes and public-facing applications are ripe for streamlining through the use of AI.

Underscoring this point, cybersecurity research firm Cybernews released the results of a recent study on the data security of 10 leading large language model providers, including OpenAI, Anthropic, Perplexity and DeepSeek. Among its findings, the study determined that all 10 analyzed companies had SSL/TLS configuration vulnerabilities, and five of the 10 had recorded data breaches.
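Cybernews has not published its exact test tooling, but the kind of SSL/TLS configuration check the study describes can be sketched with Python's standard library. The `probe_tls` and `is_weak` helpers below are illustrative assumptions for this article, not the firm's actual methodology:

```python
import socket
import ssl

# Protocol versions that configuration audits generally count as weak
WEAK_TLS_VERSIONS = {"SSLv2", "SSLv3", "TLSv1", "TLSv1.1"}

def is_weak(version: str) -> bool:
    """Return True for TLS/SSL protocol versions considered deprecated."""
    return version in WEAK_TLS_VERSIONS

def probe_tls(host: str, port: int = 443, timeout: float = 10.0) -> dict:
    """Connect to a host and report the negotiated protocol and cipher."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            version = tls.version() or "unknown"
            return {
                "host": host,
                "version": version,          # e.g. "TLSv1.3"
                "weak": is_weak(version),
                "cipher": tls.cipher()[0],   # negotiated cipher suite name
            }
```

A real audit would go further (certificate chain validation, cipher-suite enumeration, header checks), but even this minimal probe flags servers still negotiating deprecated protocol versions.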

OpenAI racked up the most data breaches among the analyzed companies, with 1,140 such incidents recorded. OpenAI also scored the second-lowest of the group in Cybernews’ Business Digital Index with a D (“high risk”) grade; only Inflection AI scored lower. AI21 Labs, Perplexity, Anthropic, Mistral AI and Cohere, meanwhile, all scored As (low risk), while DeepSeek logged a C (moderate risk).

The Cybernews study flagged practices at the LLM providers such as password reuse, submitting prompts containing data through personal accounts, and reliance on cloud-hosted systems as potential risk amplifiers when it comes to cyberattacks.

“Without strong cybersecurity practices,” the study reads, “every LLM tool integrated into workflows can become a new entry point for attackers.”


