Friday, April 3, 2026

Google Says Hackers Attempted to Clone Gemini Using 100,000 Prompts

Google has disclosed that its flagship artificial intelligence model, Gemini, was targeted in a large-scale attempt to replicate its capabilities through a technique known as model extraction.

According to the company, attackers submitted more than 100,000 carefully structured prompts in an effort to reverse-engineer how the AI system reasons and generates responses. The incident has raised fresh concerns about the security of advanced AI systems and the growing threat of intellectual property theft in the sector.

Suspicious Activity Detected

Google’s security teams identified unusual usage patterns involving repeated and highly structured queries sent to Gemini. Unlike typical user prompts, the queries were strategically designed to probe the model’s reasoning processes and decision-making patterns.

By collecting and analysing a large number of responses, attackers can potentially build a dataset sufficient to train a separate AI system that mimics the behaviour of the original model.

Google said it detected and blocked the activity before significant damage occurred. The company also confirmed that there was no breach of its core systems and no compromise of user data.

Understanding Model Extraction

Model extraction — sometimes referred to as “distillation” — does not involve hacking into servers or stealing source code. Instead, it relies on interacting with an AI system through legitimate interfaces such as APIs or chat platforms.

In such attacks, individuals submit thousands of targeted prompts, gather the responses, and use them to train a secondary “student” model designed to approximate the original system’s capabilities.
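The mechanics described above can be illustrated with a minimal sketch. The `query_model` function below is a hypothetical stand-in for a real API call (the article does not describe any specific interface); the point is simply how prompt/response pairs accumulate into a training set for a "student" model.

```python
# Illustrative sketch of how a model-extraction ("distillation") dataset
# is assembled: many structured prompts are sent through a legitimate
# interface, and each prompt/response pair is stored as a training
# example for a secondary "student" model.

def query_model(prompt: str) -> str:
    # Hypothetical placeholder for a call to the target model's API.
    return f"response to: {prompt}"

def build_distillation_dataset(prompts):
    """Collect (prompt, response) pairs for later student-model training."""
    dataset = []
    for prompt in prompts:
        response = query_model(prompt)
        dataset.append({"prompt": prompt, "response": response})
    return dataset

# Structured probes aimed at one capability, e.g. step-by-step reasoning.
probes = [f"Explain step {i} of long division." for i in range(3)]
pairs = build_distillation_dataset(probes)
print(len(pairs))  # one training example per probe
```

At the scale Google describes, over 100,000 such probes, the resulting dataset could be large enough to approximate the original model's behaviour on the targeted capabilities.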

As AI models become more valuable and expensive to develop, the risk of such attempts increases. Training large language models like Gemini requires billions of dollars in computing infrastructure, research, and specialised talent. Successfully replicating similar behaviour at lower cost could erode a company’s competitive advantage.

Broader Industry Implications

The incident highlights a shifting security landscape in artificial intelligence. Unlike traditional cyberattacks that seek unauthorised access, model extraction attempts operate within normal usage channels, making detection more complex.

While Google did not identify the source of the attempt, reports suggest that such efforts are often commercially motivated rather than state-sponsored.

The company said it is strengthening its monitoring systems to better identify behaviour indicative of model extraction, including unusual prompt patterns and abnormal query volumes.
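One of the signals mentioned, abnormal query volume, can be sketched in a few lines. This is a generic illustration, not Google's actual method: it flags any client whose query count dwarfs the median across all clients, a simple way to surface the kind of high-volume, automated probing described above.

```python
# Hedged sketch of volume-based anomaly detection: flag clients whose
# query counts are a large multiple of the median across the population.
# The threshold and figures are illustrative only.

def flag_abnormal_volumes(queries_per_client, multiple=10):
    """Return client IDs whose query count exceeds `multiple` x the median."""
    counts = sorted(queries_per_client.values())
    median = counts[len(counts) // 2]
    return [
        client for client, n in queries_per_client.items()
        if n > multiple * median
    ]

volumes = {"user_a": 120, "user_b": 95, "user_c": 110, "scraper": 100_000}
print(flag_abnormal_volumes(volumes))  # ['scraper']
```

In practice, providers would combine volume signals with content-level checks, such as detecting unusually systematic prompt patterns, since sophisticated extraction attempts can be spread across many accounts.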

As competition intensifies in the AI race, companies are increasingly focusing not only on building more advanced systems but also on protecting them from replication.

Google’s disclosure underscores a growing reality: in the AI era, innovation and cybersecurity are becoming inseparable.
