Monday, March 30, 2026

Dubious AI Detectors Fuel ‘Pay-to-Humanise’ Scam, Experts Warn

(DDM) — Concerns are mounting over the growing use of unreliable artificial intelligence detection tools, as experts warn that some platforms are being used to drive a deceptive “pay-to-humanise” scam targeting unsuspecting users.

The controversy centers on so-called AI detectors that falsely flag genuine human-written content as AI-generated, creating panic among students, professionals, and content creators who rely on originality for academic, business, and publishing purposes.

According to digital analysts, these questionable tools often exaggerate or fabricate detection results, labeling authentic work as suspicious in order to push users toward paid services that promise to “humanise” or rewrite the content so it will pass the same detector.

Cybersecurity observers say the pattern suggests a deliberate strategy, where fear is used as a marketing tactic to generate revenue from individuals desperate to avoid penalties associated with AI-generated content.

“These platforms exploit uncertainty,” one analyst noted, explaining that many users do not fully understand how AI detection works, making them vulnerable to manipulation and misleading claims.

The issue has gained traction amid the rising use of AI tools like ChatGPT and Grammarly, which have transformed how people create and edit content. As institutions increasingly attempt to regulate AI usage, demand for detection tools has surged—creating opportunities for abuse.

Experts warn that there is currently no universally reliable method for accurately distinguishing between human-written and AI-generated text, especially as AI models become more sophisticated and capable of mimicking human writing styles.

As a result, false positives are common, with legitimate work being wrongly flagged, potentially exposing users to academic penalties, reputational damage, or unnecessary financial costs.

Consumer protection advocates are calling for stronger regulation and transparency in the AI detection space, urging authorities to scrutinize platforms that make unverifiable claims about their accuracy rates.

They also advise users to approach such tools with caution, emphasizing that no single detector should be treated as definitive proof of authorship. Instead, they recommend combining multiple methods, including human review and contextual analysis.

The emergence of the “pay-to-humanise” model highlights broader concerns about the commercialization of AI-related fears, as opportunistic actors capitalize on confusion surrounding new technologies.

Observers say the trend underscores the urgent need for digital literacy, as individuals and institutions navigate the evolving landscape of artificial intelligence and its implications for content creation and verification.

As awareness grows, experts stress that informed users are the best defense against such scams, urging the public to question suspicious claims and avoid platforms that pressure them into paying for unnecessary services.
