The Digital Ghostwriters That Sparked A Revolt
Imagine opening a document to find a "ghost" of Stephen King or Carl Sagan critiquing your grammar.
This was the reality for users of Grammarly’s Expert Review, a feature designed to offer feedback through the AI-simulated personas of famous scholars, journalists, and authors.
However, the prestige of the roster was quickly overshadowed by an uncomfortable truth: the experts being mimicked had no idea they were part of the product.
By Wednesday, 11 March 2026, the backlash reached a breaking point, forcing parent company Superhuman to pull the plug on the controversial tool.
The feature, which arrived last summer as Grammarly sought to evolve into a platform for AI productivity agents, allowed users to receive feedback modeled on specific professional styles.
While the company claimed the AI relied on "publicly available information," critics argued the tool was essentially strip-mining the identities and hard-earned mannerisms of writers without consent.
The tension escalated when it was discovered that even deceased figures were being "consulted" by the algorithm, leading to accusations of identity theft and digital necromancy.
Why Did Grammarly Use Names Without Consent?
The controversy centered on a fundamental lack of transparency.
Rather than partnering with the experts, Grammarly implemented an "opt-out" policy, meaning writers were included by default and had to manually request their removal.
This approach backfired spectacularly among the writing community.
James Bareham, former creative director at The Verge, voiced the frustration of many on Bluesky, stating,
“I'm no lawyer, but I think 'We're going to keep stealing your stuff until you tell us you don't want us to steal your stuff' isn't quite the defense Grammarly thinks it is—at least not in the court of public opinion.”
Source: Bluesky
The technical execution of the feature also faced scrutiny.
Reports surfaced that some AI-generated suggestions linked out to spam sites or cited outdated job titles for the journalists being impersonated.
Even more bizarrely, the tool would offer "expert" advice on meaningless placeholder text like lorem ipsum.
Author Benjamin Dreyer mocked the company’s generous opt-out offer, writing,
“But in the meantime, if I can cause some corporate shyster a few moments' worth of agita, I will feel as though my hard work ain't been in vain for nothin’.”
Source: Bluesky
How Many Writers Were Turned Into AI Against Their Will?
The list of "experts" read like a Who’s Who of modern media and science, including names like Neil deGrasse Tyson, Kara Swisher, and Mark Gurman.
For many, the discovery was a shock.
Tech journalist Casey Newton, whose own digital likeness was found dispensing advice, titled a scathing blog post, “Grammarly turned me into an AI editor against my will and I hate it.”
He noted the irony of the situation, writing,
“I’ve long assumed that before too long, AI might take my job. I just assumed that someone would tell me when it happened.”
The backlash has already moved into the legal arena.
Investigative journalist Julia Angwin filed a lawsuit, expressing her distress at discovering a company was "selling an imposter version" of her expertise.
Faced with mounting legal pressure and a PR disaster, Superhuman CEO Shishir Mehrotra issued an apology on LinkedIn, admitting that the company "missed the mark."
He stated,
“Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices.”
Can AI Productivity Tools Survive Without Exploiting Identity?
Grammarly, which serves roughly 40 million users and 500,000 organizations, is currently fighting to maintain its relevance in a market dominated by ChatGPT and Claude.
The acquisition of Coda Project Inc. in 2024 was intended to pivot the company toward more sophisticated AI agents, but the Expert Review failure suggests a disconnect between technological ambition and ethical boundaries.
Ailian Gan, Grammarly’s director of product management for agents, told Decrypt that the feature is being redesigned to ensure experts have "real control" over their representation.
While the feature is disabled for now, the incident has left a bitter taste in the mouths of creators.
Kara Swisher’s response was perhaps the most direct, as she warned,
“You rapacious information and identity thieves better get ready for me to go full McConaughey on you. Also, you suck.”
The company’s $12-a-month Pro subscribers may see a redesigned version in the future, but the trust between AI developers and the human creators they emulate has been significantly damaged.
The Ethical Boundary Between Inspiration And Impersonation
Coinlive suggests that the current AI trajectory requires a shift from parasitic data harvesting toward a model of symbiotic partnership.
When AI begins to output words under a specific person’s name that were never actually written or spoken by them, it ceases to be a tool and becomes a vehicle for deception.
There is a thin, dangerous line between "style transfer" and outright forgery.
If the industry continues to value "publicly available data" over the human right to identity, it risks turning the internet into a hall of mirrors where no voice can be trusted.
AI should function as a supportive assistant that empowers the user, not as an imposter that misleads the public by wearing a stolen mask of expertise.