embassyreport
Technology

Writing Tool Removes AI Personas After Legal Challenge from Authors

By admin | March 12, 2026

Grammarly has disabled an AI feature that mimicked the writing styles of prominent authors and scientists without their consent, following a legal challenge from the writers whose identities were used. The Expert Review function, which offered writing feedback “inspired by” the personas of figures including Stephen King and Carl Sagan, was taken down this week by Superhuman, the tech firm that operates Grammarly. The move came after a multi-million-dollar lawsuit was filed in the Southern District of New York by investigative journalist Julia Angwin and other writers who discovered their names and professional reputations being marketed as commercial AI personas. Superhuman’s chief executive acknowledged the tool had “misrepresented” the experts’ voices and apologized for the controversial feature.

The Element That Ignited Outrage

The Expert Review capability marked a major shift from Grammarly’s traditional approach to editing support. Rather than providing standard feedback, the tool let users receive editing suggestions “inspired by” the characteristic voices of celebrated writers and academics. Users could pick from personas including acclaimed novelist Stephen King and distinguished scientist Carl Sagan, among hundreds of other public figures. The feature claimed to offer customized writing advice filtered through the lens of these experts, ostensibly enabling users to refine their work by studying the best in their respective fields.

What Grammarly promoted as a cutting-edge educational tool quickly revealed itself as a troubling unauthorized use of identity and intellectual property. The company had failed to obtain permission from any of the writers whose personas were being replicated and commercialized. Journalist Julia Angwin, who led the class action, voiced alarm at discovering her professional identity being marketed as a product feature. She described the situation as fundamentally different from traditional deepfakes, stressing that her editorial skill constitutes her livelihood and that she had never imagined her professional expertise could be taken and packaged this way.

  • AI personas emulated hundreds of writers without consent or compensation
  • Feature offered feedback inspired by renowned writers and researchers
  • Users could pick distinct professional personas for revision feedback
  • Tool was incorporated into Grammarly’s premium subscription services

Court Action and Sector Response

The legal dispute against Superhuman and Grammarly marks a significant moment in the wider conversation over AI ethics and copyright protections. Led by investigative journalist Julia Angwin, the class-action lawsuit filed in the Southern District of New York alleges that the company unlawfully misappropriated the identities of hundreds of writers to generate revenue from its paid subscription service. The filing maintains that using names and professional standing for commercial purposes without clear permission violates existing legal protections against unauthorized exploitation of a person’s name and likeness.

The reaction to the lawsuit has been rapid and significant. Within 24 hours of filing, Angwin’s legal team reported hearing from over 40 potential plaintiffs eager to join the action, illustrating broad apprehension among affected writers. The case pursues damages surpassing $5 million, though legal experts indicate the real amount could be substantially greater once the court determines compensation based on the company’s revenue generated by the disputed feature. Superhuman’s quick move to deactivate the Expert Review function indicates the company understood the reputational and legal risks created by continuing the feature.

The Court Case Details

The lawsuit specifically contends that Grammarly and Superhuman infringed upon essential safeguards of identity protection by attributing editorial advice to writers who never provided such direction. The legal filing underscores that the firm monetized these identities through its paid membership structure, earning income through the unlicensed use of hundreds of individuals’ names and career standing. Attorneys contend this represents a “brazen violation of the law,” pointing to existing legal precedents defending individuals from commercial misappropriation of their identity without approval.

Julia Angwin’s personal dissatisfaction with the feature extended beyond the unlawful use of her identity to the quality of the AI’s output. She described the proposed edits attributed to her as a “slopperganger”, a term denoting low-quality AI-generated content, observing that the edits made sentences worse rather than better. This dimension of the case underscores not only the unlawful conduct but also the reputational damage of having one’s identity associated with substandard professional work, compounding the harm of unauthorized identity use.

  • Damages sought exceed $5 million with actual figure based on company earnings
  • Over 40 additional plaintiffs reached out to law firm in the initial day
  • Claims unlawful commercial use of identities lacking consent or compensation

Reliability Problems and Trust Deficiencies

Beyond the legal violations, the Expert Review function raised serious questions about the dependability and precision of AI-generated editorial advice. Users relying on suggestions credited to recognized authors and scholars had no means to determine whether they were receiving genuine guidance or algorithmically-generated approximations of expert knowledge. This erosion of trust extends beyond individual plaintiffs to the wider writing sector, where readers and students might have reasonably assumed they were getting guidance from established authorities. The removal of the feature highlights a significant disconnect between what AI can technically accomplish and what it ought to be allowed to do from an ethical standpoint.

The reputational damage suffered by the writers whose identities were used proved particularly harmful because it tied their names directly to low-quality output. Angwin’s case illustrated the issue: her name and reputation were being promoted as a premium offering while the tool concurrently delivered inferior editorial guidance. This combination of unauthorized use and poor quality created a double injury: loss of control over her name, plus association with substandard work that contradicted her professional standards. For writers whose reputation depends on the caliber of their output, such false attribution poses an existential risk to their credibility and market value.

The Issue with AI Imitation

The central weakness in Grammarly’s strategy lay in trying to reproduce the subtle discernment and mastery of seasoned professionals through algorithmic processes. Skilled revision necessitates context-specific knowledge, attention to style, and years of accumulated experience—elements that cannot truly be reproduced by studying written work and generating responses in a similar voice. Angwin’s point that the automated suggestions made sentences unnecessarily intricate rather than improving them revealed the fundamental emptiness of the imitation. The technology could reproduce superficial style elements but lacked the deeper comprehension necessary to provide truly useful editing advice, ultimately undermining both the credibility of the personas and the usefulness of the platform itself.

Company Response and Path Forward

Superhuman’s chief executive Shishir Mehrotra acknowledged the misstep publicly, releasing an apology on LinkedIn in which he conceded that the Expert Review function had “misrepresented” the voices of the impersonated experts. The company’s swift decision to turn off the feature this week suggests an attempt to mitigate further legal and reputational damage. However, the removal came only after the lawsuit was filed and significant public backlash emerged, raising questions about whether the company would have acted without external pressure. Mehrotra’s statement, while apologetic in tone, did not address the broader question of how such a feature was approved and deployed in the first place, nor did it detail specific steps to prevent similar incidents in the future.

The road ahead for Grammarly remains uncertain as the litigation proceeds. Beyond the immediate legal issue, the company faces the challenge of rebuilding trust with both writers and users who may now question the ethical principles governing its AI development. The removal of the Expert Review function demonstrates a reactive rather than proactive stance, suggesting the company is responding to legal pressure rather than showing a genuine commitment to ethical AI practices. Moving forward, Grammarly will likely need to establish stricter consent protocols and monitoring systems for any tools involving the use of real people’s names or likenesses. The company’s handling of this controversy may set a precedent for how other AI firms address the use of real public figures in their generative systems.

Timeline
  • August 2025: Grammarly integrates generative-AI tools, including the Expert Review function
  • Recent weeks: Writers and experts discover their personas being used without consent
  • This week: Class-action lawsuit filed by Julia Angwin in the Southern District of New York
  • This week: Superhuman disables the Expert Review feature; CEO issues a public apology

The speed at which Grammarly deactivated the feature indicates the company recognized the legal and reputational stakes involved. However, the lack of proactive measures before the lawsuit demonstrates that internal review processes missed the violations of ethical standards. As the legal proceedings continue, the company might experience increased oversight regarding how many writers were affected and whether compensation will be offered above what the lawsuit requires. The case is expected to shape how other AI companies address the use of real identities in their products going forward.
