Grammarly Backtracks on AI Feature That Mimicked User Writing Styles

The popular writing assistance platform Grammarly has abruptly disabled a new AI feature following intense backlash from users and the wider writing community. The feature, called “GrammarlyGO,” included an option that allowed it to analyze a user’s past writing and then generate new text in that person’s unique style.

While promoted as a productivity tool, the capability sparked immediate controversy. Critics argued it amounted to creating a digital clone of a person’s voice without their explicit, informed consent. For writers, journalists, and content creators, a distinctive style is a core part of their professional identity and personal brand. The idea that an AI could replicate and deploy it with a click was seen as a profound violation.

The backlash was swift and severe across social media and tech forums. Users expressed discomfort and anger, labeling the feature invasive and unethical. The core complaint was about permission and ownership. Grammarly had essentially created a mechanism to impersonate users based on data they had uploaded to the platform for a different purpose entirely: grammar and clarity checking.

The company initially defended the feature but ultimately pulled it offline within days. In a statement, Grammarly acknowledged the misstep, saying, “We hear the feedback and recognize we fell short on this.” The company confirmed that the style-mimicking component has been removed from GrammarlyGO, while the rest of the AI suite remains active.

This incident serves as a critical case study for the crypto and web3 community, where digital ownership, consent, and authenticity are paramount. It highlights the growing tension between innovative AI applications and individual agency over one’s digital footprint.
In a web3 context, such a feature would likely require a clear, on-chain permissions framework, potentially governed by the user through a wallet or a smart contract agreement, rather than a default setting buried in the terms of service. The Grammarly situation underscores a fundamental question as AI becomes more personal: who controls the digital twin?

For an industry built on self-custody and verifiable ownership, the answer seems clear. Users must have sovereign control over how their data, including their creative output and personal style, is used to train or power AI models. This episode will likely fuel further development in decentralized identity solutions and AI training data markets where consent is negotiated, not assumed.

Grammarly’s retreat shows that even in mainstream web2 applications, users are drawing a hard line against opaque data practices. The expectation is shifting toward explicit opt-in models, especially for sensitive features like personality replication. As AI continues to evolve, the companies that succeed will be those that build trust through transparency and user control, principles that are native to the blockchain ethos. The tech may be advanced, but the lesson is simple: do not impersonate your users without their clear, revocable blessing.
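To make the “explicit, revocable opt-in” idea concrete, here is a minimal sketch of a scoped consent registry. Everything in it is hypothetical: the `ConsentRegistry` class, the scope names, and the method signatures are illustrative assumptions, not any real Grammarly or web3 API, and an on-chain version would live in a smart contract rather than application memory.

```typescript
// Hypothetical sketch of a revocable, scoped consent registry.
// Assumption: each sensitive capability is a named scope that is
// OFF by default and must be explicitly granted by the user.

type Scope = "grammar_check" | "style_mimicry" | "model_training";

interface ConsentGrant {
  scope: Scope;
  grantedAt: number;   // Unix ms timestamp of the opt-in
  revokedAt?: number;  // set when the user withdraws consent
}

class ConsentRegistry {
  private grants = new Map<string, ConsentGrant[]>(); // userId -> grants

  // Consent is an explicit user action, never a default setting.
  grant(userId: string, scope: Scope): void {
    const list = this.grants.get(userId) ?? [];
    list.push({ scope, grantedAt: Date.now() });
    this.grants.set(userId, list);
  }

  // Revocation takes effect immediately for all future checks.
  revoke(userId: string, scope: Scope): void {
    for (const g of this.grants.get(userId) ?? []) {
      if (g.scope === scope && g.revokedAt === undefined) {
        g.revokedAt = Date.now();
      }
    }
  }

  // A feature must pass this gate before touching the user's data.
  isAllowed(userId: string, scope: Scope): boolean {
    return (this.grants.get(userId) ?? []).some(
      (g) => g.scope === scope && g.revokedAt === undefined
    );
  }
}

// Style mimicry stays off until the user opts in, and turns off again
// the moment they revoke.
const registry = new ConsentRegistry();
console.log(registry.isAllowed("alice", "style_mimicry")); // false

registry.grant("alice", "style_mimicry");
console.log(registry.isAllowed("alice", "style_mimicry")); // true

registry.revoke("alice", "style_mimicry");
console.log(registry.isAllowed("alice", "style_mimicry")); // false
```

The design choice worth noting is that the gate defaults to deny: a feature that forgets to request consent simply never runs, which is the opposite of the buried-in-terms-of-service default the article criticizes.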

