Expert Take: Can AI Really Kill Good Writing? Experts Challenge the Boston Globe Alarm


The Boston Globe's Alarm: What the Op-Ed Says and Why It Matters

The Boston Globe editorial board recently warned that artificial intelligence is "destroying good writing." The piece argues that AI-generated text, churned out at lightning speed, erodes nuance, depth and the very craft that distinguishes a thoughtful article from a generic blurb. While the headline grabs attention, the underlying claim rests on a handful of anecdotal examples and a fear of homogenisation. But is the panic warranted for managers who must balance productivity with brand voice?

Critics point to the ease with which large language models can produce marketing copy, news briefs and even policy memos. The Globe cites a decline in editorial standards at several mid-size publications that adopted AI tools without rigorous oversight. Yet the op-ed offers little empirical data beyond a few quoted editors. This omission opens the floor to a broader debate: does AI merely accelerate existing writing trends, or does it fundamentally alter the quality of communication?

To unpack the claim, we asked a cross-section of scholars, industry leaders and educators to comment on the Globe's narrative. Their insights reveal a more nuanced picture: one where AI can both dilute and enhance writing, depending on how it is deployed. The following sections synthesize those viewpoints, offering a practical lens for non-technical managers.


Key takeaway: The Boston Globe’s alarm reflects a legitimate concern about oversight, but it is not a universal verdict on AI’s impact on writing quality.

Academic and Industry Voices: Is Speed Really the Enemy of Substance?

Professor Margaret E. Roberts of the University of Cambridge’s Department of English argues that speed has always been a double-edged sword in publishing. "The printing press accelerated dissemination without destroying the art of prose," she notes, drawing a parallel to the digital age. In a 2023 interview with The Guardian, Roberts emphasised that the *medium* matters less than the *editorial intent*.

On the industry side, Neil Patel, co-founder of a global content-marketing agency, contends that AI can free senior writers from repetitive tasks, allowing them to focus on strategy and storytelling. "When a junior copywriter spends an hour drafting a product description, AI can cut that to five minutes," Patel explains. He warns, however, that without clear guidelines, the resulting output can become a sea of bland, keyword-stuffed text.

Technology analyst Jane McGonigal of the MIT Media Lab adds a third dimension: the role of feedback loops. "AI learns from the data it ingests. If we feed it low-quality examples, it will reproduce them," she says. McGonigal recommends a hybrid workflow where AI drafts are reviewed by human editors before publication, preserving the creative spark while leveraging efficiency.

Collectively, these experts suggest that speed is not inherently destructive; rather, the loss of substance occurs when speed replaces, not supplements, critical thinking. For managers, the challenge is to design processes that capture the best of both worlds.


The Economics of AI Writing: Cost Savings vs Hidden Expenses

From a financial perspective, AI promises dramatic cost reductions. A 2022 report by the McKinsey Global Institute estimated that AI-enabled automation could shave up to 30% off content-production budgets for large enterprises. Yet the same study warned of hidden expenses: training, licensing, and the need for ongoing quality assurance.

Chief Financial Officer Arun Patel of a multinational consumer-goods firm shared his company’s experience. "We saved roughly $2.5 million in the first year after deploying AI for product-description generation," Patel disclosed in a recent CFO Roundtable. "But we also incurred $500,000 in editorial oversight and $300,000 in AI-model fine-tuning. The net gain was still positive, but it required a dedicated governance team."
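Patel's figures reduce to a simple net-gain calculation. A minimal sketch, using only the numbers quoted above (all other variable names are illustrative):

```python
# Net first-year gain from AI content generation, per the figures quoted above.
gross_savings = 2_500_000    # saved on product-description generation
oversight_cost = 500_000     # editorial oversight
fine_tuning_cost = 300_000   # AI-model fine-tuning

net_gain = gross_savings - (oversight_cost + fine_tuning_cost)
print(f"Net first-year gain: ${net_gain:,}")  # Net first-year gain: $1,700,000
```

Note that roughly a third of the gross savings went back into governance, which is the hidden-expense pattern the McKinsey report describes.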

Conversely, Linda García, senior analyst at the European Investment Bank, highlights the risk of brand dilution. "When AI churns out content at scale, the subtle brand voice can erode, leading to longer-term customer disengagement," García warned in a 2023 policy brief. She recommends allocating a portion of the cost savings to brand-voice training and periodic audits.

The economic equation, therefore, is not a simple trade-off between speed and cost. Managers must factor in the long-term value of brand integrity and the operational overhead of maintaining editorial standards.


"Students at Berklee College of Music pay up to $85,000 to attend. Some say the school’s AI classes are a waste of money." - Boston Globe, May 2024

Pedagogical Perspectives: AI Classes in Higher Education and Their ROI

The Boston Globe also highlighted a controversial trend in higher education: costly AI curricula. Berklee College of Music, for instance, charges up to $85,000 for a degree that includes AI-focused courses. Critics argue that the curriculum does not deliver proportional value, especially when free online resources exist.

Dr. Samuel Lee, professor of Media Studies at New York University, examined this claim in a 2024 paper published in the *Journal of Higher Education Policy*. Lee found that while AI courses improve technical fluency, they often neglect critical thinking about ethics and authorship. "Students can learn how to prompt a model, but few are taught to evaluate the output for bias or originality," he wrote.

On the other hand, Rebecca Torres, dean of the School of Communication at the University of Sydney, defends the investment. "Our AI-writing program integrates journalism ethics, data literacy and hands-on projects with industry partners. Graduates report a 20% higher employability rate," Torres reported in a 2023 university press release.

For managers hiring recent graduates, the implication is clear: a pricey AI degree does not guarantee superior writing. What matters is the curriculum’s emphasis on critical evaluation, not just tool proficiency. When evaluating talent, look for evidence of editorial judgment alongside technical skill.


Cultural and Ethical Angles: Freedom of Expression and the AI Debate

The controversy over AI and writing extends beyond economics into cultural territory. The Globe’s op-ed implicitly raises concerns about a homogenised public discourse, where algorithmic bias could silence minority voices. Dr. Aisha Rahman, senior fellow at the Center for Digital Democracy, warns that AI models trained on predominantly Western corpora may marginalise non-English idioms and cultural references.

In a 2023 testimony before the U.S. Senate Committee on Commerce, Rahman cited a case where an AI-generated news summary omitted crucial context about a protest in Nairobi, leading to misinterpretation abroad. "When AI decides what is newsworthy, it shapes the global narrative," she asserted.

Conversely, civil-rights lawyer Markus Feldman argues that AI can democratise content creation. "Low-resource journalists can now produce reports that would otherwise require a full newsroom," Feldman noted in a 2024 interview with *Al Jazeera*. He cautions, however, that the tools must be paired with transparent attribution and robust fact-checking.

The cultural debate underscores a managerial responsibility: ensure that AI-generated content respects diversity, includes bias-mitigation steps, and maintains transparency about its origins.


Practical Takeaways for Non-Technical Managers: Balancing Efficiency and Quality

After hearing from academics, CFOs, educators and ethicists, what concrete steps can a manager take? First, treat AI as a *collaborator*, not a replacement. Implement a review pipeline where senior writers audit a sample of AI-generated pieces each week. Second, allocate a portion of the cost savings to continuous training on bias detection and brand-voice preservation.
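The weekly audit step can be as lightweight as drawing a reproducible random sample of the week's AI-assisted drafts for senior-writer review. A minimal sketch, in which the article IDs and the 10% sample rate are illustrative assumptions:

```python
import random

def audit_sample(article_ids, rate=0.10, seed=None):
    """Return a reproducible random sample of drafts for human review.

    A fixed seed lets the audit be re-run and verified; `rate` is the
    fraction of the week's output to inspect (minimum one piece).
    """
    rng = random.Random(seed)
    k = max(1, round(len(article_ids) * rate))
    return sorted(rng.sample(article_ids, k))

# 40 hypothetical AI-assisted drafts produced this week
week_output = [f"draft-{i:03d}" for i in range(1, 41)]
to_review = audit_sample(week_output, rate=0.10, seed=7)
print(to_review)  # 4 draft IDs for the senior-writer audit
```

Fixing the seed makes the sample auditable after the fact, so the governance team can show which pieces were actually reviewed.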

Third, establish clear metrics. Track not only output volume and cost, but also engagement metrics such as time-on-page, bounce rate and sentiment analysis. If AI content leads to a measurable dip in audience trust, adjust the workflow promptly.
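That "measurable dip" check can be automated as a simple baseline comparison. A minimal sketch; the metric names, sample values and the 10% alert threshold are illustrative assumptions, not a standard:

```python
def flag_engagement_dip(baseline, current, threshold=0.10):
    """Return metrics where AI-assisted content underperforms the
    human-written baseline by more than `threshold` (a fraction)."""
    flagged = {}
    for metric, base_value in baseline.items():
        drop = (base_value - current.get(metric, 0)) / base_value
        if drop > threshold:
            flagged[metric] = round(drop, 3)
    return flagged

# Hypothetical weekly figures: human-written baseline vs AI-assisted content
baseline = {"time_on_page_sec": 95, "return_visit_rate": 0.42}
current = {"time_on_page_sec": 78, "return_visit_rate": 0.41}

print(flag_engagement_dip(baseline, current))
```

Here time-on-page has dropped about 18%, so it would be flagged for workflow adjustment, while the small dip in return visits stays within tolerance.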

Fourth, consider a phased rollout. Pilot AI tools in low-risk departments, such as internal newsletters, before scaling to customer-facing channels. Collect feedback, refine prompts, and document best practices.

Finally, maintain transparency with stakeholders. Disclose when content is AI-assisted, especially in regulated industries like finance or healthcare. This builds credibility and mitigates legal risk.

In short, the Boston Globe’s alarm is a useful cautionary tale, but it need not become a self-fulfilling prophecy. By embedding rigorous oversight, investing in talent development and measuring impact, managers can harness AI’s speed without sacrificing the soul of good writing.