The need for a keen editorial eye has thus far endured the ever-advancing capabilities of Microsoft Word’s spelling and grammar aids and their functional cousins, such as Grammarly. In the last year, however, the emergence of ChatGPT has raised serious questions about how it might coexist with, or replace, editors. After all, ChatGPT offers help not only with grammar but also with structure, flow, and tone, according to Sabrina Ortiz of ZDNET.
Reassuringly, a theme that emerged from presenters at the 2023 American Copy Editors Society Conference was that artificial intelligence (AI)-based editorial tools such as ChatGPT were still very much in need of adult supervision. Their sentiments echoed those of MIT economics professor David Autor, who argues in a lengthy examination of AI’s effects on work and the middle class that “as AI’s facility in expert judgment becomes more reliable, incisive and accessible in the years ahead…its primary role will be to advise, coach and alert decision-makers as they apply expert judgment.” Human editors are still best suited to assess the overall tone and spirit of a piece of writing. It is difficult, at least at present, for AI to mimic our ability to predict how something we write might be received by different audiences. It cannot adjust for the need for tact or assertiveness the way we can. Still, conference presenters encouraged editors to keep an open mind: for less nuanced functions, such as spell checking or generating options for rewriting individual sentences, AI could save us time. AI could handle our “light work,” leaving editors more time to engage with higher-level concerns that AI might treat, well, robotically.
I thought of all this while reading a recent article about how teachers are embracing ChatGPT-enhanced assessments of writing assignments. According to the article, well-known textbook company Houghton Mifflin Harcourt (HMH) has purchased Writable, a company whose software “enable[s] teachers to incorporate AI-suggested feedback and scores into their instruction.” Teachers will run student assignments through Writable, which uses ChatGPT to “offer comments and observations to the teacher.” The teacher will review and tweak that feedback—presumably using their knowledge of the areas in which students need the most help—before sending it to students. HMH calls this concept “human in the loop”—wherein a human intermediary (a teacher or, for our purposes, an editor) makes decisions about the areas in which AI is most helpful to a student (or author). Teachers may choose to deemphasize or remove certain feedback to ensure a student isn’t overwhelmed, or to limit the feedback to a particular point of emphasis or lesson plan. Critically, to help alleviate privacy concerns, Writable’s software (purportedly) ensures “no personally identifying details are submitted to the AI program.”
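For readers who want to picture the workflow, here is a minimal sketch of what a “human in the loop” pipeline could look like. Nothing here reflects Writable’s actual implementation; the redaction step, the hypothetical generate_feedback() placeholder standing in for the AI service, and the interactive review loop are all illustrative assumptions.

```python
import re

# A minimal "human in the loop" sketch. All names are hypothetical;
# this is not Writable's implementation.

def redact(text: str) -> str:
    """Strip obvious personally identifying details (here, just email
    addresses) before the text is sent to an outside AI service."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[REDACTED]", text)

def generate_feedback(essay: str) -> list[str]:
    """Placeholder for the AI call; a real system would send the
    redacted essay to a model and parse its suggestions."""
    return [
        "Consider a clearer topic sentence in paragraph two.",
        "The conclusion restates the thesis rather than extending it.",
    ]

def review(suggestions: list[str]) -> list[str]:
    """The human in the loop: the teacher keeps, edits, or drops each
    AI suggestion before anything reaches the student."""
    approved = []
    for s in suggestions:
        choice = input(f"{s}\nSend to student? [y]es / [e]dit / [n]o: ")
        if choice.lower().startswith("y"):
            approved.append(s)
        elif choice.lower().startswith("e"):
            approved.append(input("Your revised comment: "))
    return approved

if __name__ == "__main__":
    essay = "My essay... you can reach me at student@example.com."
    feedback = review(generate_feedback(redact(essay)))
    print("\n".join(feedback))
```

The point of the sketch is the ordering: redaction happens before anything leaves the classroom, and the teacher’s review sits between the AI’s suggestions and the student.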
McGraw Hill, another familiar name in textbooks, has announced that it is developing its own version of Writable. McGraw Hill and HMH believe that these products will enable teachers to leave much of the work of the old red pen to technology, perhaps freeing them to concentrate on concepts that are more difficult to teach, like structure, flow, and tone. It’s easy enough to see how Writable might benefit a high school English teacher, but it’s also conceivable that teachers and professors outside of English or literature classes might lean on this technology to an even greater extent. They might feel that these tools allow them to spend more time teaching the complex concepts in which they have the most expertise, especially at the university level. There, these tools can be augmented by “old-school” campus writing centers, combining to offer students what might be considered ample, high-quality support.
In reading the article—particularly as an English major, a former writing tutor, and someone who once imagined I might become an academic—I found myself instinctively resistant to the automation and depersonalization of the response process. I was relieved, however, that these programs (at least for now) don’t eliminate the human factor. I assume that students will still be drilled on essentials like outlining, topic sentences, and how to flesh out a paragraph to make an argument before they turn in papers. I also understand that the technology is supposed to give teachers more time (and energy) to teach those concepts. Nevertheless, I share some of the concerns of John Warner, a former writing instructor at multiple universities and author of three books, including Why They Can’t Write: Killing the Five-Paragraph Essay and Other Necessities (Johns Hopkins University Press, 2020). In an X thread, Warner lamented that HMH’s and McGraw Hill’s products constituted “…production line, inhuman student processing, not education” and that “you cannot teach writing via an algorithm that cannot read or write.” He went on to acknowledge that “teaching is a very difficult, sometimes impossible job where teachers are not given the time, support, and resources to do the work,” and that in that context a “labor-saving device” made sense. However, he concluded that “using Harcourt’s gen[erative] AI essay program is truly giving up on teaching writing.”
Andrew Piper, a professor at McGill University in Montreal, faulted Warner for not addressing how AI might be integrated into what Warner himself termed an “AI-mediated space.” Piper, who teaches courses such as “Literature and AI” and “Introduction to Literary Data Mining,” heads .txtlab, “a laboratory for cultural analytics.” The lab gives researchers access to large data sets of text ranging from movie dialogue to every genre of writing, some of it dating back to 1770, with the goal of “[using] AI and natural language processing to better understand human storytelling.” Piper wrote that “it’s too easy to say this isn’t…the best possible solution…expense is higher ed’s biggest problem,” then clarified the statement: “…we’d all love personal human tutors. But the real challenge is how do you make quality education more affordable to more people. If that’s the problem then your solution is actually a problem.” (Warner responded that he’d written about the affordability and access issues, providing links to his books.)
Piper and Warner seem to have slightly different views of what “the problem” is. I came away feeling that Warner is most concerned with achieving the highest possible quality of writing instruction, whereas Piper is thinking about how a less rigorous but “sufficient” baseline of writing instruction might be offered to a larger group of people at lower cost. Their conversation made me think of the newspapers and magazines that are responding to profit pressure and declining subscriber numbers by resorting to AI-generated articles, often without telling their readers. Reading Autor’s characterization of where AI is taking us, I imagine he might have some sympathy for Piper’s side of the argument:
“Most people understand that mass production lowered the cost of consumer goods. The contemporary challenge is the high and rising price of essential services like healthcare, higher education and law, that are monopolized by guilds of highly educated experts. It [AI] would support and supplement judgment, thus enabling a larger set of non-elite workers to engage in high-stakes decision-making. It would simultaneously temper the monopoly power that doctors hold over medical care, lawyers over document production, software engineers over computer code, professors over undergraduate education, etc.” [emphasis mine]
At my most recent place of employment, we took a cautious approach to embracing AI technology such as ChatGPT. We published guidance on its usage, declaring that it cannot be used for proposal work; for ongoing projects, however, the guidance noted that some contracts might permit its use. It’s clear that we will have to embrace AI in ways that help us innovate, serve clients, and remain competitive as “it…emerge[s] as a near-ubiquitous presence in our working lives” (most of us have likely noticed the sentence-completion suggestions in Gmail and Outlook). At the same time, companies need to protect client data as well as their own. ChatGPT trains its learning model on any information it’s provided—personal, proprietary, or otherwise—and can redistribute it to unknown parties with amoral efficiency. Well-considered guidelines will recognize the necessity of having “humans in the loop” who can make conscientious decisions about how we use ChatGPT for business.