Today, Nancy Lightfoot, another one of our Pat Hoefling Grant winners, will recap her experience from the Association of American University Presses (AAUP) 2015 conference. Nancy is Project Manager and Editor at IUP.
The AAUP Annual Meeting: Business Unusual in the Service of Scholarship
We've all heard predictions that publishing is in dire straits, that people don't read anymore, and that the book is dead. If publishing in general is becoming an increasingly tough business, scholarly publishing is facing even greater challenges. A recent post I saw on a copyeditor's list—probably among the most bookish of the bookish crowds out there—was dismissive: "Nobody reads scholarly books." All too often sales figures bear that out. University of Minnesota Press Director Douglas Armato once defined a scholarly monograph as a book that doesn't make money, wryly adding that if a university press book makes any money at all it's called a trade title.
For many purveyors of intellectual content, a few "tent pole" publications, movies, or performers prop up a very big tent full of products that were hoped to make money and didn't. For university presses, our tent poles are our trade titles, and they are not financially sturdy. We have also, by our mission, made a firm, long-term commitment to producing extremely fine scholarship that we absolutely know will lose money. Nobody is questioning our commitment to these books: they are the collective knowledge of our best scholars. They win awards, they lead scholarly fields in new directions, they intelligently examine the way we interact with the world, and they enhance our lives and the prestige of our institutions. What we need to do is somehow find a business model that will align with our mission. Proceeding as low-budget versions of commercial publishers has not gotten us where we need to be in the future of scholarship, so we need to become masters of reinvention. There was good evidence at AAUP that we have already begun.
A Mission Not Impossible
Among those on the copyediting list who were ready to dismiss scholarly books, there were a few who came back—mostly off-list—saying they read them and loved them. We exchanged a series of testimonials about our favorite university press books and lists and felt the glow you get when you know someone else shares your secret passion. One of the most energizing aspects of attending the AAUP was that everyone shared that same passion and sense of mission. Everyone there values scholarship: we read it, we love it, and we work long hours for low salaries because we are committed to sharing it and preserving the very best of it for the future. We are experts at sorting through information to determine what is worth sharing and saving. And we are also committed to sharing that information widely: we believe what we do is for the common good and increasingly believe in open access (OA). I was once in a staff meeting full of editors who let out a collective squeal of indignation about the New York Times asking readers to pay for gradations of access. My inner voice was indignant: You are editors! What makes you think writers and editors can work for free? But we all want our content instantly and for free these days. When it comes to scholarly publishing, whether to make content open access is no longer a question for debate; open access is inevitable and spreading fast. It is for the good of scholarship and it is part of our mission.
Vint Cerf, Internet Evangelist, Frames the Debate
An unexpected highlight of the AAUP meeting was a presentation by Vint Cerf, the vice president and Chief Internet Evangelist for Google. Cerf's talk was expansive, entertaining, and well worth watching, and it nicely laid out some of the central questions occupying the academy and academic presses: How do we choose what information is saved? Is there a convergence on the format in which we save it? And who pays for the storage of collective knowledge?
For some time now, Cerf has been speaking about the Black Hole we may be leaving in the historical record in the digital age. Increasingly, our carefully saved digital artifacts have become inaccessible as storage formats change (the Betamax tape of your favorite movie; the CD containing my dissertation that I can no longer read on my new MacBook, which lacks a CD drive). In addition, much of the information necessary to reconstruct our content is proprietary: the software we use to store it, the hardware we store it on, and the operating systems that run it all belong not to us, but to those who created that particular part of the infrastructure. So who owns our information encoded in all these systems? The bride and groom who carefully stored their wedding memories on the now-defunct iPhoto? Or Apple? And who should bear the burden of storing and associating it all? Can proprietary software code be copied "for the common good," as we have continued to allow books to be Xeroxed?
Cerf argues that we should have the choice about whether to preserve our history and is currently evangelizing for "digital vellum," a sort of x-ray of the various formats used to store executable content that would ensure retrievability of our digital information—one version is already in development at Carnegie Mellon. When asked what he was most proud of as one of the Fathers of the Internet, Cerf mentioned the layered structure of the web and the fact that the internet deals with packets of information and is agnostic with respect to format. We are far from having home or office digital vellum readers, however, and the systems most publishers are currently dealing with are far from agnostic with respect to format, as anyone who has struggled with the latest version of Word or considered workflow options for converting text files to e-books can attest. It is implicit in the business models of tech companies that formats will constantly change: How else will they be able to sell us new products? But the money they're making is coming from us, and creating content that will be readable in a constantly changing ecosystem of software, operating systems, and devices is an expensive task of daunting complexity. Where do we jump to produce low-cost, high-quality, and future-proofed digital content?
The Monograph 2.0
The staple of most university press catalogs is the scholarly monograph. As of now, university tenure committees generally require a traditionally published, print monograph before granting tenure in the humanities and other fields. And as long as scholars need them, university presses will likely continue to produce them. A poll of 1,000 scholars by the University of California Press determined that more than half were opposed to the idea of open access monographs. There will still be paper books and proprietary content for the foreseeable future. But there are big shifts happening in the way content is being produced, and pressure to change these attitudes.
As Alison Mudditt of the University of California Press announced matter-of-factly in the panel "Successful Product Development: Is 'Fail Fast' the Only Way?," in many significant ways "the scholarly monograph isn't working for anybody." Even print monographs don't make money. Not enough people read them, which means they have low usage and visibility and their ideas are not reaching the scholarly community. Scholars are chafing to try new media and interactive formats, and the print-first format isn't integrating well with the digital mainstream. The mission of university presses is to disseminate the finest scholarship, but with tightening budgets for presses and the libraries that purchase their books, the pressure is always there to choose books that will sell over—or at least among—books that contain revolutionary ideas.
The primary change as we move forward toward open access will have to be in the way the production of scholarship is funded. A problem is that the cost of producing, promoting, and preserving scholarship is not borne equally by all universities: those universities with presses are subsidizing the publication of scholarship from other institutions. That's been accepted to date, but it is not a popular cause to pour scarce funds into. As we factor in the cost of open access, more of the cost of producing scholarship will have to shift from readers and university presses to authors and their home institutions. The pay-to-publish model is already well established in open access journals in the sciences, of course, where authors are commonly charged steep article-processing charges (APCs) by journals such as PLOS ONE.
The publishing subsidies authors currently receive in the humanities will not be enough to sustain the OA monograph, however; its cost is conservatively estimated at about $15,000. This is not a price that can reasonably be asked of junior faculty in the humanities. As a result, the University of California Press's Luminos OA monograph program emphasizes a cooperative consortium model for funding: some of the cost will continue to be borne by subsidies to university presses, some may come from fees paid by the faculty member's home institution, and some will have to come from library subscriptions. Publication subsidies will likely have to increase—as Alison Mudditt reasonably points out, the cost will be trivial compared to the cost of setting up high-tech labs in the sciences. There is still revenue coming in from print sales, which remain the strong preference of most book buyers. Large savings and the move to OA can be achieved simultaneously by publishing monographs quickly and directly to the internet, a process that still demands investment to develop the appropriate technology. With contributions from all these revenue streams and faith in a web-first, efficient platform to decrease cost, UC Press estimates the author's title publication fee in a sustainable Luminos digital OA monograph program will need to be about $7,500. UC Press plans to offer advice and consultation on ways to come up with that fee.
Part 2 of Nancy's recap will be posted tomorrow.