From Peter Suber’s October 2009 issue of the SPARC Open Access Newsletter.

The launch of the Open Access Scholarly Publishers Association (OASPA) is a mark of movement maturity and a promise of mutual support, wisdom-sharing, and self-regulation for OA journals and OA publishers.
I want to celebrate the progress of OA journals and the launch of the OASPA by setting out what I see as the 10 greatest challenges facing OA journals. I want to do this without pretending to set the association agenda and without presupposing that association members don't already know these challenges very well. I'm not a member of the association or even a publisher. I merely want to see OA journals succeed.
(In what follows, when I say "you", I'm talking to those who edit or publish OA journals.)
I start with three disparities: the gap between journal performance and what prevailing metrics say about journal performance (#1); the gap between the vision of OA embodied in the Budapest, Bethesda, and Berlin statements and the access policies at 85% of OA journals (#2); and the gap between a journal's quality and its prestige, even when the quality is high (#3). Then I move on to seven kinds of doubt: doubts about quality (#4), preservation (#5), honesty (#6), publication fees (#7), sustainability (#8), redirection (#9), and strategy (#10).
Even when OA journals are strong, the most widely-used measurement doesn't always show them to be strong. Impact Factors (IFs) discriminate against new journals and most OA journals are new.
This is not a conspiracy. Journals are not even eligible for IFs until they are two years old, and after that Thomson Scientific only computes IFs for a select subset of journals. The need to wait two years applies equally to OA and TA journals, and the eligibility for IFs after that extends, as far as I can tell, about equally to OA and TA journals.
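For readers unfamiliar with why the two-year wait exists, here is a minimal sketch of the standard two-year IF calculation; the journal and all the numbers are hypothetical, but the formula is the conventional one.

```python
def impact_factor(citations_this_year, citable_items_prior_two_years):
    """Standard two-year Impact Factor for year Y: citations received in Y
    to items published in Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    if citable_items_prior_two_years == 0:
        # A journal launched this year has no output in the two-year
        # window, so no IF can be computed -- hence the two-year wait.
        return None
    return citations_this_year / citable_items_prior_two_years

# Hypothetical journal launched in 2008, computing its first possible IF (2010):
print(impact_factor(120, 80))  # 120 citations in 2010 to 80 items from 2008-09 -> 1.5
```

The denominator is what locks new journals out: until a journal has two full years of published items, the formula is undefined, regardless of how good or how cited its early articles are.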
There are many problems with IFs, but here I want to focus on the discrimination against new journals. For my full critique, see Section 8 of this article from September 2008.
Alternative impact metrics could solve the problems left by IFs, including the discrimination against new journals. The challenge is to find metrics that are truly better for the many purposes of the many stakeholders. There's a lot of work being done in this area. Here, for example, are the alternative impact metrics I know about; very likely there are even more:
* Age-weighted citation rate (Bihui Jin)
* Batting Average (Jon Kleinberg et al.)
* Distributed Open Access Reference Citation project (University of Oldenburg)
* Eigenfactor (Carl Bergstrom)
* g-index (Leo Egghe)
* h-index (J.E. Hirsch)
* Contemporary h-index (Antonis Sidiropoulos et al.)
* Individual h-index (Pablo D. Batista et al.)
* Journal Influence Index and the Paper Influence Index (Center for Journal Ranking)
* MESUR (MEtrics from Scholarly Usage of Resources) (LANL)
* SCImago Journal Rank and SJR Indicator (University of Granada)
* Strike Rate Index (William Barendse)
* Usage Factor (UKSG)
* Web Impact Factor (Peter Ingwersen)
* y-factor (Herbert van de Sompel et al.)
I'm happy to leave the development and evaluation of these metrics to experts, and hope that some of you will be among the experts. What matters here is that we have an interest in the success of this work in progress, and that the dominance of IFs is an impediment to the growth and acceptance of OA journals.
The IF doesn't harm new journals on its own. It only harms new journals because its heavy use by promotion and tenure (P&T) committees creates a disincentive for faculty to submit work to new journals. The challenge is to monitor the alternative metrics in progress, to be ready to recognize and support superior metrics when they emerge, and to be prepared to argue their superiority to our local P&T committees. We should have allies in all the new research fields and methodologies, whose journals, even if TA, are also overwhelmingly new. Superior metrics are only half the solution; the rest is making use of them to remove disincentives to submit work to new journals.
A related challenge is to make clear that we're not looking for new metrics biased in favor of OA journals. We're looking for less biased or unbiased metrics. Only unbiased metrics can help the cause of OA journals and get the buy-in from other stakeholders needed to make them fly.
IFs may be biased against new journals, and this fact may harm more OA journals than TA journals. But IFs are not biased against OA journals that make it out of adolescence. On the contrary, because OA articles are cited more often than TA articles, OA journals can play the IF game to win.
For example, in the 2008 Journal Citation Reports, Thomson Reuters reports that five OA journals had the highest IFs in their fields:
* PLoS Neglected Tropical Diseases = first in Tropical Medicine (out of 15)
* PLoS Pathogens = first in Parasitology (out of 25)
* PLoS Computational Biology = first in Mathematical & Computational Biology (out of 28)
* PLoS Biology = first in Biology (out of 71)
* Journal of Medical Internet Research = first in Medical Informatics (out of 20)
(Thanks to Paul Peters for the alert and details.)
Optics Express, from the Optical Society of America, was formerly first in optics but dropped to third in 2008.
Even in 2004 when there were many fewer OA journals, and the oldest ones were younger than today's, Thomson Scientific's own studies showed that there was at least one OA journal in the top cohort of IFs in nearly every scientific discipline.
In 2007, Molecular Diversity Preservation International (MDPI) converted four hybrid OA journals to full OA. In 2009 it presented evidence that the conversion boosted the IFs for all four journals.
These successes are important, but they are not reasons to stick with IFs. When OA journals have high IFs, it's because they have high impact, not because they benefit from a bias built into the metric. OA journals would do as well or better under superior metrics. Even the OA journals most successful under IFs have everything to gain and nothing to lose by developing impact metrics which are more accurate, more nuanced, more timely, more automated, more fair to the variety of scholarly resources, more welcoming to the variety of data which illuminate impact, and more welcoming to new journals.
The Public Library of Science understands this well. Four of its journals have the highest IF in their fields, and yet it is pioneering the distribution of article-level impact data in order to support article-level impact metrics to displace simplistic IFs. All seven PLoS journals now share their "usage data, page views, citations, social networking links, press coverage, comments, and user ratings."
Other OA journals should do the same. So should TA journals interested in more nuanced and accurate impact metrics.
I'm not saying that if your journals do well under IFs, you shouldn't boast about it. You should. As your journals grow in impact, you should collect the evidence and boast about it. But at the same time you should work for superior metrics. Playing the IF game without working on alternatives will only encourage others to play it as well, entrenching the discrimination against new journals (as well as the other IF problems not covered here). To stop playing the IF game, you don't have to give up on impact as a goal or on impact measurements as ways of marking progress. Play for real impact accurately measured, without bias against any journals on account of their age, field, language, reputation, or business model.
How many OA journals listed in the DOAJ use some kind of CC license?
Answer = 637 out of 4,362 = 14.6% (as of October 2, 2009)
The challenge is to get more OA journals to be more open.
Let me review some terminology to help us talk about this issue. Gratis OA removes price barriers but not permission barriers. It makes content free of charge but not free of copyright or licensing restrictions. It gives users no-fee access for reading but no more reuse rights than they already had through fair use or the local equivalent. It's free as in beer, not also free as in speech. By contrast, libre OA removes price barriers and at least some permission barriers. It lifts at least some copyright and licensing restrictions and permits at least some uses beyond fair use. It's free as in beer and free as in speech.
In most countries in the world today, new writings fall under all-rights-reserved copyrights from the moment of birth. As a result, authors can only provide libre OA to their work by affirmatively waiving some of their rights. If you're not sure how to waive some rights without losing your ability to enforce others that you retain, don't worry. Creative Commons licenses are designed for just this job. There's more than one CC license because there's more than one right in the copyright bundle that you might want to waive. For the same reason, libre OA is a range of things, and the range of CC licenses corresponds to the most commonly adopted positions within that range. Apart from assignment to the public domain (waiving all rights), the most open CC license is CC-BY, which waives all rights except the right of attribution. The CC-BY-NC, by contrast, waives all rights except the right of attribution and the right to block or control commercial reuse.
OASPA recommends the CC-BY license or equivalent for all its members. It accepts some less open licenses, such as the CC-BY-NC, but it does not recommend them. In any case, both provide libre OA. OASPA also *requires* that members who want to impose any restrictions on reuse must make the restrictions explicit, for example through a license. (See the OASPA bylaws, Appendix II.)
The SPARC Europe Seal program requires the CC-BY license and libre OA.
SURF recommends the "the most liberal Creative Commons licence" for articles, which is CC-BY. For data it recommends the more liberal assignment to the public domain, as required by the Science Commons Protocol for Implementing Open Access Data. On both fronts it recommends libre OA.
The Budapest, Bethesda, and Berlin statements all call for libre OA, without naming specific licenses.
The challenge is that more than 85% of the OA journals now in the DOAJ don't use any kind of CC license. Some of these might use equivalent non-CC licenses and some may use homegrown language with a similar legal effect. But these exceptions are rare. No matter how you slice it, most OA journals are not using open licenses. Most are operating under all-rights-reserved copyrights and leaving their users with no more than what they already had under fair use. Most are not offering libre OA.
(Footnote: Just to clarify some points often misunderstood: All CC licenses provide libre OA, although some are more open than others. But no CC licenses are required for libre OA. You can provide libre OA through an equivalent non-CC license or homegrown language instead, if you like. But libre OA requires some kind of open license. When a copyrighted text doesn't use any sort of open license, then it falls under an all-rights-reserved copyright, which shrinks user rights down to fair use and rules out libre OA.)
If 14.6% of the journals in the DOAJ use some kind of CC license, how many use the CC-BY license recommended by OASPA, SPARC Europe, and SURF?
Answer = 416 out of 4,362 = 9.5% (as of October 2, 2009)
How many use CC licenses other than CC-BY? Answer = 221 out of 4,362 = 5.1%
(Footnote: The DOAJ doesn't actually count journals with CC-BY licenses. It counts journals with the SPARC Europe Seal, which requires CC-BY licenses. But the Seal also requires journals to share metadata in a certain way. Hence, it's possible for many journals to use CC-BY and fail to earn the Seal because they don't share their metadata appropriately. In that case the SPARC Seal tally would undercount the journals using CC-BY. But in fact, many more DOAJ journals share their metadata than use CC-BY, making the Seal tally a good approximation to a CC-BY tally. Thanks to Lars Björnshauge for this detail.)
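The tallies above can be cross-checked with a few lines of arithmetic; the counts are the ones quoted in this article (as of October 2, 2009), not new data.

```python
# DOAJ license tallies quoted above (October 2, 2009).
total = 4362       # journals listed in the DOAJ
cc_by = 416        # journals using CC-BY (counted via the SPARC Europe Seal)
cc_other = 221     # journals using CC licenses other than CC-BY
cc_any = cc_by + cc_other

assert cc_any == 637  # matches the "some kind of CC license" count
print(f"any CC license: {cc_any / total:.1%}")   # 14.6%
print(f"CC-BY:          {cc_by / total:.1%}")    # 9.5%
print(f"other CC:       {cc_other / total:.1%}") # 5.1%
```

The two subgroups sum exactly to the 637 journals with some kind of CC license, so the three percentages are consistent with one another.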
I know that many OA journals want to restrict commercial reuse and resist the recommendations to use CC-BY. I don't want to enter that debate here. If you want to restrict commercial reuse, then use a CC-BY-NC license or equivalent rather than no open license at all. Join the 5% of DOAJ journals which have adopted that solution.
I once talked to an OA journal publisher who feared that open licenses would deter submissions. It turned out he feared that allowing commercial use would deter submissions and assumed that all open licenses allowed commercial use. Moreover, he only had a few anecdotes to support his theory about the effect on submissions. But whether or not his fear was groundless, there's a simple way to use an open license and restrict commercial use. Just use a CC-BY-NC or equivalent. Let's agree to move beyond an all-rights-reserved copyright to libre OA, and argue later about whether to move beyond CC-BY-NC to CC-BY.
If a journal is already free of charge, then why use any open license at all? Why move beyond gratis OA to libre OA?
The short answer is to spare users the delay and expense of seeking permission whenever they want to exceed fair use.
Why would users want to exceed fair use? Here are some of the answers I gave to Richard Poynder in October 2007 (p. 37):
[T]here are good reasons to exceed fair use, for example, to quote long excerpts, print full-text copies, email copies to students or colleagues, burn copies on CDs for bandwidth-poor parts of the world, distribute semantically-tagged or otherwise enhanced versions of a text, migrate copies to new formats or media to keep a text readable as technologies change, archive copies for preservation, include the work in a database or mashup, copy the text for indexing [or] text-mining..., make an audio recording of the text, or translate it into another language.
Fortunately it's easy for OA journals to adopt open licenses and permit these uses in advance. It's easy to free their users. It's easy to provide libre OA. In fact, it's easier for OA journals than OA repositories (or easier for gold OA than green OA). OA repositories generally stick to gratis OA because they can't generate the needed permissions for libre OA on their own. But OA journals can generate the permissions on their own.
Some OA journals know that CC licenses are free of charge and easy to use. Their reservations are less straightforward. I once talked to an OA journal editor-publisher who said: "Our journal is already free of charge. What else would anyone want? If users want to copy and redistribute the files, they are free to do so. We don't care. We allow that."
How would users know that the journal allows them to copy and redistribute the files? Fair use does *not* allow that, and conscientious users will limit themselves to fair use. When they don't have permission to exceed fair use, they will either slow down to ask permission or err on the side of non-use. These are impediments to research that OA was designed to remove.
Some people believe that, for better or worse, conscientiousness about copyright is a scruple fading into quaintness, like sexual prudery. But it remains the formal policy of every university and library in the world. Even when individuals are not conscientious, their institutions are, and they require their affiliated users to be. Even if you can wink at individuals to let them know that they may do what they want with your files, institutions are not allowed to take the hint. They can exercise their rights under libre OA, when they know they have them. But they can't act on a wink alone. And if you really are willing to provide libre OA, then there's no reason whatsoever not to say so in an explicit statement or license.
Even when users want to do something allowed by fair use, they have to deal with the fact that fair use itself is vague and contestable. For example, informed people disagree about whether it covers text-mining. Another reason to use an open license, then, is to free users from the fear of liability and from self-imposed restrictions arising from that fear. This will benefit not just the users who need libre OA, but also the users who want to do something lawful, though not widely known to be lawful, under fair use.
Don't make conscientious users choose between the delay of seeking permission and the risk of proceeding without it. Don't make them ask permission. Don't make them pay for permission. Don't make them err on the side of non-use. Don't increase the pressure to make them less conscientious. Don't free your users to exceed fair use without telling them that they are free to exceed fair use. If you think you're providing libre OA but don't use an open license or equivalent language, then you're not providing libre OA. You're using an all-rights-reserved copyright, perhaps yoked with a private resolve not to enforce it, forcing conscientious users to seek permission. Your private resolve not to enforce your legal rights doesn't free anyone from the need to learn what their own rights are. But your public resolve through an open license will do so simply and elegantly.
Here I mean closing the gap between a journal's quality and prestige. New journals can be excellent from birth, but even the best cannot be prestigious from birth. This annoyance rises to the level of a serious challenge when we understand that submissions follow journal prestige more than journal quality, when the two differ.
This is another dimension of the burden facing new journals, aggravated by the fact that new journals don't have IFs. Promotion and tenure committees tend to treat journal IFs as a major component of journal prestige, and to use both IFs and prestige as surrogates for article quality and author quality.
I used to think it was just a matter of time before high-quality OA journals earned prestige in proportion to their quality. I'm less sure now. As I argued in an article last year, limited library budgets (for "must-have" journals) and limited human attention (for journal brands) mean that journal prestige is closer to a zero-sum game than journal quality. Incumbents with a headstart have a powerful advantage.
New journals trying to close the prestige-quality gap already know the basic strategies. Recruit eminent authors. Recruit eminent editors. Earn high impact (and boast about the evidence). Reach out to the media with the newsworthy discoveries or analyses you publish. Wait, endure, and persist. Unfortunately, even if time is not sufficient, it is still necessary. Even if you're doing all you can, this one remains a continuing challenge.
The only effective shortcut I know is not available to existing OA journals, namely, to convert existing high-prestige TA journals to OA. When a journal converts, it carries its prestige with it, along with its reputation, IF, quality, standards, editors, and readers.
The prestige-quality gap affects new OA and new TA journals equally. If there's a difference, it's that a larger percentage of OA journals are new and face the challenge of closing the gap. Conversely, a larger percentage of TA journals have already closed the gap and earned prestige commensurate with their quality. Because this has been the case since long before the emergence of OA journals, institutions sensitive to journal prestige, such as university promotion and tenure committees and funding agency award committees, tend to focus on TA journals.
When we can, we should try to make quality count for more and prestige count for less. But we can omit those strategies here because, at best, they wouldn't close the prestige-quality gap. They would merely prevent it from harming authors and journals. For some elaboration, however, see Section 8 of my 2008 essay above.
Likewise, when there is no high-prestige OA journal in a field, authors who can publish in high-prestige TA journals can usually deposit copies of their peer-reviewed manuscripts in OA repositories. In that sense, green OA means that authors rarely have to choose between journal prestige and OA. But this does not close the gap for gold OA, or prevent it from harming OA journals; it merely prevents it from harming authors.
Even though peer-reviewed OA journals have been around now for more than 20 years, some academics believe that OA journals bypass peer review. Some believe that OA journals skimp on peer review. Some believe that publication fees corrupt peer review. Some confuse lack of prestige with lack of quality. ("If I haven't heard of it, how good could it be?") Some generalize too quickly from weak journals. ("I'm skeptical about these new OA journals. I saw one the other day that was second-rate.")
All these judgments are hasty. The most common question I ask myself as I sift through the daily debates is this: "Can this Ph.D. be as careless in his research as he is in this comment?" But even if false and hasty, the judgments exist. That's the challenge.
One solution is to point to high-quality OA journals. You don't have to argue that they are more numerous than they are. The fact that they do exist proves that they can exist. If some OA journals are strong and some are weak, then we must be a bit more empirical. We have to look and see before we judge, just as we do for TA journals. We must move beyond prejudice, impressionism, rumor, and stereotype to discrimination.
It helps to point out that the range of OA journals is just like the range of TA journals: some are gems and some are clunkers. It may sound obvious, but in defending OA journal quality, only defend the quality journals. If you defend weak journals on the ground that they are OA, you give the impression that you will forgive a journal any lapses provided it is OA. You may even be able to defend your forgiveness, but you will only feed doubts about quality. The best way to defend OA journals as a class is not to defend them all.
Another strategy is to distinguish quality from prestige, or real excellence from reputed excellence. We all understand this distinction once it's pointed out. But all too often we're accustomed to use prestige as a surrogate for quality and need to be reminded of the differences.
Similarly, distinguish quality from IFs. The differences are just as real and just as generally unacknowledged as those between quality and prestige.
Point out that the primary factors affecting journal quality are independent of the journal's access policy: the quality of its authors, editors, and referees. Point out that some factors favor OA journals. For example, when an OA journal has many excellent submissions, then it needn't reject any for reasons of space. When an OA journal has few excellent submissions, then it can publish a short issue rather than lower standards to fill out an issue. By contrast, TA journals can't publish short issues without short-changing subscribers, pointing up one of the hidden blessings of having no subscribers.
Unfortunately these strategies all require conversation with the doubters. The challenge is not that these conversations go badly when they take place, but that they don't take place often enough.
BTW, it would be a mistake to underestimate the challenge of these doubts on the ground that careful and informed people do not make hasty generalizations. Many people are careless or uninformed, including many careful people who are too busy to inform themselves.
"Digital documents last forever --or five years, whichever comes first."
--Jeff Rothenberg, 2001
The more important the work you publish, the more important it is to preserve it.
Fortunately, serious preservation options are neither expensive nor difficult, at least for the journals being preserved, as opposed to the services preserving them. Several of the best are free of charge, both for the journals and for users who want to access the content.
If you held off on making a preservation plan, fearing the expense and complexity, it's not too late to adopt one now. I almost said "it's never too late", but that is exactly what we cannot say.
E-only TA journals face the same challenge. The difference is that TA journals have more options because they needn't insist on preservation methods that provide OA to the preserved content.
I recommend LOCKSS (OA at least for the hosting institutions) and the DOAJ preservation system (OA for all). Both are free of charge for the journals.
Another free and stable option is to deposit your digital content with a trusted, financially secure, and well-equipped meatspace library. For example, BioMed Central has been depositing its digital output since 2003 in the National Library of the Netherlands, just as the DOAJ preservation program does today.
Many other publishers deposit their digital contents in the British Library.
National libraries and university libraries with digital preservation programs could render a great service by opening their vaults to peer-reviewed ejournals (OA and TA).
BioMed Central and the Public Library of Science routinely deposit their articles in PubMed Central. This greatly increases the likelihood that users will be able to find them again, and find them OA, should the publishers go out of business. This is smart and easy. I recommend routine deposits in a suitable repository for all OA journal publishers.
(Similarly, I recommend that authors avoid hybrid OA journals which do not allow deposits in a repository independent of the publisher.)
However, repository deposit should not be a journal's only preservation plan. Peter Hirtle has shown why.
But repository deposit can still provide an extra layer of security for ejournals, both before they work out long-term solutions and then later alongside long-term solutions. Among other benefits, this can free new OA projects from the delay of making ironclad preservation a precondition of launch.
A decisive way to answer doubts about digital preservation is to make printouts and preserve the paper. As I argued in 2002:
So far, paper is the only commonly used medium that we know can preserve texts for hundreds of years. There are many creative methods emerging for storing digital texts electronically with at least the security of paper.... The only problem is that it will take hundreds of years to monitor the outcome of present-day experiments. But we don't have to choose between insecure storage and retreat from the digital revolution: the shortcut to preservation is to print digital texts on paper.... Preservation in the digital era [can] be as good as paper, just as it was before the digital era.
Microtome is one company that offers paper-based preservation to OA journals.
Finally, let me repeat the point from #2 that libre OA facilitates preservation by granting permission in advance for migrating digital files to new formats and media to keep them readable as technology changes, and for copying files for multiple deposits. In fact, the built-in reasons why repositories (as opposed to journals) find it difficult to offer libre OA lie at the basis of Hirtle's argument that repositories are inadequate for preservation.
In some countries, like the US, copying for preservation is allowed by a special provision of copyright law. But this sort of copying is not allowed in all countries, and not part of fair use. Without this permission from the statute or an open license, we could only preserve a mass of content by hunting down each individual copyright holder to ask permission. The expense and delay can be deal-breakers. If you're concerned enough about the future of your content to *want* to preserve it, then you should be concerned enough to facilitate that preservation with an open license.
Some friends of OA object that preservation is not intrinsically part of providing OA. That's true. But the same could be said of peer review and grammatical sentences. "Separate" doesn't mean "unimportant" or "incompatible". Think of preservation as a separate essential, like truth, clarity, and punctuation. You can provide OA without it, but you shouldn't want to.
Are OA journals a scam? Are *fee-based* OA journals a scam? Are *some* fee-based OA journals scams? Do *some observers believe* that some fee-based OA journals are scams? Does this *belief* harm OA journals as a class?
Although you edit or publish OA journals yourself, you probably gave one, two, or three "yes" answers to these five questions. That's the challenge.
Here are some of the suspicious practices which give rise to doubts about a journal's honesty:
* Providing little or no evidence of peer review
* Spamming researchers
* Claiming vaporware titles to be published journals
* Hiding the names of editors
* Hiding the names of owners
* Hiding the journal's business address
For this purpose the problem with spam is not the annoyance of unsolicited email. The problem is carelessness on one of the points where we'd expect serious, peer-reviewed journals to be careful. When a journal asks you to submit work or join an editorial board, you're flattered because you respect its discriminating judgment. It doesn't make this invitation to just anyone. But when a journal tries to flatter you in this way and gets your field comically wrong, you know it knows nothing about you. You know it's making this invitation to just about everyone. If it knows nothing about you, why does it want you to join its editorial board? Did it fail to do its homework on something this basic, or did it not really care? Which would be worse?
The problem with vaporware is not premature announcement, but confusion about the purpose of an announcement. All new journals go through stages when they haven't yet named their editors or published any articles. It's hard to know when along this continuum to say that the journal "exists" or has "launched", or when to list the title on the publisher's web site. Honest journals can draw the line at different places. But in my experience, journals with other suspicious practices tend to draw the line unusually early in the process and tend to do it for dozens or hundreds of titles at once, as if they thought it more important to impress the reader with the ambition of the list than the working operation of any individual journal.
The challenge behind this challenge is that we rarely have more than grounds for suspicion. We'll often have doubts about our doubts about a journal's honesty. In my own mind, it's important to leave space to distinguish a scam from a clumsy start-up. An entirely honest but clumsy start-up might announce titles far in advance of their content and forget to disclose the editors or owners. My recommendation is two-sided. On the one hand, don't be the last to criticize dishonest practices and low standards. The longer you hesitate, the more it appears that you will overlook a journal's vices as long as it is OA. (Think about Republicans who hesitate to criticize a dishonest fellow Republican and Democrats who hesitate to criticize a dishonest fellow Democrat.) On the other hand, don't create a hostile or unwelcoming environment for new start-ups.
These two recommendations are compatible. But while one says "don't be the last to criticize", the other doesn't quite say "don't be the first". Your experience as an editor or publisher may help you know sooner than others that a certain new initiative is engaged in deception. Just don't pounce until you're confident that you're dealing with deceptive practices rather than clumsiness and inexperience. Or pounce differently in the two cases. Even clumsy but honest start-ups need a reminder about professional standards, even if they don't deserve condemnation for ethical lapses. Don't let ours become a revolution that eats its own children.
The OASPA code of conduct is a beacon here. Not only does it say the right things: disclose your peer review process, your contact info, your fees, and don't spam. It is the voice of OA publishers themselves, not critics of OA publishers. It shows that OA publishers are willing to articulate these norms and willing to enforce them. It's public self-regulation. It's available for supporters, critics, and start-ups to consult as an emerging standard.
Remember the Davis/Anderson hoax from June 2009. Phil Davis and Kent Anderson submitted five pages of computer-generated nonsense to The Open Information Science Journal (TOISCIJ), published by Bentham Open. The journal accepted the paper and sent the authors a bill for $800.
The incident hurt OA journals in one respect and helped in another. It hurt by feeding hasty generalizations about OA journals and the business model of charging author-side publication fees, though Davis and Anderson themselves did not draw these hasty generalizations. (More on doubts about publication fees in #7 below.)
The hoax made all OA journals look bad. You might quarrel with the word "all". Not all OA journals charge publication fees. Not all OA journals that do charge fees take the money and fail to deliver honest peer review, or even a cursory human glance. True and true. The actual number of journals like TOISCIJ is very small. But most people who hear about the Davis/Anderson hoax don't understand the distinctions among OA journals, just as most people who heard about the 1996 Sokal hoax didn't understand the distinctions among cultural studies journals or even among humanities journals. Jumping to the conclusion that the problem lies with OA as such or publication fees as such is not justified and not fair. But that's the challenge.
By contrast, TA journal scams --like the nine fake journals published by Elsevier-- seldom trigger generalizations about the faults of TA journals as such. From long familiarity, most academics have learned to discriminate among TA journals. But most are still learning to discriminate among OA journals.
On the other side, the Davis/Anderson hoax helped OA journals by drawing humiliating attention to embarrassing behavior. It should deter similar behavior. Whether you consider the TOISCIJ to have been guilty of dishonesty or incompetence, other journals dishonest or incompetent in the same way will have to worry that they could be next. We can't know how widely the deterrent effect will operate or for how long. It won't drive all dishonest journals from the field, even if we wish it would. But it does help.
Some critics believe that author-side publication fees corrupt peer review, an idea which puts honest journals under a cloud. Some believe that the fees must be paid by authors out of pocket, an idea which scares authors in nearly every field and every country.
We can respond to the corruption charge by pointing to the financial firewalls in place at fee-based OA journals. Outsiders can't be expected to know about them, and journals have to take the initiative to explain the systems they have in place.
We can respond to the author fear by educating authors about sources and sponsors willing to pay publication fees on their behalf. In March 2009 the Research Information Network urged universities and funding agencies willing to pay these fees to make their willingness clear to authors. If they don't, too many authors will continue to assume that "author fees" are to be paid by authors out of pocket.
For universities willing to pay fees on behalf of faculty, see the list at the Open Access Directory (a wiki you can update).
For funding agencies willing to pay fees on behalf of grantees, see the (short) list at BioMed Central and its (longer) table of policies.
But clarity about the availability of funds is a step best taken by universities and funders themselves. One thing you can do as journals and publishers is to be more careful with your language. "Author pays" and "author fees" are deeply misleading terms. They are false for the majority of OA journals which charge no fees at all, and false in all cases when fees will be paid by the author's funder or employer or waived on grounds of economic hardship. "Publication fees" and "processing fees" are both less frightening and more accurate. If you want to mention authors, for symmetry with reader-side subscription fees, then "author-side fees" is less frightening and more accurate than "author fees". Terminology matters, especially when so many people have only a tenuous grasp of what OA is and how it's funded.
Clearly I'm not recommending that you look for a euphemism to hide or disguise the existence of fees when you charge fees. Everyone can tell that a "publication fee" is a fee. (Moreover, hiding your fees would violate the OASPA code of conduct!) But don't mislead in the other direction either. It's misleading to leave the impression that fees must be paid by authors when they *need not be paid by authors*.
If author-side fees intrinsically corrupt peer review, then corruption is more widespread at TA journals than OA journals. The reason is simply that many more TA journals charge author-side fees than OA journals, both in percentages and absolute numbers. The Kaufman-Wills report in October 2005 found that 75% of TA journals charged author-side fees, compared to only 47% of OA journals. Stuart Shieber's study in May 2009 lowered the number for OA journals to 29.7%. But as far as I know, no new study has updated the number for TA journals.
Used properly this argument is not a "tu quoque" (deflecting criticism on the ground that "you too" are doing the same thing). It's a consciousness raiser and a goad to be more empirical. It doesn't ask critics to forgive a practice at OA journals just because it also takes place at TA journals. The practice might cause problems at both. Instead the argument asks, "Do you really think that author-side fees *intrinsically* corrupt peer review? Or, realizing that this affects 75% of subscription-based journals, do you want to acknowledge that sometimes fees corrupt and sometimes they don't, back down from the flat generalization, put on your empiricist glasses, and look for individual signs and symptoms of corruption and evolving best practices about safeguards?"
Likewise, if the incentive to generate more revenue leads journals to publish more articles and lower their standards, then the problem is worse at TA journals than OA journals. The reason in this case is that TA subscription fees contain high profit margins much more often than OA publication fees. Most OA publication fees are subsistence-level. When a TA journal increases its volume to justify a price increase, it will typically clear much more than an OA journal which increases its volume in order to bring in more publication fees.
Unfamiliarity with OA journals leads many academics to frame an unconscious dilemma: "OA journals couldn't possibly pay their bills. They won't last long. But if they can pay their bills, they must not be doing much work. Their quality must be low."
As usual, one strategy is education. Point out the OA journals and publishers making profits or surpluses --for example, BioMed Central, Hindawi, Medknow, Optics Express, PLoS ONE.
Distinguish the long-term outlook from the present transition period. Long-term, the money needed to support peer-reviewed OA journals in every research niche is locked up supporting peer-reviewed TA journals. The money is already in the system. If comparatively little is spent today on OA journals, that says much more about the history of journals (in which TA journals arrived long before OA journals) than about the sustainability of OA journals.
Monopolistic profit margins above 25-30% at the largest publishers show that the amount we currently pay for peer-reviewed journals is significantly more than the cost of producing peer-reviewed journals. That gap should reassure us about the long-term sustainability of OA journals, even if it says nothing about navigating the transition period.
The sustainability of *subscription* journals is far from clear after nearly four decades of raising average prices faster than inflation. When the University of California studied the question, it concluded that "[t]he economics of [subscription-based] scholarly journal publishing are incontrovertibly unsustainable."
With care this argument needn't be a "tu quoque" either. (We can't deflect doubts about the sustainability of OA journals by raising doubts about the sustainability of TA journals; perhaps both are unsustainable.) It's a goad to put questions about the sustainability of OA journals into a larger context. What is our standard of sustainability? Do we need a guarantee before we support a promising new idea? Are we more interested in whether the average OA journal can meet a given standard, per se, or whether the average OA journal can meet a given standard as well as the average TA journal?
The situation is much like the two hikers in the mountains who came around a bend and encountered a bear. One dropped to the ground, pulled off his hiking boots, and began putting on a pair of running shoes. "What are you doing?" the second hiker asked. "You can't outrun a bear." The first replied, "I don't have to outrun the bear. I only have to outrun you."
If a journal must survive a certain number of years to prove its sustainability, then no new journals can supply the proof, OA or TA, just as we can't know that digital files can be preserved for 100 years until 100 years have passed. But if a new journal uses a model that other journals have used successfully to pay their bills, perhaps with some surplus thrown in, then new OA journals can meet the standard as well as new TA journals.
If your OA journals are breaking even or making a surplus, consider sharing your business data. It would answer doubts about sustainability. More important, it would help other OA journals learn from your experience. Let's be more intentional about sharing hard-won tips, warnings, and best practices.
I'd like to see the OASPA coordinate a kind of journal buddy system in which experienced journals help inexperienced journals work out sustainable business models and answer the kinds of rubber-on-the-road questions that come up in the daily management of a journal. If there are many basic models and variations, and if the landscape differs from field to field and country to country (and decade to decade), then experienced journals can help guide newer journals through the landscape. Even if journals are not willing to share their business data with the public, they should be willing to share them in confidence with OASPA-buddies.
At the Open Access Directory, I've been trying to document the OA journal business models actually in use. The idea is to show that there is more than one, and to show which ones are in use where. The variety should help critics avoid generalization, help new journals find models to suit their niche, and stimulate everyone's imagination to conceive other models and other variations on the existing themes. But so far I've been working almost alone on the list. If you can help, please do. It's a wiki.
Finally, don't dismiss questions about sustainability. They are not code for opposition, any more than questions about preservation are code for opposition. Like preservation, the importance of sustainability rises in direct proportion to the importance of the work you are publishing.
Suppose you're persuaded that most of the money needed to support OA journals is tied up supporting TA journals and that redirecting funds would suffice. You may well wonder: Will it happen? If so, how? What can we do to nudge it along?
(Footnote. Why say that "most" rather than "all" the money needed to support OA journals is tied up in TA journals? Because some of the money needed to support OA journals is *not* tied up in TA journals and already flows to OA journals. Also note, however, that even if we reach a world in which most journals are OA, and all are adequately funded, some TA journals will almost certainly coexist with OA journals and attract some money to themselves.)
Redirection is already happening on a small scale. Whenever a TA journal converts to OA, we're seeing small-scale redirection.
OA journal funds at universities are potential examples. Most of the current funds are adding new money to the system. But if they last long enough and grow large enough to face the replenishment problem, and if universities make them a priority for any funds saved from the cancellation or conversion of TA journals, then the funds will become redirection devices.
Redirection is also taking place on a large scale, primarily through CERN's SCOAP3 project (Sponsoring Consortium for Open Access Publishing in Particle Physics). SCOAP3 is an ambitious plan to convert all the major TA journals in particle physics to OA, and redirect the money formerly spent on reader-side subscription fees to author-side publication fees. It's being worked out in a large-scale negotiation with all the stakeholders, including publishers.
First let's help SCOAP3 work in particle physics. Then when it's working, let's study why it works, with an eye on transferring the model to other fields. Some factors in its success will be physics-specific, such as the small number of targeted journals, the green OA arXiv culture embraced even by TA publishers in the field, and the dominance of CERN. Other factors will not be physics-specific, such as the win-win logic which appeals, or could appeal, to research institutions, libraries, funders, and publishers. It's the win-win logic which makes me think that other fields might pull this off without a CERN. If other fields can do this, they won't need CERN-like money or dominance so much as CERN-like convening power. If they can bring the stakeholders together to discuss the idea, then the win-win logic has a chance to take over from there.
SCOAP3 is not only the most ambitious redirection project now under way. It's also the furthest along and the most refined by reality-tests. (There's nothing like asking an institution to commit money to bring out its latent doubts and test your ability to answer them.) If it's not the best model for some other fields, then let's figure that out and think hard about other large-scale redirection strategies alongside SCOAP3. But if it could work in other fields, we'd lose a precious opportunity if we didn't apply the lessons of SCOAP3 everywhere they are applicable.
We shouldn't underestimate rising levels of green OA as another impetus to large-scale redirection. This is one more reason for OA journals and publishers to support green OA mandates at universities and funding agencies, and oppose publisher trade associations who lobby against them. But we should also realize that rising green OA levels may have no effect on redirection. In physics, the highest levels and longest history of OA archiving have not triggered TA journal cancellations or freed up money which could be redirected to OA journals; but they have led indirectly to SCOAP3.
If rising levels of green OA do force redirection in fields outside physics, it will only be after a period of antagonism and polarization like nothing we've seen to date. Hence, while we continue to work unabated for green OA, let's also try to understand whether a faster and more frictionless method like SCOAP3 could work in fields outside physics.
Mark Rowse, former CEO of Ingenta, sketched another strategy for large-scale redirection in December 2003. A publisher could "flip" its TA journals to OA at one stroke by reinterpreting the payments it receives from university libraries as OA publication fees for a group of authors rather than TA subscription fees for a group of readers. One advantage over SCOAP3 is that the Rowsean flip can be tried one journal or one publisher at a time, and doesn't require discipline-wide coordination. But at the same time, it can scale up to the largest publishers or the largest coalitions of publishers. If the Rowsean flip hasn't been tried in its pure form by any publisher, several ongoing projects have Rowsean characteristics, such as SCOAP3 itself, the Stanford Encyclopedia of Philosophy, and the Springer deals with some universities and libraries to include OA publication fees in the price of subscriptions.
For more detail on the Rowsean flip, see my elaboration of the idea from October 2007.
We have to be imaginative but we don't have to improvise. There are some principles we can try to follow. Money freed up by the cancellation or conversion of peer-reviewed TA journals should be spent first on peer-reviewed OA journals. Large-scale redirection is more efficient than small-scale redirection. Peaceful revolution through negotiation and self-interest is more amicable and potentially more productive than adaptation forced by falling asteroids.
(For more on the need to redirect savings from the cancellation and conversion of TA journals to OA journals, see my article from September 2007, esp. Section 13.)
For the record, in case it doesn't go without saying, I advocate redirecting money freed up by cancellations or conversions, not cancelling journals in order to free up the money. This may look like hair-splitting, but the difference is neither small nor subtle. It's roughly the difference between having great expectations and planning to kill your parents.
Here I mean doubts about strategies for addressing the other challenges to OA journals. These are doubts you might have yourself, not the doubts of others.
You're keenly aware of the challenges facing OA journals and deal with them every day. The strategies for doing so seem to fall into two large families.
The first is to cultivate your own garden. Prove that OA journals can publish good work by publishing good work. Prove that OA journals can survive by surviving. Prove that OA journals can operate with rigor and integrity by operating with rigor and integrity. Silence doubts and disarm criticism by succeeding.
The second is activism. Engage in the debate and converse with doubters. Answer objections and misunderstandings. Share your data and experience, offer direct assistance to buddy-journals, and work for more fee-paying funds, better metrics, and redirection.
If you can't do both, then choose gardening. If you have to withdraw from public discussion and keep your head down in order to make your journal succeed, then do that. We need your success. But if you don't have to withdraw from public discussion in order to make your journal succeed, then I hope you'll take part in it. When the TA publishing lobby makes its blanket assertions that threats to TA journals are threats to peer review itself, that journal publishers require exclusive rights, or that everyone who needs access already has access, your response as a journal publisher or journal editor will have more authority than the same responses from other players. Journal insiders have a unique opportunity in this debate and should seize it more often.
For example, see the editors of The Lancet disavowing the tactics used by the Association of American Publishers in opposing the NIH policy, October 2004.
Or see Cambridge University Press, Cold Spring Harbor Laboratory Press, Columbia University Press, MIT Press, Nature Publishing Group, Oxford University Press, Pennsylvania State University Press, Rockefeller University Press, and the University of Chicago Press disavowing or distancing themselves from the deceptive PRISM campaign, October 2007.
Or see Mike Rossner and the Rockefeller University Press disavowing the American Association of University Presses support for the Conyers bill, September 2008.
But I don't want to draw too sharp a distinction between gardening and activism.
Even if you want to focus on cultivating your own garden, you will recognize that misunderstandings about OA will limit your submissions and that the failures of other OA journals (ethical and financial) will harm your reputation. Helping to spread the truth about OA journals as a class will help your OA journals individually. Debunking myths about OA journals will help you recruit submissions and gain the support of promotion and tenure committees. Even the narrow goal to publish good work and survive might require some activism.
Conversely, reputable OA journals can lift the reputations of other OA journals. If you publish good work and survive, your success will directly support other OA journals and advance the debate. Examples are more persuasive than arguments. The most harmful objections to OA journals allege that something intrinsic to OA limits their quality, their integrity, or their financial viability. When you succeed, you prove that is false.
* Postscript. This is an expanded, full-prose version of my keynote address last month at the 1st Conference on Open Access Scholarly Publishing (Lund, September 14-16, 2009).