BIBLIOMETRICS AND OPEN ACCESS: FIGHTING FOR COMMON SENSE

My topic today is the corporate hold on academic research on two different but closely interrelated fronts: open access and bibliometrics. Open access policies are very simple to understand: the publications generated by research funded with public money should be available for free to anyone interested. This is, simply, not happening. Bibliometrics used to be a system designed to help university librarians decide how to invest their resources, meagre or large, in the best journals available; about ten years ago, however, it became an Orwellian way of measuring what cannot be measured: scholarly reputation and impact.

Back in 2010 I attended a one-day workshop, organized by the Catalan Agència per a la Qualitat del Sistema Universitari de Catalunya (AQU), to debate the best way to implement the, at the time, rather new bibliometric approach to research. I was on the side of the Catalan researchers who complained that if you work on a tiny corner of the world of knowledge (and in a minority language) you can hardly expect your research to have worldwide impact. Your specialized journals will always be on the C and D lists, even for your own local Catalan universities. So why measure not only personal production but also whole areas of research by pitting them against each other? Who has the right to say that a journal in English about Milton is more relevant than one in Catalan about Pedrolo? Why should an article published in the former be automatically regarded as superior to one published in the latter?

The other main concern expressed had to do with how bibliometrics negatively affect new publications: scholars have quickly learned that, since newly-born journals take time to consolidate, it is preferable to try to publish in older, fully consolidated B- or A-list journals–the only ones that really count for assessment, though it may take years to publish in them even once your article is accepted. I believed all this was plain common sense but was totally flabbergasted to hear the line defended by some of the Catalan colleagues present at the AQU workshop, who were truly convinced that where you publish, and not what you publish, is what matters. Since then I have learned to do as required by my employers, and so I made sure that my last research assessment exercise included at least one article in an A-listed journal. At the same time, however, I am flooding the digital repository of my university with plenty of academic work which I am self-publishing, following my own version of open access.

I say my own version because what is usually meant by open access is not free self-publication, whether peer-reviewed (which can easily be done) or not, but the online liberation of work previously offered through an academic journal (or, occasionally, a collective book). That is to say: even though most universities have set up digital repositories to guarantee their researchers easy access to a platform where they can publish their work (beyond Academia.edu or ResearchGate, which are private), this has had no major impact because our CVs are measured, more rigidly than ever, on the basis of journal bibliometrics. If, to give an imaginary example, I publish an article in an A-ranking journal available by subscription that gets read by 30 people, but the same article is downloaded 300 times from the DDD, my university’s digital repository, which has the higher impact? You might think it’s the DDD but, no, it’s the journal publication–officially, digital repositories contribute zero to academic CVs. I am not speaking here of peer reviewing vs. self-publication; I’m speaking only about access, which is supposedly the basis of impact.

Open access, in short, cannot function unless all journals decide to act like repositories and offer their publications for free online. Many, of course, are doing so, and some even use open peer review besides, which means that you can leave comments either as a plain reader or as a formal (not anonymous) reviewer. In contrast, most of the A-listed journals (highly ranked according to bibliometrics, not necessarily scholarly consensus) tend to be available only by subscription, which means that universities are spending most of their library budgets on publications that actually depend on researchers giving their work away for free. As we all know, though we get no payment for our articles, the main academic journal publishers do good business by charging money both for each article independently and for subscriptions, some truly expensive–I mean up to tens of thousands of dollars for one journal (in the sciences).

A recent article in The Guardian complains that the European Union, in charge of guaranteeing the growth of open access policies, has hired academic giant Elsevier to check its progress. As the author, Jon Tennant, protests, “That’s like having McDonald’s monitor the eating habits of a nation and then using that to guide policy decisions” (https://www.theguardian.com/science/political-science/2018/jun/29/elsevier-are-corrupting-open-science-in-europe). Elsevier, naturally, very much disliked the critique. See in Tennant’s own blog the letter that Elsevier sent him defending their appointment, and his counterarguments (http://fossilsandshit.com/elsevier-open-science-monitor-response/).

Business is business, and corporations will do their best to go on accruing power over us academics as well as they can–just as Amazon, Apple and company do. What you should be wondering at this point is why this state of affairs is tolerated. If most of us researchers agree that open access is the way to go, why is it so hard to implement? Well, one answer is that open access is not free in the sense that, if you want to set up a respectable online journal, you still need extensive resources: a platform funded by your university, the know-how to operate it as editor (a time-consuming task), and lots of stamina to send regular CFPs and manage peer reviewers, that unruly lot. It seems easier to let others do the job or, to be more specific, to let them give more resonance to the job you’re willing to do anyway.

Also, and this is the main point, for whatever reasons the political authorities, from the European Union down to each regional government, including university admin teams, are upholding an assessment system that benefits the major academic publishers. We are assessed on the basis of their impact and reputation, not of ours, and, one way or another, we have ended up working not quite for the good of knowledge but mainly for the benefit of our publishers. Let me give you an example of the kind of thing that is beginning to scare me very much: I was planning to reuse a chapter that I wrote for a collective volume issued by a very well-known academic publisher in a monograph for another publishing house; I found out, however, that I was expected to pay $1,000 to get their permission. Needless to say, I’m writing a completely new chapter for the monograph. A doubt now corroding me is whether I can at least reuse the arguments without repeating my own text verbatim, for I’m not even sure of that. What exactly do we give away with copyright?

Concerned specifically about the Journal Impact Factor (JIF), the American Society for Cell Biology (ASCB) drafted in 2012 a document known as the “San Francisco Declaration on Research Assessment” (https://sfdora.org/read/). The JIF, a product of Thomson Reuters now published by Clarivate Analytics, is being used to measure academic CVs at all levels and well beyond the USA. Incidentally: Clarivate Analytics is owned by the Onex Corporation (a Canada-based private equity fund) and by Baring Private Equity Asia. Draw your own conclusions. Anyway, the San Francisco Declaration couldn’t be clearer: its general recommendation is “Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions”. What should you use, then? “Assessments based on scientific content rather than publication metrics”. As an alternative, altmetrics are proposed (http://altmetrics.org/tools/). For the British view of the matter, see James Wilsdon’s article which, among other matters, announces the establishment of the UK Forum for Responsible Research Metrics (https://www.theguardian.com/science/political-science/2018/jul/10/has-the-tide-turned-towards-responsible-metrics-in-research).

I cannot find (sorry) another article in which a scientist working in an emerging field (possibly big data) explained that, although researchers had organized themselves competently through open-access networks and publications, a major publisher had announced the launch of a journal specializing in their little patch of the academic quilt. This researcher was positively furious at what he regarded as an unwanted interference. I seem to recall that a number of the leading researchers in his field had signed a manifesto vowing not to publish in that corporate-owned journal, but the question, obviously, is whether they will be able to stick to their resolve or risk being pushed out of the fierce competition for funding and jobs by those who do publish in the new journal.

Let me explain something I am doing myself. Since early 2017 I have been co-editor of the online journal Hélice (www.revistahelice.com), which specializes in science fiction. My co-editors, Mariano Martín and Mikel Peregrina, and I intended to transform the journal, originally founded by the Asociación Xatafi in 2007, into a proper academic publication. Hélice certainly is an academic publication, in the sense that we three are scholars and we publish scholarly work in it; what I mean is that we intended to introduce peer reviewing and bibliometrics into Hélice, and to publish through my university’s online platform. We have decided, however, to postpone that decision indefinitely, for several reasons: one is that we enjoy being editors in the classic style of many SF-related publications; another is that we are publishing work by rookie undergrad researchers not necessarily interested in an academic career; also, we simply don’t have sufficient time to meet the demands of a university-endorsed journal. This may change in the future, but right now we find ourselves interested in filling the gap between fandom and academia, and in doing that beyond what does or does not count academically speaking. And we need not worry about any major academic publisher wanting to steal the limelight from us. Perhaps we’re being quixotic but, then, why not?

I am calling, I suppose, on everyone to change the way we make research available. Establish your own online resources through blogs and websites; question your university’s investment in expensive subscriptions rather than full-time jobs; cite colleagues’ work because you find it relevant, not because it is published in A-list journals; use peer reviewing wisely but also welcome other editorial approaches; and don’t let yourself be consumed by your CV, that hungry monster. I know that I’m doing my most important academic work here in this blog and yet, you see?, I have never counted who is reading it or whether it has any impact at all. It adds, by the way, zero points to my CV.

Some might think that doing academic work which is not officially measurable is, at any level, a waste of time but, believe me, though I feel enormous satisfaction when I see my academic work in relevant publications, I also feel much happiness when working outside the rather inflexible lines of current academia. That the words ‘inflexible’ and ‘academia’ may appear in the same sentence gives me, and should give you, much cause for concern. Let’s vindicate common sense instead and re-imagine how we approach reputation.

I publish a new post every Tuesday (for updates follow @SaraMartinUAB). Comments are very welcome! Download the yearly volumes from: http://ddd.uab.cat/record/116328. My web: http://gent.uab.cat/saramartinalegre/
