June 24th, 2016

This is a time-capsule post, of the kind that gets written with the author expecting to check back in five years' time to see what really happened.

Like many people all over the world–as shown by the instantaneous collapse of the stock market–I expected Britons to have voted in favour of staying in the European Union. This is a people known (at least until today) for their pragmatism and common sense. Clearly, though, many things have gone the wrong way and we're witnessing today, with a sour taste in our mouths, the success of the Brexit campaign. 24 June 2016 has been hailed by The Sun and by UKIP leader Nigel Farage as 'independence day'… You know that something is bad when, in addition to this absurdity, Donald Trump claims that this is good (for the time capsule: Trump, who failed to be elected President of the United States by a landslide in 2016). 'May you live in interesting times', as the Chinese curse goes.

I have titled this post 'The incredibly shrinking United Kingdom' because I see the UK even further diminished in its global position by this strange manoeuvre. Let me get a couple of ideas out of the way before I continue. To begin with, someone should change the rules of referendums, for in the end the percentage difference between the two options has been quite small: 52% to 48%. This is not a clear victory but a divided country. Now, if Brexit goes horribly wrong and throws the UK into the waste-basket of History (a phrase often used in Catalonia in the last year), can the overruled 48% demand any responsibility from the other 52%? Obviously not. This is why this kind of potentially very dangerous decision should be made by a much wider margin, at least 65/35. You need to be sure that your victory (or defeat) is final, and this is not at all the case today. Second point: Brexit is, no doubt about it, a clear sign of the European Union's failure to constitute itself as anything more than a commercial union. I cannot imagine Donald Trump celebrating the secession of, say, California; the fact that he's toasting today to Brexit means that the EU is not at all a union in the way the United States is. I would not like, as a Catalan, to be left outside the EU, which is one of my main doubts when considering a possible independent Catalonia. At the same time, the EU has utterly and completely failed to inspire in us, Europeans, the feeling that this is what we are, in the same way Californians feel that they are American.

I'll add a third point: that the UK leaves the EU is particularly poignant because, of course, it joined what was then the European Economic Community–a name bearing all the seeds of trouble to come–back in 1973, not long after the Community's birth in 1957. British disaffection for Europe is an extremely complex issue, which many others have analyzed with better, finer tools. Nonetheless, this disaffection has its roots in the perception that the UK is contributing more than it is getting out of the EU; European solidarity is based, after all, on the idea that the richer states must help the poorer ones, which is why Brexit will certainly be a terrible blow for us, in Spain. Now, this suggests that Germany should be happier than any other nation to abandon the EU, but the Germans do see that the union is needed if only because, let's be clear about this, cheap labour is to be found in its southern and eastern areas. The Britons are right now too blinded by an oddly euphoric chauvinism that won't let them see that European migration is not the problem but the solution for their economy. I'm aware that much of Brexit has to do with Britain's wish to decide for itself which migrants to admit to its shores, but the vision of an all-British workforce is not only treacherous but also downright silly.

If we accept the argument that staying in the EU brings more economic benefits than staying out of it–and I think this is a powerful argument, because the previous arrangement of nations in Europe led to WWI and WWII–then we need to wonder what is being pursued with Brexit. It is not impossible to think of a future scenario in which Scotland will be an independent nation and a member of the EU, and in which Northern Ireland might be unified with Ireland for the same reason (or Gibraltar might decide to return to Spain). There is, then, a very real danger of national dismemberment, with even England/Wales being sharply split between pro-EU London and the rest. How the British (English?) economy can thrive even supposing the UK's split is prevented is beyond me. Norway is doing fine on its own without being an EU member (and so is Switzerland). However, part of their success has to do with their being very realistic about which role in the world they want to play: a marginal one (at least politically; I would not say the same about Switzerland and world finance). Oddly, Brexit supporters dream of a Britain which is not only free from EU restrictions (so they claim) but also a powerful nation in the world. Like in Victorian times.

This is the way in which Britain is shrinking: it has lost track of its dwindling importance both within Europe and in the world. In Spain we've gone through that: we used to be the biggest Empire in the world, remember? Being rich didn't suit our (or rather, the Castilian) temper very well, which is why we went downhill all the way into bankruptcy and even invasion by Napoleon. A series of pro-independence uprisings eroded the Empire little by little until it was finished off in 1898 by the United States pushing us out of Cuba. The Civil War (1936-9) happened when the Republic was getting Spain used to the idea that we should be a modern European country rather than an ex-Empire. Yet the band of ultra right-wing nostalgics headed by Franco fought its way into what the Brexit campaigners now want: autarchy (or total self-rule). I do know that the parallelism between backward, isolated Spain, which only joined the EU exactly 30 years ago, and Britain does not hold. Yet the lesson we learned after Franco is that imperial glory will not feed people; we very humbly accepted the crumbs at the table of the rich EU, briefly believing before the 2008 crisis that we were finally one of the diners. The UK went through its worst economic crisis back in the 1970s, and the 52% who have voted 'out' today seem to feel confident enough that, no matter what, they will stand on their feet and do fantastically well in terms of economics, politics and general prestige. As an English Studies specialist I can only call this position neo-Victorian.

This is, naturally, a completely false position to be in. Whereas those who want to stay in the EU have given a long list of reasons why leaving it would be negative, the Brexit campaigners have given no truly valid reason to leave the EU, other than wounded pride. Most likely, they imagined a Europe in which the UK would be the leading country and simply cannot accept that the leader is Germany, the hated enemy of the past. Somehow, they have managed to convince themselves that the United States will play a crucial role in this post-Brexit British Renaissance, even though President Obama warned Britons against Brexit. As a Catalan I feel that the path taken is even more uncertain than independence. Supposing the Scots voted to leave the UK, they would be doing so in the hope of doing much better on their own, including the possibility of joining the EU. But the UK has not voted for independence, no matter what UKIP says, but for isolation, which is a completely different matter. Britain was isolated in Victorian times, in the sense that it did not belong to any other international association, and was extremely powerful. Now they want to recover that same solitary status. But, then, what's next? Leaving the United Nations?

I'm flabbergasted–that's the word I was looking for. I simply don't understand how a civilized nation can make this very obvious (right-wing) mistake in the 21st century. It must be the influence of so much SF, but I always imagined the world converging eventually into a world-wide federation, Star-Trek style. What I wonder today is not why the Britons (well, 52% of the 75% who voted, that is to say, some 39% of Britons over 18) want to leave the EU but where they think they are going.

Among the many questions about the future of the EU I heard this morning on TV, here's the one closest to home for me, as an English Studies academic: will English still be an official European language? That's a good one… Everyone, start learning French and German as fast as you can…

Comments are very welcome! (Thanks!) Just remember that I check them for spam; it might take a few days for yours to be available. Follow the blog updates on Twitter: @SaraMartinUAB. You may download the yearly volumes from my personal web, where you can also see my publications and activities.


June 21st, 2016

It's June and these days we're busy marking exams. We're also busy wondering why we give our students exams and what use they are (the exams, not the students!). What use assessment is, in fact. I have just entered the final marks for the course I have taught this semester and they are exactly the same marks I would have awarded each student one week after meeting them. Funnily enough, their marks did not depend on just a final exam but on four different items, with their corresponding percentages, etc., all of it requiring Excel to work out… I don't know what this says: that assessment only validates subjective impressions, that assessment rates not the exercises but the person, that I am such an experienced teacher that I know at first sight how students will perform (ehem!), that I could have saved myself a lot of hard work… Take your pick.

This was an elective course, and for this type of course I always prefer assessment by paper rather than by exam. This time, however, I decided to use exams, apart from a short essay written at home and a class presentation. I hated exams as a student and do not particularly like them as a teacher. One of my colleagues claims that we should never ask students to write papers, for they plagiarize all the time–which is an exaggeration… though also a constant fact in teachers' lives. My position is quite the opposite: I do not see any equivalent situation in real life in which people would have to write a piece of academic work in a tightly limited time. I associate this, rather, with journalism and newspapers' daily deadlines. Otherwise, why would anyone produce a piece on Wordsworth's poetry, to name the first case that comes to my mind, in a very short time? It's simply ridiculous. I'm rather of the persuasion, then, that exams only measure people's ability to take exams. Or their negative ability–I always performed well but only after bouts of nausea and vomiting that did nothing for my faith in the use of exams. I must have passed my last one in my doctoral days (I'm not counting vivas or oral examinations for tenure) and that was surely a happy day.

Accordingly, I play all kinds of tricks, whenever I can, to deconstruct exams. I believe that good academic work requires a reasonable time of preparation (not just of cramming) and I have been known to give my students the exam questions in advance. I don't care very much for failing students, and I find that the students who fail in my courses usually trip themselves up by not handing in exercises or not taking the exams. If I get the chance, however, to help my students do well enough for me to pass them, I'm happy. This is not the same as saying that no matter how they perform I'll pass them, but rather that I don't want anyone vomiting before taking one of my exams. I just want them to have studied and, above all, to have planned their exam answers at home. I found out, however, that students given the exam questions in advance got quite nervous for a reason I failed to anticipate: if you know the questions in advance then a good deal of the justification for failing vanishes. Who would have thought that Prof. Martín would ask such a devious question? That seems to be the kind of thinking that comforts students who do poorly. Now, if Prof. Martín puts her questions in your hands, thus eliminating the surprise factor (not necessarily the deviousness), that's another matter. Your inability to plan the answers is highlighted, which is, let's be honest, much more embarrassing than simply being unable to answer a question you could never have anticipated.

You might argue that surprise is the whole point of exams, and that its target is that collective groan you can hear when students find the questions too hard. This just happens to be a sound I do not enjoy (my exams, in contrast, seem to be the source of much sighing…). Anyway, what happened when I gave students an exam to take home, consider and plan is that a few still failed. I certainly felt less responsible for their failing, if you know what I mean. What I noticed was that the effort involved in the actual writing in class was similar: the same stream of sighs, the same flushed faces and always the lack of time (some students would run out of time even if given five hours instead of two, it seems). The pressure had eased, the quality had increased, hopefully there had been no vomiting, but, then, it was still an exam written by Prof. Martín.

For my latest course, I have tried another tactic: having the students write their own exam. In hindsight, I realize that no exam questions could ever match the deviousness of this proposal, but let me say that I was not acting wickedly but in good faith. The group was small, only 15 students, and I explained that they should write a two-question exam using our habitual Department format: select a passage from the book we've studied (maximum 10 lines) and ask a question that can be developed into a short argumentative essay (maximum 500 words), referring both to the passage and to the book in question. It was clear to me that students would be very uncomfortable if I didn't check their questions, so I gave myself the task of validating each exam a few days before the corresponding exam date. What I found is that students wrote, on the whole, perfectly valid questions, just badly phrased. Some of the questions were simply too big in scope for a short essay but could mostly be re-used; others came in two or three versions (students seemed insecure about which one to use). None of the questions was insultingly easy to answer, and here's where I noticed my own deviousness.

Imagine my students telling their peers in other courses, 'Sara has allowed us to write our own exam questions'. The answer from said peers would be, 'My!, you're lucky, now you can't fail!'. Now, most of my students probably replied at this point what one of them told me: 'No way! This is the hardest thing I've done in my life!' Why? Because they quickly realized that my proposal to let them write their own exams would not result in easier exams–no way would I validate shallow questions. Therefore, they had a twofold task: produce the kind of exam I would write myself and do it so that they could secure a pass. Lacking feedback from them (I have asked, and I'm waiting), I can only surmise that they told themselves: 'OK, so I need to make things as easy as possible for myself while writing the exam as if I were a teacher'–the demanding Sara Martín, in particular. Sure–only I had not gone that far in my own thinking when I proposed the experiment.

Exam one went well: nobody failed, though I believe that nobody performed at a higher level than if I had written the exam myself, either. My guess (I need the feedback) is that students were more relaxed and confident about what they were doing, having got the annoying surprise element out of the equation. My colleagues say that written exams have the added bonus of offering exact information about each student's actual command of English. Maybe. By giving students the questions, or asking them to write their own, I also expect them to work on their English at home and produce far more polished exams (much easier to correct for me, too!). I don't know how they do this: if it were up to me, I would write the answers at home (using the dictionary, etc., etc.), try to memorize as much as I could and then write them in class. It might well be, however, that they memorize outlines; I don't know. I'm sure, though, that many language doubts and errors can be ironed out at home. This is fine by me, for in this way they have had to learn some English language in order to prepare the exam, in addition to the English Literature.

For the second exam I asked students to produce questions combining a passage from the primary source with one from a secondary source. I'm not sure whether this was my fault or not, but it seems that my instructions were a bit ambiguous about which secondary source to use. I had asked students to read one article for each novel and I expected they would use the ones I had selected for them. However, some students just chose other articles, which was a bit complicated to negotiate. I validated their exams eventually. What I found, and this was both silly and funny, was that the most complicated thing to do was validating the exams in which my own article was quoted. I sensed a kind of mutual embarrassment: students seemed to feel a bit awkward writing 'As Martín claims', which was not the case when they wrote 'As Vint claims' or 'As Frelik claims', for they didn't know these academics personally. On my side, I found myself disagreeing with how students read my own article, even though their questions were perfectly valid. It felt very, very strange to be 'Martín' rather than 'Sara', as my students call me in our informal Department.

In both exams the contents reflected very accurately what was being discussed in class. The questions referred, with no significant exceptions, to the issues we had discussed together, though the passages chosen were not necessarily the ones I had selected for class discussion. The exams were, in short, more personal and less 'parasitical' on class discussion than I expected (this was my main fear). Some exams, particularly in the second series, were actually quite sophisticated. When marking them, I often marvelled that students who knew nothing about SF a few months back were confidently discussing artificial intelligence, genetic manipulation or post-humanism. Happy, then, as far as I'm concerned.

I failed, however, in just one thing. I decided to use exams because I wanted students to read the five novels in the course–if they wrote a paper, they might read just one or two. What I failed to notice is that the second exam, covering three novels, should have been much longer, perhaps two and a half hours rather than one and a half. And we simply don't have that kind of time. In the old days exams had a separate schedule, apart from teaching time. One of my own teachers was famous for giving us exams that could run for four hours or more. Since 2009, however, exams have been part of our teaching time, which means that the more exams you introduce the less time you have for actual teaching; it also means that they need to fit our 90-minute slots. Either I introduced a third exam or I let students choose two of the three novels for the second exam, which is what I finally did.

Was the experiment worth carrying out, then? Certainly. I think that the class size was ideal, as validating the questions–not an easy task!–could be done in a reasonably short time. Also, these were fourth-year students. I don't see myself repeating the experiment with second-year students in my Victorian Literature course because a) they would panic, and b) at about 50 students, the group is too big and validating the exams would consume too much time and energy. I believe that my experiment in tailor-made examination shows that asking everyone the same question is a bit of an absurdity, for each student is motivated by different things in the same book. What we do when we ask all students in our class the same question, then, is just something convenient. We tend to ignore the fact that, beyond what each student has studied in preparation for the exam, some will automatically do well (or badly) because of the nature of the particular questions. I'm talking about Literature, not mathematics, of course…

Now, I would be really glad to get feedback from my students…



June 11th, 2016

I assumed that there would already be a handful of academic articles on Gillian Flynn's 2012 best-selling novel Gone Girl, adapted for the screen by David Fincher in 2014 from a script by the author herself. Not at all. My university's meta-searcher, Trobador, has returned 712 results, only 2 of which appear in the MLA database–both turned out to be journalistic pieces (a third piece is a biography of Flynn). Cultural Studies has always been accused of being merely academic journalism, too concerned with the contemporary–with the transient and the banal in the worst cases. I'm sure that pieces about the Spice Girls now read as terribly dated and as obscure as the analysis of a 16th-century villanelle, yet I do believe that academics should react promptly to their surroundings, and I'm quite surprised that Gone Girl has escaped our collective radar. Perhaps it's just too early and a flood of articles on Flynn's novel is now going through the sluggish process of peer reviewing… After all, I decided to read this novel (not quite my cup of tea…) only after listening to my UB colleague Cristina Alsina deliver an excellent paper during a recent seminar on crime fiction and the family, so there you are.

June 2016 is, of course, too late to review a book published four years ago and this is not what I intend to do here: I wish, rather, to consider why the trope of love and marriage is receiving such degrading treatment in contemporary fiction. For this is the case in Gone Girl to an extent that is simply painful to read.

One of the best books I have read on romance is David Shumway's Modern Love: Romance, Intimacy, and the Marriage Crisis (2003), which I discovered only after the author finished a three-month stay in my Department, during which I found it impossible to connect with him at all… Shumway explains with great lucidity that what is destroying long relationships–at least in US society–is women's constant demand for total intimacy, inspired by the traditional romantic discourse originating in fiction. This may sound misogynistic in my simplified summary but it is not at all: Shumway is also adamant about American men's inability to respond to that demand for intimacy even at the most basic level.

The main contribution, I believe, that Gone Girl makes to the fictional representation of love and marriage is turning intimacy into the most brutal form of hell one can imagine, a life sentence. The romantic ideology that women have incorporated into their relationships with men, Flynn argues, is fundamentally psychopathic, as her heroine Amy proves; men, like Amy’s average husband Nick, try desperately to avoid needy women for, understandably, the stress of responding to a constant demand to be loved and to be fully understood is impossible to sustain. The joke in Gone Girl, however, is that Nick tries to escape his wife’s extreme ideas of what constitutes a successful marriage only to find that his young mistress is as needy as Amy (though, of course, sexually more pliant). Ironically, Flynn does give Nick a perfect relationship with a woman, his twin Margo, perhaps suggesting that being siblings (free from “twincest”) is preferable for men and women to being married.

If you pare it down to a basic plot line, Gone Girl deals with the extreme measures to which a wife may resort in order to keep her cheating husband. Divorce is simply impossible because neither Nick nor Amy can go back to being the autonomous persons they were before they met (if that's what they were). Second lesson about modern love, then: marriage annuls the capacity to be yourself, as many recently divorced people find out. Gone Girl is a horrifying novel, then, not just for how far Amy takes her radically sick romantic ideology (and for how Nick eventually responds to it), but for what it tells us about marriage, particularly in American life and fiction. The acknowledgements turn Gone Girl, besides, into an even more bizarre product, if that were possible, since Flynn thanks her husband with total enthusiasm for his support (and for having married her!). If I were her husband I would, however, worry: Flynn's grim novel was intended, as the author declared, to make couples consider each other with suspicion and wonder 'who are you?' Now: how does this connect with the author's own happy marriage, I wonder?

Why, then, the insistence on the marriage plot if all the couples we know are actually unhappy? There seems to be a kind of circularity at work: the romantic idea of the happy marriage for life was constructed by the first novels and now the novel is deconstructing it. This, of course, is based on a misreading of the original fictional romance. Take Pride and Prejudice (1813) and you will see that Elizabeth and Darcy are surrounded by unhappily married couples, beginning with her parents, the Bennets. Actually, Austen plays a peculiar conjuring trick by placing the disagreements that eventually surface in most marriages at the beginning of the relationship between her heroine and hero. The fantasy of the happy-ever-after ending consists of supposing that in some magic way Elizabeth will avoid the pitfalls of her parents' union because she's marrying a man who shares her own idea of intimacy. In the 2005 film version of Pride and Prejudice, with Keira Knightley as Elizabeth, English actress Rosamund Pike plays her demure elder sister Jane, a spider-woman patiently spinning her web to catch rich bachelor Charles Bingley (he, and not Darcy, is the "single man in possession of a good fortune" and "in want of a wife"). Pike also plays the patrician Amy Dunne in Fincher's adaptation, which links Austen and Flynn very conveniently for my argument here. Jane and Amy, after all, are not so different in wanting a nice husband, but the intervening 200 years have turned this aspiration into something aberrant.

The pathology of the good marriage is extended in Gone Girl to Amy's parents, a couple of soul mates, as Amy describes them, who have cannibalized their daughter's childhood as material for a successful series of children's books, Amazing Amy (later, Amy calls her mendacious memoirs Amazing). Rand and Maryelizabeth Elliot appear to be truly committed to each other but also quite phony, no doubt because Amy, an only child, hates them for exploiting her economically and emotionally. If, then, the happy, long-lasting marriage is, as Amy claims, damaging for the children because it sets high romantic standards impossible to fulfil, then what should be the target for couples? Since Flynn's own happy private life contrasts so sharply with that of her heroine Amy, or so she claims, perhaps what we are witnessing is not so much the degradation of the marriage ideal, as it exists in actual social practice, but the inability of current (American?) fiction to narrate happiness. Even worse, rather than plain unhappiness, women novelists are offering a monstrously false form of happiness, twisted beyond all recognition. On the other side of the Atlantic, incidentally, Flynn's British peer E.L. James has refashioned happiness as sado-masochism in her Grey series, perhaps concurring more than we imagine with her American colleague.

Initially, I was going to use Michael Kimmel’s controversial Guyland: The Perilous World Where Boys Become Men (2009) to comment on the current degradation of the romantic discourse. The need for mutual seduction, Kimmel hints, is vanishing, replaced by a predatory view of sex common among American young persons aged 18 to 25. I realized, however, that this is not yet the generation that Flynn is addressing but rather that of the thirtysomethings in need of abandoning guyland (and girlland). Flynn is suggesting in Gone Girl that marriage has become a fiction which both members of the couple embrace when the hedonistic lifestyle of the twentysomethings runs its course and individuals start feeling a vague need to settle down. The biggest gap in her novel is not connected, ultimately, with Amy’s improbable masterminding of her elaborate final trap for Nick but with how and why Amy and Nick fall into the marriage trap of their own volition.

Amy offers quite a good diagnosis regarding Nick’s self-deception: he falls for the ‘cool girl’ which she so proficiently impersonates. Yet since plain Nick is so easy to read for Amy, I wonder why she targets him as the object of the obsessive marriage plot which she builds for both. In Flynn’s reading, indeed, marriage is a piece of fiction which we women write throughout our lives with men in secondary roles and if Amy is exceptional this is because she writes two versions: the one her diary captures, intended paradoxically for public consumption, and the private one she forces on Nick. Since Flynn makes Amy so exceptionally abusive, readers–like myself–who resist reading women’s fiction about violent women may miss her main point: the idea that men are also addicted to the trashy marriage plot that women churn out (Fincher’s film, in contrast, simply places Amy in the long line of dangerous blondes, a classic femme fatale, though married).

A reader called Gone Girl the story of "a jerk and a bitch" and even though Mary Elizabeth Braddon's sensation masterpiece Lady Audley's Secret (1862) is there to remind us that mean, scheming wives were not invented by Flynn, it is still shocking to see Amy and Nick presented as an extreme instance of modern love. Try, if you can, to call Pride and Prejudice the story of "a jerk and a bitch" and you will understand at once what I mean by the contemporary degradation of the romantic discourse on love and marriage. Or, to put it the other way round: while I very much dislike Austen's novel as a dangerous fantasy about men's willing submission to women through love, I am appalled by Flynn's nasty little tale because, instead of denying Austen's daydream, it claims that it works–on the basis of the most atrocious mutual dependence you may imagine. Never mind that this is not love at all as it should be felt; it is love as countless romantic novels have told us it should feel.

Not the girl but love is gone, burnt out by the cynicism dictating that sentimentalism is tacky. Amy never deceives herself into believing that she and Nick are like Elizabeth and Darcy, yet she builds her own repulsive marriage plot because she does want that fiction to be her life (and Nick's). This is both cynical and desperate, a decision born of her realization that modern love is hollow at the core and intimacy two-edged, for there are things you might not want to know about your spouse. Is Gone Girl, then, a good novel that will stand the test of time like Pride and Prejudice? Not at all, but it is fascinating as a symptom of the malaise that is rotting the marriage plot in fiction and in life from the inside (no wonder that Flynn's background is also the economic decadence of Amy and Nick's America).

This malaise is not, however, as Flynn might believe, limited to her pathetic couple: her novel partakes of the terrifyingly decadent imagination surfacing all over American fiction (think The Hunger Games). What made me gasp for fresh air when I closed the book was not Amy's wickedness but the sheer ugliness of Flynn's fabulation, the hours spent plotting the hideous details of how Amy plots her own life. Bret Easton Ellis did the same, arguably with even more bravado, for contemporary American men 25 years ago in American Psycho (1991). It has taken American fiction, then, a quarter of a century to present us with the female version. To many people this might read as a healthy exercise in female sincerity, even as a feminist step forward. For me, however, the decision to focus on a psychopathic woman as an instance of the psychopathologies of a whole society is a setback, for I still believe that for a society to progress its fiction needs to provide it with role models. The kind of urge that Amy, the anti-role model, satisfies is a luxury that women cannot afford in our patriarchal times–I know the argument is not new, but think of Hillary Clinton and then think of how little she needs Amy Dunne to exist.

I’ll end, then, by bemoaning the way love has been dealt with in fiction: it deserves better than the implausible plotting that both Jane Austen and Gillian Flynn give it, for drastically different reasons. I wish both Elizabeth Bennet and Amy Dunne were gone girls, but I’m sorry to see that only love is gone.

Comments are very welcome! (Thanks!) Just remember that I check them for spam; it might take a few days for yours to be available. Follow on Twitter the blog updates: @SaraMartinUAB. You may download the yearly volumes from See my publications and activities on my personal web


June 6th, 2016

I published a post back on 26 April in which I quoted from an interview with American neurologist Alice Weaver Flaherty, author of the book The Midnight Disease (2004), an essay on neurology and literary creativity. I have now read her volume and, although I do not wish to offer a formal review here, I certainly want to consider (or re-consider) a few ideas based on Flaherty’s claims. I do not hesitate to recommend The Midnight Disease, not so much for the soundness of its arguments as for their many flaws, which offer plenty of food for thought.

By the way, Flaherty finds it necessary to justify why she has written a book despite being a scientist, since her colleagues communicate with each other by publishing papers. “A melancholy fact,” she writes, “is that in the sciences, the book has become as marginal a literary form as the sestina or the villanelle”. Torn between her impression that academic books will soon disappear under pressure of online paper publication and her need to narrate “an unusual personal experience” (the sad death of her two premature twin boys), Flaherty tells supercilious scientists that “writing this book was something I could not stop myself from doing”.

Flaherty, who has always written, went through a very serious post-partum depression which manifested itself, among other symptoms, through ‘hypergraphia’, “the medical term for an overpowering desire to write”. This, she explains, is habitually due to alterations in particular brain areas and overlaps only partly with ‘graphomania’, or “the desire to be published”. Hypergraphia, she speculates, seems connected with the temporal lobes, the brain areas in charge of facilitating our understanding of meaning. Many hypergraphic patients appear to have suffered temporal lobe epilepsy.

It is important to clarify that hypergraphic writers are dominated by a mania for writing, by an unstoppable drive to scribble, no matter what the results are in terms of quality, for, remember, this is a pathology. The ‘problem’, as noted in my April post, is that this is a condition for which sufferers demand no treatment, as they derive pleasure from writing. If you’re reading this and thinking ‘oh, well, I am certainly not at risk of being labelled hypergraphic’, you should be aware that many of us, readers, appear to be hyperlexic. Do you belong to the “subset of avid readers whose reading has an especially compulsive quality”? Do you need a book to prevent you from reading “the newspaper used to wrap the fish”? There you are: you’re hyperlexic –the proud owner of a brain in thrall to an unruly bunch of print-mad neurons. I can see the t-shirt: ‘Hyperlexia rules!’

Flaherty’s sweeping statement that “A surprising proportion of writers are manic-depressive” is open to all kinds of jokes (‘no wonder they’re depressed seeing the state of the book market’… and so on). Surely, you can see for yourself that a) not ALL writers are manic-depressive (or have epileptic temporal lobe seizures like Dostoevsky), b) not all manic-depressives become published writers and c) if this were the case, creative writing courses should start by plunging their students into deep misery at once. An additional problem that Flaherty simply hints at is whether writer’s block, presented as a mental condition treatable with the right combination of pills and therapy, “may be culturally determined”. The phrase ‘writer’s block’, Flaherty explains, was coined by American psychiatrist Edmund Bergler and although many writers from other nations suffer from block, “there is a paradoxical sense in which suffering from writer’s block is necessary to be an American writer”. Flaherty names Russian-born, hypergraphic (=absurdly prolific) Isaac Asimov as an interesting exception but she seems confused by him; her list of writers “contains few genre writers because of the convention that genre writing isn’t quite writing”. It’s just hypergraphia, you know?

Funnily, although I intended to keep the tone of this post as straight-faced as possible my repressed sneering is surfacing throughout… Perhaps this is because I’m scared that Flaherty is right in her main claim: that the mind has a material basis in the brain; hence, alterations in the brain result in deviations from the average mind. Basically, she speculates that the passion for writing and reading might fall within the gray area of brain alterations that, while not pathological, are uncommon and even exceptional (abnormal?). We write and read with glee because, in short, we have funny temporal lobes that connect in a funny way with our limbic system. She may be making a totally valid point: if Usain Bolt’s body is worth studying for what it says about the abilities of record-breaking athletes, then perhaps Toni Morrison’s talent as a writer stems from the subtle chemistry of her brain. As Flaherty writes, “By scanning people thinking creatively (with the usual caveat that judging creativity is difficult), researchers may soon be able to see which patterns of brain activity underlie creativity”.

Flaherty softens the impact of her chilling scientific claims by stressing that “literature can also help us to understand science, the way it is both driven and sometimes misdirected by metaphors and emotion”. No doubt. Her arguments, however, are distressing (I can’t find another word). A point Flaherty stresses is that medication is now advanced enough that bereaved people, for instance, need not go through the intense pain of grief: they can simply take the corresponding helpful little pill. She understands why many grieving individuals reject this chemical aid, believing that lessening the intensity of grief amounts to betraying their lost beloved. To be clear about this: Flaherty claims that the more we know about our brain the better our chances will be to control emotion and mood. Like many others, I resist this idea because taking pills is for me too closely connected with taking illegal substances but, then, most people get by in this way (read Roberto Saviano’s analysis of cocaine consumption in Zero, zero, zero…). Yet, going through a very black mood this week I caught myself thinking, ‘oh, boy, my temporal lobe is misbehaving, I wish I had a little blue pill’ to go on (happily) marking exams.

How does this connect with literary creativity? Patricia Highsmith once said that writers’ favourite drug is coffee and, of course, there is a long list of literary and non-literary authors controlled by their chosen or unchosen addiction. In Flaherty’s book writers are a bundle of brain and mind irregularities, as you can see, which ultimately raises the question of whether we prefer, as a society, happy individuals or unhappy authors. That’s the only conclusion I can reach after reading her book since the well-adjusted, happy author seems not to exist in her vision of literary creativity. I wonder whether this is why literary biography always insists on presenting literary genius as practically a pathology (yes, I’ve been reading Claire Tomalin’s biography of bipolar, manic, hypergraphic Dickens). At least this is a pathology we admire.

As I read The Midnight Disease something else bothered me: the future of education. Education works on the principle that all children should start at the same point and be taught a little of everything, regardless of their abilities and preferences. Little by little, each child navigates their way into being an engineer or a star piano player (supply your own worst-case scenario). Primary and secondary education are, thus, a compound effort to teach children a common minimum denominator and to find out which particular abilities each child has. Now imagine a near future in which we will be able to scan the brain of a four-year-old while engaged in creative play and determine how his/her brain conditions his/her mind. This imaginary brain scan would have detected, for instance, my hyperlexia (‘wow, this one is a Literature teacher!’) and my limited ability to imagine space (‘no stage designer, this one’). Flaherty never says that she wants to see this implemented. However, her view that our minds are our brains implicitly suggests that we will be eventually classified in this way, just as we will be soon classified according to our genetic make-up. Pass me the happiness pill…

From an extreme, alternative point of view one might argue that education works poorly precisely because we wrongly insist on the egalitarian approach. A timely brain scan would save the little ones many painful hours of mathematics or of English soon to be forgotten –which sounds tempting– and place the children with the most promising creative abilities on the fast track to… what exactly?? We are already hearing so much cant about the so-called ‘exceptionally gifted’ children that I shudder at what the further exploration of the human brain can do to human minds.

Clearly, neurology can help us to overcome the accidents of life caused by malfunctioning brains (and it’s impressive to learn the myriad odd ways in which brains malfunction). Nonetheless, it may be overstepping its boundaries, like all medicine today, with its suspicious endless pressure to connect good health with joining expensive gyms when you’re young and with taking absurd amounts of prescription drugs as you age. There is, however, a fundamental difference between, say, correcting the ravages of diabetes and forcing literary creativity into a sort of medical freak show.

There are also other dangers: if my students learn that I’m hyperlexic (am I?… show me that brain scan), then they may reject my preaching in favour of non-stop reading on the grounds that they’re not hyperlexic themselves. Or, as the trend seems to be now, they may claim that their massive use of social networks, the internet and videogames has re-wired their brains in ways that my 1960s hyperlexic brain is not equipped to understand.

Pass me the little blue pill…



June 3rd, 2016

[Just one sentence to say that while the activities I have been engaged in this week –exams (both oral and written), yearly doctoral interviews, last minute BA dissertation revisions– are absolutely necessary I hate how they use up the energy needed to write. With no writing (and I realize this is another sentence) it feels as if there is no point to a week, no matter how exhausting it has been… or how useful.]

Today I’m combining two items which have been waiting for attention for a while. One is an article from La Vanguardia and the other a report by the union Comisiones Obreras. I’m here interrupting myself to comment that one thing I learned while interviewing students for their oral exams this week is that students don’t read papers (which I knew) but just use Twitter to check on the day’s trending topics (something I guessed but didn’t know for sure). This means that, among thousands of other relevant items of information, they may have missed the two I’ll comment on. One, by the way, I found browsing the papers as I do at lunch break (I no longer read print papers… that’s for retired people, as a student said); the other reached me via email, a medium that students also find obsolete and that, I’m sure, they only use with us, ageing teachers.

La Vanguardia sums up the main findings of Fundació Bofill’s 125-page report Via Universitària: Ser estudiant universitari avui, by Antonio Ariño Villarroya and Elena Sintes Pascual. This report is based on a survey run among 20,512 students in the 19 universities of the Catalan-speaking regions of Spain within the Vives network. I confess I have not read the report and refer only to the summary.

No surprises here: families are the main contributors to the cost of educating their children, which, logically, puts children from impoverished social backgrounds at a serious disadvantage relative to their better-off peers. Nothing new, then, except that a matter such as taking a year abroad within the Erasmus programme is now practically compulsory, disregarding how this widens the gap between middle-class students and their poorer peers (the grants are a joke…). The report claims that 30% of students finance their studies by working, part or full time; only 0.7% of the students surveyed have fallen into the trap which student loans are turning out to be. 13% enjoy some kind of grant; they are included within the 41% of students who study full time (um, the figures do not add up, do they?). More interesting findings: mothers are crucial–it seems that the more educated a mother is, the more she invests in the education of her children (most of these mothers were themselves newcomers to the Spanish university in relation to their family background). The report is clear: most students (above 40%) have an upper-class or upper-middle-class background and college-educated parents, yet many outside this group are upwardly mobile, coming from families with no college-educated members. I have never heard, however, of middle- and upper-class children taking up professional training in a blue-collar trade–though there must be some measure of downward social mobility even when both parents are college-educated and/or wealthy.

The Bofill report claims that combining work and study need not affect the student’s marks, though it does affect class attendance. No student, they claim, uses more than 20 hours a week to study anyway… though I don’t know whether they mean apart from attending classes. This is, excuse me, total bullshit. Throughout my own university years I went from being a full-time student (with my fees funded by the Government on the basis of my marks) to being a full-time worker, as my life became complicated by my father’s total lack of interest in my university education and his constant pressure for me to work full time. I left home too early, married unwisely and found myself obliged to do whatever it took to study–which, of course, meant working full time, as my father wanted. Not common, perhaps, but replace ‘married unwisely’ with ‘started sharing a flat’ and then the whole situation is not that odd. This means that in my last year I did what I could to attend classes and I suffered very much for missing them. It’s true that in my first two years, when I just worked some hours a week as a private tutor to earn myself some very necessary pocket money, I had plenty of time to spare. Yet, I put it to good use, reading, visiting exhibitions, learning all I could beyond my courses. In my last year, I simply hated my life, as I didn’t know whether I was a worker or a student. Would I have got better grades? Not necessarily. I recall, however, that time as a horrid, stressful period of my life. A student should be a student, period, and that means full time. A paid job is fine as a complement but when it starts draining away the energy needed for study it becomes a serious obstacle, not an aid.

The Comisiones Obreras report shows what families and students in Spain face regarding the cost of study. This is a study of the evolution of university fees between 2011 and 2016. No surprises here, either, though it’s frightening to see the actual figures. The report shows, to begin with, that Spain is among the very few countries in Europe to have responded to the 2008 crisis (which coincided with the implementation of the new BA and MA system in 2009) by steeply raising university fees. It’s funny to see that the United Kingdom is neatly split between England/Wales/Northern Ireland, which decided to go as far as possible down this road with fees of up to 9,000 pounds, and Scotland, where a university education costs the student very little. The report offers the figure of 6,460 euros as the average cost of the current 4+1 university education system in Spain, which is certainly nothing in comparison to the 54,728 euros the same education costs in England; still, Spain has 4,000,000 unemployed people and one would think that state-funded free education should be the way out of that situation. The report reaches exactly that conclusion.

It is funny to see how different the tone is in the Bofill and the CCOO reports: the former is descriptive of a situation contemplated with a certain scientific distance (the comment on upward social mobility even discloses a certain optimism), whereas the latter is clearly biased towards implementing better social policies regarding access to education. As usual, the most advanced European countries in this sense are the four Nordic ones: Sweden, Norway, Finland, Denmark –precisely the ones distinguished by a very different approach to social equality. Scotland is another interesting case, particularly for Catalonia, for its independentist aspirations have led to the realization that it must invest in the development of its human capital (though Scots have a serious problem in that their best educated citizens tend to migrate elsewhere).

Spain, in short, is just a disaster: we are keeping talented people away from the university by not giving enough grants, and forcing the few who manage nonetheless to prove their brilliance to migrate, thus doing rich nations like Germany and the United States the favour of benefitting from our scant public money. And what can I say about Catalonia? The price per credit in 2011 was already the highest in Spain (at 20.11 euros) and it’s now 33.52; the average 60-credit fee used to be 1,206 euros in 2011-12 but it’s now 2,011. The second most expensive average yearly fee, that of the community of Madrid, is just 1,638. The lowest is 713 (in Galicia). I won’t even mention the fees for MAs, which have no explanation at all, as the same staff teach them with no extra cost added to our salaries.
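For the time capsule, the Catalan figures quoted are at least internally consistent: multiplying each per-credit price by a standard 60-credit year yields the quoted yearly fees, a rise of roughly 67% in five years. A minimal arithmetic sketch (the variable names are mine; the figures are those cited above from the CCOO report):

```python
# Sanity check of the quoted Catalan fee figures: per-credit prices of
# 20.11 euros (2011-12) and 33.52 euros (2016), with a standard
# full-time enrolment of 60 credits per year.

PRICE_2011 = 20.11  # euros per credit, 2011-12
PRICE_2016 = 33.52  # euros per credit, 2016
CREDITS_PER_YEAR = 60

fee_2011 = int(CREDITS_PER_YEAR * PRICE_2011)  # 1206 euros, as quoted
fee_2016 = int(CREDITS_PER_YEAR * PRICE_2016)  # 2011 euros, as quoted

rise = (PRICE_2016 - PRICE_2011) / PRICE_2011 * 100
print(fee_2011, fee_2016, f"{rise:.0f}% rise")  # 1206 2011 67% rise
```

In other words, the near-doubling of the yearly fee follows directly from the per-credit price hike, not from any change in the number of credits enrolled.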

It seems clear, then, that the 2008 crisis (still ongoing in Spain, as Brussels knows and the Government wilfully denies) must have expelled many thousands from the Spanish university: those who suffered some personal calamity, like their parents or themselves losing their jobs, and those who could never afford the ever-increasing university fees. The crisis, in case I have not insisted sufficiently on this here, has also done away with the full-time teaching jobs that allowed PhD candidates to complete their dissertations. And, yes, we all know that things are worse in Catalonia, for obscure political reasons, whether national Spanish or national Catalan, that are never made evident.

There are days when nothing makes sense. If the idea is going back to the smaller middle-class Spanish university of the 1970s, before Felipe González’s Government opened up the classroom to us, working-class children, I wish they would tell us. The same applies to the even scarier impression that perhaps the plan is shutting down the public university for good. What cannot be sustained is this constant anxiety that we’re not wanted: the students, the teachers, the research, the whole university. Why all this ill-treatment? How are we offending society?

Perhaps, just perhaps, what is feared, after all, is the downward mobility which I mentioned, for if the university is made accessible to the best students, no matter what class they come from, this necessarily means that the room at the top for the upper classes will shrink. After all, there are no good jobs for everyone with a university education, as we know, so why not make sure these are not available to working-class persons, beginning by ensuring they never get the required university education?

Just an ugly thought, as who would jeopardise the future of a whole nation in this way, right?



May 24th, 2016

Patologías de la realidad virtual: Cibercultura y ciencia ficción (2015, Fondo de Cultura Económica) by Teresa López-Pellisa is a necessary book. As Naief Yehya writes in the Prologue, “Cada vez es más claro que en nuestro tiempo las relaciones sentimentales con los dispositivos tecnológicos materiales o inmateriales han dejado de ser una extraña perversión para volverse la nueva normalidad” (12). I’m reproducing these words here on the day when I’m meeting novelist and robotics engineer Carme Torras to start work on the English translation of her novel La mutació sentimental, an excellent SF novel which I have often mentioned here. La mutació deals, precisely, with this ‘new normality’ and warns us against the absurd sentimental attachment that we’re developing for, in this case, robots. Carme Torras’s novel is set in a near future when robots will be everybody’s domestic companions, although the malaise diagnosed in it is neither fantastic nor futuristic. Sherry Turkle, as I have also commented here, has brilliantly analyzed the strange bonds growing between children and elderly people and their robotic pets, and how impossible it is to turn these bonds into something less irrational.

Teresa López-Pellisa diagnoses in her book five disorders concerning our relationship with cyberculture: “esquizofrenia nominal”, “metástasis de los simulacros”, “el síndrome del cuerpo fantasma”, “misticismo agudo” and “el síndrome de Pandora”. Before these ailments are described in detail she launches into quite a long digression about the confusing way in which we use the terminology associated with the digital domain. Following the nomenclature developed by Antonio Rodríguez de las Heras, she proposes that we correct the misuse of ‘virtual reality’. She asks us to distinguish between “espacio virtual”, “espacio digital” and “espacio real”. ‘Real space’ is more or less self-explanatory –‘more or less’ as the author herself realizes that all kinds of philosophical questions (and the Matrix trilogy…) must be left aside to accept that there is indeed a ‘natural’ space which we tread daily. In contrast, the concepts of “virtual space” and “digital space” require some radical reconfiguration of our vocabulary, for de las Heras and López-Pellisa claim that virtual space is, basically, the product of our imaginative capacities and cognitive system lodged in our brain, whereas digital space is a specific kind of virtual space generated by computers. She also asks us to refine the way we use the very concept of digital space, distinguishing between cyberspace (i.e. digital space maintained online) and other types of digital space, not necessarily online. This reconceptualization is certainly appealing as it reminds us that our brain is a potent generator of virtual domains, both when we’re awake and, most particularly I would add, when we sleep. Yet, after three decades of using ‘virtual reality’ to actually mean ‘digital space’ it is unlikely that the vocabulary can be corrected in the short or the long term. Likewise, unless I am wrong, few digital spaces are offline in this voraciously interconnected online world for which no digital device is off-limits.

The first section of the volume offers not only a (re)definition of virtual reality along the lines I have mentioned but also an extensive genealogy, which invites us to consider the predecessors of the 20th century technologies leading to the computer and the digital space. Beginning with Plato’s cave, López-Pellisa includes in her historical overview the invention of pictorial perspective, the diverse automata, and the many visual spectacles developed in the 19th century, including cinema. Her survey of the 20th century runs from Vannevar Bush’s Memex machine (1945) –the PC’s greatest ancestor– to augmented reality, passing through William Gibson’s Neuromancer, the SF classic that made the words ‘cyberpunk’ and ‘cyberspace’ popular all over the world in the 1980s. The impression the reader gets from this well-informed segment is that all the names, dates and data that López-Pellisa contributes should be part of our general culture. They’re not. Alexander Graham Bell or Guglielmo Marconi are household names but Vannevar Bush is not –much less Jaron Lanier, to whom we owe the very concept of ‘virtual reality’.

At the beginning of the second part of the volume, which describes the five pathologies previously named, López-Pellisa declares unambiguously that she considers virtual reality a sick patient, though by no means a terminal one. It is her purpose, she states, to classify the diverse ailments and to make the reader aware of their existence rather than offer or demand a ‘cure’.

‘Semantic schizophrenia’, the first syndrome analyzed, refers to the imprecise, ambiguous way in which we use the vocabulary connected with computers. In this segment López-Pellisa expands on the first part’s basic warning against the misuse of the computer-related semantic field, though she also moves in other directions. Thus, she refers to ‘Don Quijote’s syndrome’ (her own label) as the condition preventing the compulsive visitor to the diverse digital spaces from disconnecting. She does not mean that individuals no longer recognize the difference between reality and fantasy but that they choose digital virtuality as a refuge from reality –which incidentally offers an interesting re-reading of Alonso Quijano’s madness. The author also gently reminds us that ‘virtual reality’ does exist, if only as software in very real computers without which it would not survive.

The second syndrome, or ailment, diagnosed is the ‘metastasis of the simulacra’, certainly unnerving terminology used to name the condition of those fictional texts which not only offer “distintos niveles de virtualización al generar diversos entornos virtuales en el texto, sino que además nos proponen mundos artificiales digitales en el marco del espacio virtual del texto literario, con realidades virtuales que configuran el discurso metadiegético en el texto” (105). The main characters, whether they are the protagonists of a story by Bioy Casares or Neo in The Matrix, are disconcerted by the discovery that reality is unstable and entering metastasis with a cannibalistic alternative virtual domain. The list of examples that López-Pellisa explores is quite impressive and has the great virtue of mixing Spanish-language and anglophone texts with examples from other languages, which is not that usual. In the case of this syndrome the author warns that although we are very far from being console cowboys needing a daily fix of cyberspace surfing, like Case in Neuromancer, there’s no need to fetishize Reality, with a capital R.

The ‘phantom body syndrome’ criticizes the radical transhuman aspiration to disconnect body and mind, supported by the claim that the organic human body can be replaced by computer hardware and that the mind is akin to software. Following lines of thought that transhumanists call ‘bioconservative’ but that those concerned prefer to call ‘moderate posthumanism’, López-Pellisa accepts our cyborg nature –already proclaimed by Donna Haraway in 1985: “Somos transhumanos ciborgianos y ciudadanos de un futuro en el que la convivencia entre lo natural y lo artificial estará tan normalizada que dejaremos de emplear estos términos como algo dicotómico” (137). She is, however, extremely critical of the radical transhumanist (or extropian) assault on the body: “Me resisto ante la afirmación de que el cuerpo está obsoleto, ya que supondría asumir la propia obsolescencia del cuerpo humano y aceptar que si el cuerpo desaparece, nos extinguiremos” (165). The fourth syndrome, ‘acute mysticism’, connects with the third one, as it merges the disembodied ideal of radical transhumanism with nebulous notions of what constitutes the soul and with a selfish longing for immortality. López-Pellisa does not hesitate to call this cultural disorder dangerously irrational and, hence, as damaging as a virus.

Finally, the section devoted to the ‘Pandora syndrome’ is, no doubt, the best one in the volume. Here the author’s own voice is most clearly heard for –and this is really the only major objection to be made– in the rest of the book her argumentation is overwhelmed by a constant barrage of citations. This is habitual in PhD dissertations and it is indeed the case that Patologías de la realidad virtual is derived from López-Pellisa’s own thesis. Yet, the heavy weight of the quotations is also to be blamed on the Spanish academic tradition, which still mistrusts the argumentative essay and in which authority is built on the basis of humbly accepting one’s low position in the hierarchy of the many predecessors.

In this segment, in contrast, the author uses her predecessors in the field to reinforce a strong feminist voice, which is very critical of men’s fantasies of female exploitation, centred on the figure of the artificial woman. The originality of her approach is that she rejects Galatea to focus on Pandora, for whereas Pygmalion lives happily with his statue turned into a compliant flesh-and-blood wife by no other than Venus, the male protagonists of the stories analyzed in this segment come to a bitter end when they try to control their rebellious Pandoras. The gamut runs from the classic tale by E.T.A. Hoffmann, “The Sandman” (1817), to Craig Gillespie’s film Lars and the Real Girl (2007), among many other examples focusing on gynoids, “maquiniféminas” and virtual women. A controversial point which López-Pellisa raises is that even though all these stories present dehumanized women, they actually reflect men’s dehumanization and inability to deal with actual human peers. Misogyny, in short, backfires, destroying its own defenders.

To sum up, then, this is an absolutely recommended volume which contains in just 280 pages plenty of food for thought. Of a very necessary kind.



May 17th, 2016

A couple of days after publishing my previous post, I continued the conversation about the low level of students’ participation in class with the colleagues who started it. This was, as usual, in the middle of the corridor and, taking advantage of the sudden emergence of our emeritus professor from her office, I asked her what the situation was like in the 70s, when she started teaching.

This is the same professor who established the teaching methodology we use in our Literature classes, based on close reading and a (supposedly) lively interaction between teacher and students. ‘Did students participate actively in class when you were a junior teacher?’, I asked her. By no means, she answered vehemently: only when she prompted them and because groups were very small, under 10 students, and no one could escape her attention. She recalled fondly a class of mature students at the Universitat de Barcelona, composed mainly of women who, it seems, read avidly and were very keen on class participation. From what I gathered this was the only time throughout her long career in which the ideal matched the actual performance of students (my Harry Potter course…). ‘To what, then, do you attribute current falling standards?’, I asked. Her answer was ‘class’.

She elaborated: our students at UAB come mostly from a working-class background and, besides, from the geographical area surrounding Barcelona, which is by no means as cosmopolitan (I add) as the city itself. The emeritus professor explained that English Language and Literature (or our former ‘Filología Inglesa’) used to be a middle-class degree, which totally coincides with my first impressions as an aspiring university student back in the early 1980s. The first students of this ‘Licenciatura’ I had ever seen were, believe it or not, participants in Chicho Ibáñez Serrador’s extremely popular TV contest Un, dos, tres… (season 2, 1976-78). They were, definitely, middle-class and very exotic birds to boot: individuals who could speak English in a backward Spain where the illiteracy rate was still too high. I recall from my first visit to UAB, in 1983, the many well-dressed students who got off at Sarrià from a train still divided into second- and third-class carriages, a distinction kept until 1991. As a working-class child attending a public secondary school located in the middle-class neighbourhood of Sant Gervasi, with students from all ranks and areas, from blue-collar El Carmel to posh Sarrià, I was quite confused about class. I naively believed that education was the road to a middle-class life and that just by taking that train to UAB I would become one of the same kind as the students I had seen.

When my colleague and I reminded this professor that we’re both originally working-class, she insisted that things are nonetheless different in working-class families, with less access to books and more limited conversation. Of course, she forgot about public libraries. I can’t remember when I got my first library card; it must have been in 1976, aged 13, a time when in Barcelona a foundation run by a bank, La Caixa, maintained the local library service (my public primary school did have a library… off limits to us, children). The Barcelona libraries are now run by a public institution, la Diputació, and children get library cards much earlier; the beautiful public library in my neighbourhood indeed boasts an excellent children’s section.

I do remember, however, feeling deep chagrin when my favourite teacher, Sara Freijido, described in class with a condescending smile (sneer?) the kind of books that could be found in a working-class home: a few illustrated volumes about the wonders of the world, volumes of abridged biographies published by Reader’s Digest, a handful of best-selling novels purchased most likely from Círculo de Lectores, an encyclopaedia paid for in monthly instalments. Exactly that. She neglected to mention the bolsilibros or novelas de kiosco, those cheap novelettes written by Spanish authors under anglophone pen names which started my education in genre fiction. I blushed, mightily mortified, hearing my teacher expose my family to public opprobrium, or so it felt, though she clearly confused possessing books with reading books. After all, my middle-class peers in secondary school, who had access to richer home libraries, were not more active readers than I; those who read (and who kindly passed me their books) belonged to the more bohemian segment. And by this I mean one girl.

Many of my class background and generation were the first in our families to attend university; I would say even to dream of attending university. Our teachers played a major role in this by steering surprised, indifferent or reluctant working-class families into making the effort of educating the strange children in their midst, children who took it for granted that if you had good grades, the university was where you should be. I don’t know what percentage we amounted to, nor do we have reliable information about the social background of our current students (do all middle-class children attend university?). My impression is that the upper and upper-middle classes attend private universities either in Spain or abroad, with the Spanish public universities attracting mostly lower-middle and working-class students. My own university, I grant, might have a much higher percentage of working-class students than the Universitat de Barcelona given, precisely, their geographical provenance, as the emeritus professor highlighted. Still, we have no hard data and are quite in the dark about all this.

When I discussed this matter of social background with other colleagues quite like me, they were quite offended, seeing themselves as proof that the working classes include many individuals of high academic ambition. They also made a point of noting that middle-class children, both in our upwardly mobile families and in more traditional ones, are not distinguishing themselves academically, and that the number of readers is fast declining across all classes. I often remind my classes that whereas many aristocrats were key participants in the culture of past centuries (think Sir Philip Sidney or Lord Byron), it is now hard to see any very rich person producing culture; they just seem interested in purchasing it (or, in the best-case scenario, in sponsoring it). But just bear with me and let me propose, for the sake of argument, that our emeritus professor is right and that the falling standards are the result of opening up university education to the working classes.

I’m mystified by her impression that conversation is more limited in working-class families. I confess that one of the main enticements a university education offered me as an 18-year-old was the chance to hold ‘better’ conversations, meaning conversations more fulfilling intellectually. This fantasy was fuelled by countless pre-1980s novels and films which seemed to promise that the grass was greener on the other social side; yet conversation, as we know, is fast disappearing from the novel and almost gone from films (and TV) and, as Sherry Turkle argues, it is also vanishing from our daily lives under the impact of social networks. As Dani Mateo joked yesterday on El Informal, the Twitter generation cannot speak beyond 140 characters, which quite limits dialogue.

Do middle- and upper-class families have ‘better’ conversations? Are, in short, intellectual exchange and intellectual curiosity stronger in more affluent families? I should say this is not the case at all. Furthermore, I actually hold the upper and middle classes responsible for the falling standards in our universities, on the grounds that if they had kept the conversation going at the same pace as when they were alone in the Spanish university classrooms, the rest would have joined in. One can only feel spurred on to prove oneself when one’s social betters (excuse me!) pose a challenge. In a society in which the upper and middle classes have abjured the task of being active cultural leaders, conversation stagnates. Even worse, it starts dealing with the Kardashians (and I don’t mean from a Cultural Studies point of view). This could also be a case of the conversation stopping in mid-sentence when we, the working-class interlopers, tried to join in back in the 1980s, and moving elsewhere. Or perhaps it just stopped for good when being a person of culture became synonymous with being boring and, excuse the Americanism, unpopular.

One of my (middle-class) classmates in the first year used to carry a copy of Ulysses under her arm at all times, which certainly sounds extreme as a show of academic commitment. Funny to think that I didn’t find her ridiculous. I felt, rather, awed that she had the spunk to advertise herself in this way, and sheepish that I had not read the book. Perhaps, poor thing, she was just looking for deep, intellectual conversation… without realizing she was scaring people away. Or perhaps her Ulysses was intended as a gauntlet to slap her classmates into a literary duel that would put them in their proper place. What I wonder is: where has her type gone? Who would come to class today ready to challenge their peers in this in-your-face way?

Who could re-start the conversation?…



May 10th, 2016

Once, while still a second-year undergrad, I took a year-long course on 18th- and 19th-century Spanish fiction during which I never met the teacher face to face; no wonder I have forgotten her name. She was a brilliant lecturer and I recall fondly many of the books she lectured on, a selection which included some hard reading, such as Friar Benito Feijóo’s Cartas eruditas. I passed the corresponding final exam but, as I say, I never interacted with this teacher or with any of my peers in class, as she never addressed us directly nor asked for our thoughts and opinions. I did go through her extensive reading list, because I’m the kind of reader who reads even the information on cereal boxes. I can’t say, however, whether my classmates read any of the texts or simply swallowed our abundant class notes to regurgitate them back to our teacher on exam day. Yes, she was brilliant, but was she a teacher? Not in my view…

There was another teacher whose lectures, rumour had it, hadn’t changed in years. A kind, anonymous student had photocopied his or her class notes and these circulated freely among us, the new students. We simply took said photocopies to class to underline the main points as the teacher lectured on; the notes were practically verbatim and we were amazed to see that she hadn’t altered a single word in years, jokes included. This teacher eventually discovered the famous photocopies and, I’m told, published her own lecture notes as a book. If there was little point in attending her classes knowing how reliable the photocopied notes were, just imagine what the handbook must have done to students’ interest in spending time listening to this teacher. My point being that classroom time must be used for interaction between teacher and students, for students can always read the corresponding handbook at home.

The Department of English at the Universitat Autònoma de Barcelona, where I have spent my academic life since 1986, first as a student then as a teacher, simply does not believe in lecturing and it never has. My class notes as a student did not reflect what my teachers lectured on but what I found interesting as they read and commented on the texts with us (partly their ideas, partly my own); I did have pages and pages of notes but these came from my autonomous, independent reading of the set texts and of the background texts (handbooks or other secondary sources). And I was satisfied with that. After going through the courses offered by the two teachers I have already mentioned, I found the interactive approach frankly refreshing; I spent the first semester at UAB marvelling that teachers actually admitted questions in class and welcomed students into their offices for even more questions.

Of course there were and there are lectures, but they constitute just a small part of our teaching practice, perhaps around 20% or 25% at the most. I myself don’t keep a formal set of notes for each course but, rather, a class diary where I jot down the basic arguments for each single session. And if there is something I love about teaching Literature and Culture, this is how open and flexible it can be. For instance: I started my class yesterday by teaching my students the word ‘proprioception’ (an 1890s word naming the individual’s ability to sense his or her own body, a sense which can be impaired by neurological disease). I had learned this word literally on my way to class, as I read on the train Oliver Sacks’ best-selling The Man Who Mistook His Wife for a Hat. It turned out that ‘proprioception’ explains wonderfully Richard Morgan’s SF novel Altered Carbon, which I started teaching yesterday. The protagonist, Takeshi Kovacs, is used to switching from body to body, as in his world individual identity resides in a tiny device, the cortical stack, which records personality and can be easily transferred to a new ‘sleeve’. Kovacs has, in short, a very high proprioceptive ability to connect with his new sleeves. There you are: I love the improvisation that comes into teaching and could never limit myself to a lecture prepared in advance and re-used year in and year out.

This must certainly sound strange to teachers working in the British system (or similar) which distinguishes between carefully planned lectures delivered before a crowded classroom and more open seminars shared with a small number of students. In my Department we simply prefer to turn ALL our classroom time into seminars, even when our classes are as big as 80 students. An important justification for this, of course, is that our second-language students need to practice English and, so, class participation is basic in our methodology. Students read the texts at home, prepare their notes, exercises, and remarks in advance. Classroom time consists of a lively exchange that makes the time fly by, for students are extremely interested in learning and love to engage in debate with us and their peers. We, teachers, feel fulfilled and offer our best, raising standards as our students demand, always happy to get such positive response to the many hours and hard work we put into our teaching.

This, of course, has never really happened and is not happening at all currently. Now, after 25 years of struggling to implement this healthy academic ideal, I am about to give up and start lecturing. All the documentation about the new degrees established in 2009 and all the college-level pedagogues agree with our methodology: lecturing, the famous ‘clases magistrales’, should not have a primary place in the university. We are expected to be, and we do want to be, Platonic teachers in constant academic dialogue with students keen on learning (remember? Plato’s school in Athens was called ‘The Academy’), but it is simply NOT happening. Our students’ passive resistance is simply colossal. And they are getting the upper hand.

I was teaching my session on Morgan’s novel yesterday when I started hearing myself speak, a very uncomfortable feeling. This happens when, even though you don’t want to lecture, you find yourself lecturing because the students have not read the book (yet?) and so you need to cover much more basic ground than you expected. Then you start feeling disengaged. I saw my students taking notes and felt uncomfortable because I was not delivering a formal lecture and had no idea which points they were noting down. Dialogue on a novel which has not been read soon grinds to a halt, and so I kept bringing outside elements into my ‘stream of pseudo-lecturing’. This doesn’t always help, quite the opposite: I was trying to explain that Morgan’s protagonist is the high-tech, futuristic equivalent of the Navy SEALs who killed Osama Bin Laden five years ago, but neither of these two references rang a bell with my students. Of course I reacted in dismay, and of course they reacted to my reaction also in dismay… are we ever going to be on common ground? I get politely interested faces mostly, but also the teacher’s worst kind of kryptonite: the glassy stare. This makes me lose my thread, start rambling and even mumbling… There are many moments when I feel like stopping to ask: if you tell me what interests you, perhaps I could lecture on that and we would all be so much happier. Perhaps.

I was going back to my office in quite low spirits when I came across a Language colleague who also looked dispirited. Some students in her class, she explained, have objected to some of her teaching methods, finding them, basically, excessively interactive (meaning too demanding of students’ attention in the classroom). She was anxious and concerned: students simply want us to lecture, providing them with the kind of neat classroom notes that, well, can be photocopied from year to year. She vehemently declared she would not offer that kind of teaching and I wholeheartedly agreed with her. No, I will never ever turn lecturing into the foundation of my teaching! I can only call myself a teacher if I keep up a dialogue with my students, and lecturing is a monologue! Away with it!

When I finally reached my office I started considering how much easier my life would be if I taught the same course every year, using formal, written-down lectures that I could upload at the end of each session, without altering a single comma from year to year. And how thankful students would be for that: notes to circulate, underline, regurgitate in exams and then forget. Final exams instead of continuous assessment, no papers in which you need to develop your own thesis, no contact whatsoever with the teachers, not even to greet them in the corridors. And so would end the continuous pretence that students read, when they don’t; and so would end the gruelling task of engaging them in reluctant dialogue, which only serves to stress the state of our miscommunication…

Someone once said that the tragedy of teaching is that it can never work, for we teach in the way we wish we had been taught and not in the way the younger generation in our classrooms prefers. I’m thinking that after almost 25 years as a teacher I should be wiser, but I find that time has the opposite effect: I simply don’t know the young persons in class, nor what kind of teaching they prefer. We, teachers, commiserate with each other in the Department corridors and I’m sure the students commiserate with each other at the bar. The result of all this, as I wrote in my previous post, is that even vocational teachers reach a point in their careers at which they stop caring, and I am worried sick that this is happening to me, for I still have at least 15 more years to teach. Teach, not lecture.



May 5th, 2016

(No, I’m not suffering from writer’s block, which would be ironic given my last post. The problem is that every subject I’ve come up with in the last ten days for raving and ranting about here is so problematic that I have given up on all of them. The one I am dealing with here seems to be the safest… Yes, there is a measure of self-censorship at work.)

I’ll be 50 in about one month, a figure I like. For women, 50 tends to be associated with the biological changes caused by the onset of menopause and, although it would be tempting to write a post about the cultural readings of this natural transition, this is not what I am up to today. Some other time.

In this strange time in which we seem to be stretching every decade into the next one, I am constantly being told by kind friends and relatives not to worry for, after all, 50 is the new 40. This confuses me very much because a) 50 is 50, as 40 is 40, and b) since this chronological stretching applies to all decades, and everyone seems younger than people of the same age did thirty years ago, 50-year-old women still look distinctly like 50-year-old women.

Famously, Oscar Wilde declared that “the tragedy of old age is not that one is old, but that one is young”, which, of course, means that one is not aware of one’s own ageing in the way others are. I am not kidding myself that I am still 20 inside, however, for I am surrounded by 20-year-old female students and it would be foolish of me to pretend that I’m younger than I am. The young have an instinct for detecting that kind of phoniness… Also, generally speaking, I find myself enjoying my actual age and gleefully celebrating each new birthday. The only thing I certainly don’t care for is being addressed as ‘señora’ by strangers, a term I certainly prefer to the appalling ‘señorita’ used for young women, but one often uttered with a sneer or, at least, a clear wish to indicate ‘you’re old and I’m not’. Twice already, courteous young people have offered me their seat on the train, which I’ll attribute to my always carrying too many bags rather than to my ageing looks. Hopefully…

A few weeks ago a dear male friend whom I have known since we were both 14 hit 50. He is also an academic (mixing Sociology and Media Studies), though he has been a full professor for a few years already and is, hence, as you will see, in a slightly different frame of mind. It’s always funny to discover that the processes one goes through regarding private matters, like how to face the next period of one’s life, turn out to be shared by many other people. And this is what happened with my friend, which, surely, you can also attribute to our having parallel academic lives. We both agreed that when you turn 50 and you are a ‘privileged’ academic, secure in your job, the new buzzword looming on the horizon is ‘retirement’. This may sound callous and insensitive to the scholars still struggling for tenure (and at the rate we’re going now, this includes colleagues not much younger than myself) but it’s the truth.

I was hired by my Department aged 25, which means that next 15 September I’ll be celebrating another significant date: my 25th anniversary as a university teacher. Even if I retire at the ripe age of 70, as Spanish legislation allows, my career can stretch for just 20 more years at most. Naturally, it could stretch longer if I go on publishing academic work past retirement, for retirement essentially means, for us, that we stop teaching. If we can afford it. Precisely for this reason, we have started asking our Department colleagues about their plans for retirement, for it turns out that 6 of them are aged between 59 and 63. This is a bit awkward, but we just need to know what we’re going to do with our fast-ageing tenured staff in the next ten years. Their reactions were diverse but, from what I see, money is the main concern.

Before the crisis, civil servants (and tenured university teachers belong in that category) could draw a pension after only 30 years of service, which meant that, if you were willing to accept the reduced pay, you could retire before 60. The PI I have been working with in the past few years retired at 57, though she is still very much active in research. Under this rule, which no longer applies, I could have retired at 55, which sounds totally crazy to me. Provided I can afford it, then, I am planning for 65, or 67 at the most, because a) 40 years as a teacher will do, b) I don’t see myself connecting with students almost 40 years younger than me, and c) I see too many people dying around 60 to believe I’ll reach 93, the age my grandmother was when she died last summer.

Sorry to sound so grim, but I’m an extremely pragmatic person and, in view of what I see happen every day, I need to take death into account. Yes, it’s the fear of mortality that so much Literature talks about, and it is certainly the hardest part of ageing. Funnily enough, I went through a very profound hypochondriac bout at 30, when I was writing my PhD dissertation, mortally afraid (ha, ha…) that I would die before finishing it. Realizing, once the thing was submitted, the silliness of it all, I decided to face life as it came in a kind of perpetual ‘carpe diem’ (highly recommended against hypochondria).

I am certainly digressing today… must be my ageing brain…

The conversation with my friend revealed that 50 is when you count the academic eggs in your basket and ponder what they are worth and whether you want to go on producing them at the same crazy rhythm. The answer is no. A relative no. In the Humanities, 50 is still a rather young age, the time when you may turn out to be ‘wise’, if that word still makes any sense, after decades of reading. It is also the age at which you tell yourself: ‘since what I love doing is reading, why don’t I simply use all my time for it?’. It’s very tempting. This is why the ages between 50 and 55 are, I’m sure, the time when many researchers start to slow down, not because they lose interest in their subjects (quite the opposite) but because they want to be left alone by a system that demands an absurd, stressful productivity while offering very little reward.

At this point, and after twenty-odd years of teaching, my friend has decided to teach exclusively online, a possibility his university offers; another dear male friend chose to transfer to UNED at a similar age. I have tried online teaching myself and I know that I need personal contact with my students, but I also know that this year, for the first time, I am teaching in a more detached, mechanical way, pretending I don’t notice the students’ disinterest (with few exceptions). My sociologist friend has run a diversity of research projects and is a well-known scholar, with an enviable h-index and all that. Possibly because he is already a full professor and hence lacks the enticement (carrot?) of becoming one, I can see he is fast losing interest in accumulating more achievements. He is clearly aiming at pleasing himself in his research, and this is what he advised me to do: a course I am certainly aiming for. As my friend told me, the way we’re valued should be a logical result of our academic career, meaning that if you go out of your way only to accrue merits you’re heading for deep disappointment.

I have in my own Department, among the six most senior colleagues past 59, good examples of academic hyperactivity, one in particular who positively bloomed when reaching 50 or thereabouts. This is always an enticement. What drains the energy of any ageing scholar are the achievements of the very young, for this is when you start thinking that you’ve already missed the chance to do this or that. Perhaps one of the most glaringly overlooked aspects of our academic monitoring system is that its obsession with productivity is ageist, in that it requires an amount of energy impossible to sustain in the long run. Not just impossible but also counterproductive, for past a certain age one starts losing concern about what others may think, and this is how academic careers dwindle and evaporate.

To sum up the argument here: while most people place the midlife crisis around 40 (at least the Spanish idiom is ‘la crisis de los 40’), I find that for a Humanities teacher/researcher it happens, rather, at 50. It is not, however, a sad time in which you bemoan what will not be, but a happy time when you start enjoying what I can only call, in the best sense, maturity.



April 26th, 2016

I keep telling my students that nobody is doing research on what I call fabulation, the writer’s ability to string together an imaginary story, but it turns out I am partly wrong. My mistake lies in having supposed that this research should be a branch of psychology when it is actually also a branch of biology and, to be more specific, of neuroscience. If this is the case, then I am not surprised that I have missed its existence, because I feel a certain mistrust of neuroscience, grounded in my totally bigoted belief that neuroscience tries a bit too hard to explain human emotion as a set of biochemical reactions. Call me a Romantic, but I do not look forward to the day when human nature (I was going to write ‘soul’ but then I recalled I am an atheist) is fully explained by rational science, a point I have already made here. But I digress, as usual.

I have been given a wonderful little book for Sant Jordi (the day of the book here in Catalonia), a classic of American journalism: Joseph Mitchell’s Joe Gould’s Secret (1964). The book actually contains two pieces by Mitchell on Gould, a.k.a. Professor Sea Gull, written at two different moments of the relationship between the two men. Gould, a bohemian gentleman very popular in New York’s Greenwich Village, managed to eke out a precarious living by convincing his sponsors that, as patrons, they were contributing to his writing of an American masterpiece: An Oral History of Our Time. Mitchell discovered the secret mentioned in the book’s title, which I am not going to reveal, and this rounds off a unique portrait of a unique personage. If you’re curious, read the book, or see the film adaptation with Stanley Tucci as Mitchell and Ian Holm as Gould. Furthermore, for an alternative version which questions Mitchell’s conclusions, see the article by Jill Lepore.

Sorry about my ignorance of topics which, I’m sure, must be very well known to my Americanist colleagues, but it turns out that the book on Gould was Mitchell’s final volume: he suffered from one of the worst cases of writer’s block ever and could not manage to write anything between 1964 and his death in 1996, even though he spent many hours every day in his New Yorker office. The words ‘writer’s block’ send chills down my spine because this is a mysterious condition which affects all types of authors for reasons ultimately unknown (and we, academics, are also authors). In some situations writer’s block is to be expected, such as when a novelist who has published an immensely successful first novel simply cannot produce a second one. In other cases, such as Mitchell’s, there is no clear reason why it happens. My personal belief, which I am sure many other people have theorized, is that his case may have had to do with the impact of Gould’s work on An Oral History of Our Time, which perhaps unleashed deep-seated fears in Mitchell that he could not write at all. I simply do not know whether Mitchell sought a cure, but the point is that his case is mentioned in The Midnight Disease: The Drive to Write, Writer’s Block, and the Creative Brain, a book by neurologist Alice W. Flaherty which, back in 2004, was a controversial pioneer in a new field. Funny how, despite the many volumes on Literary Theory I have read in the last 10 years, none mentioned Flaherty or any volume remotely similar to hers.

I have not read Flaherty yet, but I have learned from her a new word I had no idea existed until yesterday: hypergraphia, the opposite of writer’s block. In a promotional interview, Flaherty explains that hypergraphia is “driven, compulsive writing” triggered by “known brain conditions” involving “the temporal lobes”; also, and this is a puzzling sentence, “hypergraphia seems to reflect a component of literary creativity, namely creative drive”. Ironically, one of the most hypergraphic authors, Stephen King, also became a most famous sufferer of writer’s block after being hit by a van. You’ll see now why I distrust neuroscience: after diagnosing 70% of all poets as “manic depressive” individuals, Flaherty makes the classic claim that “in women, there’s evidence that creative ability varies with the menstrual cycle. Plath illustrates this very vividly”. This, as we know, cuts both ways: some feminists will see the ebbs and flows of women’s bodies as part or source of our creativity; others (like myself) will be horrified by yet another attempt at picturing us as poor things (animals?) tied to our menstruation. Really… Flaherty stresses that while the treatments for writer’s block seem to work well and are much in demand from those afflicted with it (unsurprisingly…), those affected with hypergraphia do not seek professional medical help. “What right”, Flaherty wonders, “do I have to give a medical name to a character trait that people value in themselves?” Indeed. By the way, Flaherty stresses that “talking about creative drive in neurological terms does not have to degrade the experience or value of creativity” and that “the medical terminology can coexist with the equally important, more subjective language that we are more comfortable with”. I’ll stick to the ‘subjective’ language for the time being, being a Humanist and not a scientist.

The field beyond Flaherty is so big I do not know how to start wandering into it, for there is, of course, a whole discipline called ‘Creativity Studies’. To begin with, you may check the Tennenbaum Centre for the Biology of Creativity at UCLA, founded by the kind of eccentric tycoon that I had thought extinct since Orson Welles’s Kane (Michael E. Tennenbaum even has a glass castle). As I should have expected, many psychologists devote their research and practice to creativity. Division 10 of the American Psychological Association, which gathers them together, deals with “interdisciplinary scholarship, both theoretical and empirical, encompassing the visual arts, poetry, literature, music and dance”. There is even a journal, Psychology of Aesthetics, Creativity and the Arts. Many articles seem informed by affect theory and deal with reception, but none, as far as I can see, with fabulation in the sense I am using it. Most likely, I need to search further.

I am particularly interested in the writer’s fabulation in the field of the fantastic, in particular science-fiction, although I would certainly agree that realistic fabulation is equally important. So far, however, we, literary scholars, have failed to explain where Madame Bovary comes from, in the same way we have failed to explain the origins of Dracula. We speculate endlessly on whether a certain biographical event or the impact of a text read by the author is connected with particular plot points or characters, but the method thus far followed is full of errors. Biographical research often degenerates into mere gossip, and intertextual connections are frequently vehemently denied by authors. The formalist rejection of the personal in order to focus on the textual seems in this context quite convenient but, of course, it is ultimately unsatisfactory, as texts happen to emerge from people’s brains.

There must be, however, a middle ground between the claim that Rose Maylie’s near death in Oliver Twist was inspired by the real death of Dickens’s young sister-in-law, and the claim that Rose Maylie emanates from a neurochemical reaction in Dickens’s frontal lobe triggered by God knows what… This is where I would like to go and explore… If I found a writer patient enough, I would beg him or her to examine at the end of each day the process of fabulation they have followed. Writers love to talk about their technique, even when they claim that it is all a bit nebulous and characters seem to follow their own paths (I’ve never read an article about this often-repeated claim). I would end up this way with something similar to the director’s running commentary on the Blu-ray or DVD edition of films. But, then, no writer, I’m sure, would want to have an academic looking over their shoulder as they write… Pity… If you know of any, let me know!

One day some scientist will discover that the predisposition to read and the predisposition to write and/or fabulate are genetic, perhaps a mutation, and we will finally understand why those of us who love Literature feel increasingly like freaks.

Comments are very welcome! (Thanks!) Just remember that I check them for spam; it might take a few days for yours to be available. Follow the blog updates on Twitter: @SaraMartinUAB. You may download the yearly volumes of the blog, and see my publications and activities, on my personal web.


April 21st, 2016

The post today refers to three situations connected with publishing books. The first one is the presentation that two independent publishers gave recently, to an audience mainly composed of my students, explaining how a small press works. The second is the publication of a collective book to which I have contributed an article. The third concerns my attempts to get an academic book in Spanish published. Actually, I’ll add a fourth point having to do with desktop publishing programmes. It all connects, you’ll see.

I have recently met Hugo Camacho, a young man with a degree in ‘Filologia Inglesa’, who single-handedly runs Orciny Press. A week ago he offered a presentation at the bookshop Gigamesh in Barcelona together with his colleague Ricard Millán, who runs another small press, Sven Jorgensen. I had agreed with Hugo that I could ask as many questions as I wanted on behalf of my students, and so I did. The result, available online, is an hour full of very interesting information about how the business of publishing books looks from the side of the (small) publisher.

I could comment on what Hugo and Ricard explained point by point and never be done, which is why I’ll highlight just one issue: the mathematics. An independent publisher, they explained, self-distributes by selecting sympathetic booksellers. Small presses like theirs tend to be one-man (or one-woman) shows in which most of the tasks that occupy ten-person teams in middle-sized presses are done by just one person. The maths: they publish, generally speaking, from 7 to 10 books a year, perhaps 12 at the most. A typical print run is 150–250 copies, though volumes are also offered through print-on-demand services and as e-books. The habitual distribution of revenue works like this: 10% for the author, 30% for the distributor, 30% for the publisher, 30% for the bookseller. You need to deduct from all this taxes and costs (in the case of the publisher these include items such as translation, book design, typesetting, copy-editing and proofreading). Small presses tend to cut out the middleman, that is to say, the distributor, but even so, if your 10 books sell 200 copies each at 20 euros–and that’s supposing a lot–we’re speaking about 40,000 euros, of which 24,000 would go to the small press. It’s not much… By the way, the author of each of the books would get 400 euros (before taxes).
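As a quick sanity check, the arithmetic above can be sketched in a few lines; the figures are the illustrative ones from Hugo and Ricard’s talk, not anyone’s real accounts:

```python
# Back-of-the-envelope economics of a small press, using the
# illustrative figures from the talk (not real accounts).

BOOKS_PER_YEAR = 10     # the optimistic end of their 7-12 range
COPIES_PER_TITLE = 200  # per title, "and that's supposing a lot"
COVER_PRICE = 20.0      # euros

# Habitual revenue split: 10% author, 30% distributor,
# 30% publisher, 30% bookseller.
AUTHOR_CUT = 0.10
DISTRIBUTOR_CUT = 0.30
PUBLISHER_CUT = 0.30

# Total yearly sales if every title sells out its optimistic run.
gross = BOOKS_PER_YEAR * COPIES_PER_TITLE * COVER_PRICE

# A self-distributing press keeps the distributor's cut on top of its own,
# still before deducting taxes, translation, design, typesetting, etc.
publisher_income = gross * (PUBLISHER_CUT + DISTRIBUTOR_CUT)

# Each author's royalties for one title, before taxes.
author_income = COPIES_PER_TITLE * COVER_PRICE * AUTHOR_CUT

print(f"Gross yearly sales: {gross:,.0f} euros")      # 40,000
print(f"Press income:       {publisher_income:,.0f}") # 24,000
print(f"Author, per title:  {author_income:,.0f}")    # 400
```

Even under these generous assumptions, the press ends up with 24,000 euros a year before costs, which is what makes the one-person operation a necessity rather than a choice.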

At least authors are paid by small presses but (and this covers points two and three), for reasons I have never quite understood, when you publish an article in a collective volume you never get paid; I’m now finding out, besides, that when you try to publish an academic book in Spain you’re expected, in more and more cases, to pay.

I have recently published, as I say, an article in a collective book. I’m very pleased with the volume and with the work of the editors, particularly because they commissioned me to write a piece which has pushed my research in interesting directions which I had only half-considered before. I don’t wish to name the title of the volume or the publishing house, because this is irrelevant to the point I’m trying to make, namely, that the volume (hardback, 250 pages) is on sale at the price of €99.00 ($128.00). This is not exceptional at all. Xavier Aldana’s excellent Body Gothic, which I have already mentioned here, is sold at £95.00 or $160.00 (both hardback and e-book!). I ended up asking for a copy to write a review and that’s how my institution’s library got it (I won’t say a word about the book being in the basement depot, out of sight).

If I, with my privileged Senior Lecturer salary, need to blink hard and think twice before spending €99.00 on a book, imagine what it’s like for undergrad students. Or is it the other way round? Are publishing houses demanding these fantastic prices because students (and teachers) have stopped buying academic paperbacks? More questions: how do young researchers writing their PhDs manage? And how many people will read our exciting collective volume? Can a €99.00 book make an impact? How many copies can be sold altogether? 400 world-wide at the most? Can we really ask our Departments and university libraries to spend so much public money on high-priced books? Is this all part of the general trend to redirect academic publishing in the Humanities towards journals? At least we’re not paying to be published in journals–or are we? A look at the Spanish market for university-produced books reveals that here the prices for volumes in the Humanities are not that high. Check the online bookshop of the Unión de Editoriales Universitarias Españolas and you will see that our national university presses are still selling affordable paperbacks (most for under €25), some of them truly cheap in their e-book version.

I must at this point declare my incompetence, for I see colleagues announce on our AEDEAN list volumes published with major Anglo-American academic presses, and I wonder why what was impossible fifteen years ago has now become, if not exactly common, at least feasible. I’m mystified. We, Spanish English Studies specialists, tend to publish less in Spanish, which is why I decided to try to publish in this language a selection of works I have already published elsewhere in English. The first lesson I have learned is that when you ask for permission to reproduce in collective books articles published abroad, the publishers drag their feet. I’ve been given permission to translate my articles myself and upload the resulting translations onto the digital repository of my university, but not to use my own translated work in a Spanish book. Odd. Journals seem more flexible. The second lesson I’m learning is that publishers expect to be paid, in principle with money from research projects. I don’t know how this works in other projects, but I’ve always been in large groups with limited funding, which has gone to a great extent into the collective books published by Spanish houses but not into books by individual researchers. When I asked my previous group whether they could help me to publish my projected volume, they said no, on the grounds that if everyone else made the same petition that would quickly exhaust our scant funding. I’m talking about a figure between 2,500 and 6,000 euros per book.

I have had the bad experience of not being paid royalties for two of my books by a commercial publisher, so it’s not the case that I expect to get money from any volume. I’m not, however, willing to pay for publication out of my own pocket if I can help it, not only because I already invest a good deal of my salary in my career but because if you pay, then this is a vanity publication, which should not count for our CV. Funnily enough, Hugo and Ricard, the small press owners, were very proud to stress that they do not charge authors. I think that the book I’m working on is attractive enough to justify publication by a university press, but I was told by the publishers I visited yesterday that my potential audience is actually limited to just a few hundred readers, with luck. Naturally, they are reluctant to invest money in my book and prefer that I finance it, or co-finance it. Now, my question is whether most of the many books I see in the UNE bookshop have been published in this way. I’m mystified, more and more so.

I told my potential publishers, in what was, believe me, a very friendly conversation, that if there is no market for my product then I would be very happy to self-publish my volume and upload it as a .pdf onto UAB’s repository. I already have more than 50 documents there, not including syllabi and the dissertations by my tutorees, with more than 15,000 downloads in total (talk about impact…). I was asked what my documents looked like and, when I showed one example (an article) edited using Word, it was hinted that I would need professional services to publish an e-book. I felt so mortified that about five minutes later I was asking Hugo Camacho what professional publishers use (Adobe InDesign) and my university whether we have a licence for it (no, too expensive; we don’t even have one for Acrobat beyond Reader). A colleague has suggested Scribus, a free desktop publishing programme; I’ll give it a try. So, there you are: now I intend to train myself in pseudo-professional desktop publishing to make my own e-books. The things we university teachers do…

So, here’s my conundrum–and, yes, I think that I’m asking my readers a direct question. What should I do?

A. Try to find a (hopefully prestigious) commercial publisher outside the academic circuit and aim at a general readership (target: 800 copies?)
B. Convince my potential academic publisher that my book is worth publishing, perhaps in co-edition with another university press (target: 400 copies?)
C. Pay to be published by said academic publisher or another (target: 400 copies?)
D. Produce my own professional-looking e-book and make it available for free on DDD (target: 1,500–2,000 copies, judging by previous volumes)
E. Produce my own professional-looking e-book and sell it on a general commercial platform (really?), or on a specialized one (is there one for academic work? Hugo and Ricard use Lektu and I could do so too, but it’s not academic)
F. Persuade AEDEAN that we fund our own e-book platform for English Studies and that we give the books away for free as we do with the journal Atlantis
G. Fund my own online academic small press and invite colleagues to publish with me for free, provided they produce their own e-books

You tell me… (and guess which options do not count as valid academic publications for the Ministry).



April 12th, 2016

My doctoral student Josie Swarbrick, who is working on the representation of monstrous masculinity in SF cinema, visited my SF class last week to offer a presentation based on one of her dissertation’s chapters, the one on District 9. In that film a massive alien starship reaches Johannesburg carrying thousands of refugees who have nowhere else to go. Their unenthusiastic South African hosts decide to lock them up in an insalubrious township placed in, precisely, District 9, while they decide how to cope with these unwelcome, unsightly visitors. If you have seen the film, you know that the central plot concerns the accidental transformation of a pathetic white man into one of the frankly disgusting ‘prawns’, a metamorphosis usually read in the context of post-Apartheid policies but that Josie reads in the light of this man’s strange fall out of the human species.

District 9 is exceptional, as any SF fan knows, because it changed the trope of the alien invasion in cinema by turning the extraterrestrial visitors into refugees and by setting the action outside the habitual US context. Its closest precedent is possibly Alien Nation (1988, TV series 1989-1990), in which the aliens are not invaders, either, but runaway slaves seeking refuge from their masters in the Los Angeles area. Men in Black (1997) included a scene showing the MIBs patrolling the Mexican border, trying to make sure that no illegal aliens would cross it. In the more recent Monsters (2010) the metaphorical link between the extraterrestrial alien and the illegal human migrant is emphasized: the monsters of the title have invaded most of Earth and only the USA remains a safe haven for humans–or so Americans think. Like real Americans today, the fictional Americans of Monsters seem to believe that migration can be stopped, which is never the case.

I started a conversation about the aliens in these films and I asked my students what kind of stories we could tell, taking into account the shameful humanitarian crisis affecting the poor refugees stranded in Turkey, Greece and Eastern Europe. Imagine, I said, that a spaceship similar to the one in District 9 lands on Nou Camp, here in the middle of Barcelona… How does the story continue? And the students laughed. As Laia explained, one can easily imagine President Obama addressing visitors from outer space, but the idea of President Rajoy doing the same is simply hilarious (President Puigdemont seems to be slightly less hilarious, but still…). Laia herself added that if a spaceship landed in Spain this would result in another episode of Aquí no hay quien viva, the popular TV sitcom about a group of raucous neighbours.

At the end of the 1960s, Carlo Fruttero, editor of the SF publication series Urania, the most important of its kind in Italy, was asked why he never published Italian SF. Famously, he replied that it “was impossible to imagine a flying saucer landing in Lucca”, a controversial statement that, of course, only spurred the imagination of Italian SF authors. I’m not familiar with Italian SF, nor even that much with Spanish SF, and I don’t know whether a starship has ever landed in Lucca. I do know that Spanish writer Tomás Salvador produced an absolute masterpiece, even translated into English, with his tale of a generation ship, La nave (1959). Of course, this ship never lands in Franco’s Spain: it has already been travelling in space for many years when the story begins.

The aliens, curiously, have trodden Catalan land in at least two very well-known novels. One is Manuel de Pedrolo’s Mecanoscrit del segon origen (at least one alien is stranded after her companions manage to massacre practically all humans before abandoning their intended colonization project). The other is, there we go again, the hilarious Sin noticias de Gurb by Eduardo Mendoza (serialized in 1990 in El País, published as a volume in 1991).

In this novel the eponymous Gurb, a metamorphic alien, takes the physical appearance of singer Marta Sánchez (!) and decides to explore Barcelona, going AWOL. Another alien, a shy fellow quite disturbed by his mate’s French leave, follows his tracks, also using a variety of human disguises, each more outrageous than the previous one. My fellow citizens respond with total deadpan indifference to the absurd situations in which the poor alien finds himself in the midst of the chaotic upheaval of the city that preceded the Olympic Games of 1992. This is the funniest book of any kind I have ever read, funnier than Terry Pratchett and Neil Gaiman’s Good Omens, funnier than Douglas Adams’s The Hitchhiker’s Guide to the Galaxy. I remember re-reading it once on the train and having to give up because I could not suppress my laughter. And, to be honest, I would have been very happy to have written Gurb.

Mendoza practices in Gurb the very Spanish literary genre of the ‘esperpento’; Aquí no hay quien viva is its television version. ‘Esperpento’, usually associated with the writer Ramón María del Valle-Inclán, is, supposedly, a deformed mirror of Spanish society which emphasizes with great irony its worst traits, among them its vulgarity, widespread ignorance, excessive pride, lazy habits and so on. It connects closely with the older genre of the picaresque novel but goes much further in highlighting the grotesque in local Spanish reality. Any Spanish literary critic will tell you that there is no consensus on whether ‘esperpento’ is a deformed or an exact mirror image of Spain (by the way, in Catalonia we also have ‘esperpento’, as seen in the popular TV political satire Polònia).

I believe that ‘esperpento’ is the reason why Laia and my other students laugh at the idea of President Rajoy welcoming the aliens. Contrary to what is often believed, the inability to imagine the aliens landing on Nou Camp or in Lucca has nothing to do with the alleged low technological level of Spanish and Italian societies, as both are like any other in the West in that sense. Nor is it a matter of occupying secondary positions in the world order, for District 9 and Monsters show that being a world leader is no longer a requirement to become the object of the aliens’ attention in SF movies. I don’t know how things work in Italy, but Spain is dominated by a terrifyingly low self-esteem which ‘esperpento’ tries to mask with humour. That might explain the lack of alien visitors.

I’m sure that many others have given far more satisfactory explanations of why Spaniards have generated ‘esperpento’ as a strategy to cope with Spanish reality. Also stuck in a similar post-Imperial decadence, England has reacted very differently–unless, that is, we come to the conclusion that The Hitchhiker’s Guide to the Galaxy and certainly Monty Python are also ‘esperpento’, and perhaps they are. Americans have also generated plenty of humour around the idea of the visiting alien. I’m thinking of some TV series: Mork & Mindy (1978-82), the sitcom that made Robin Williams a star; Alf (1986-1990), with its furry alien visitor; or Third Rock from the Sun (1996-2001). In US culture, however, the humour at the expense of alien contact is perfectly compatible with the countless examples of fictional American Presidents facing alien visitors or invaders in far more dramatic circumstances. It must be, as I say, a matter of self-esteem. Theirs is so high that Americans cannot conceive of aliens visiting other countries on Earth first–I’m sure they would be flabbergasted if the aliens chose China.

One of the saddest films I have seen on the topic of alien contact is Óscar Aíbar’s bitter-sweet Platillos volantes (2003). This movie tells the pathetic real-life story of José Félix Rodríguez Montero (47), a textile worker, and Juan Turu Vallés (21), an accountant at the same Terrassa factory, near Barcelona, who committed suicide on 20 June 1972. Following the supposed call of the aliens and believing that they had somehow mutated, the two men laid their heads down on the tracks of the Barcelona–Zaragoza railway line, convinced that dying was a one-way ticket to Jupiter.

I’m not the only spectator to have seen in these two poor deluded men the shadow of Don Quijote and Sancho Panza, although both seem to have been Quijotes. Aíbar’s film is a singular portrait of a naïve, poorly educated Spain easily misled by fantasies of alien contact, as I remember from my own childhood (yes, I did believe in aliens then… now I want to believe). If this were an American film, of course, José Félix and Juan would have eventually met the aliens, proven everyone wrong, and been carried away to Jupiter and beyond by a breathtaking starship. Being Spanish, the film is dominated by Cervantes’s legacy and so must punish those who dare fantasize–or, rather, since this is a real-life story, the director is conditioned by Cervantes’s legacy to choose this sad tale, rather than a more uplifting fantasy, for his film. True, he made amends with El bosc (2012), but the damage is done.

To sum up, then, Cervantes + ‘esperpento’ + Spanish post-Imperial low self-esteem = no alien contact. And just in case you were thinking of this, yes, Rajoy, with his nonexistent English and his frequent gaffes, seems to embody much of this inconvenient mixture. It is certainly easier to imagine Pedro Sánchez, Albert Rivera, Pablo Iglesias or Alberto Garzón, perhaps even Soraya Sáenz de Santamaría (?), engaging in elegant alien contact on behalf of Spain.

Perhaps a clear sign of the decadence of American world leadership is that soon we may have to imagine Donald Trump welcoming the aliens–now, that’s ‘esperpento’…



April 8th, 2016

This post is inspired by two sources: one, the article “The 2015–16 TV Season in One Really Depressing Chart” by Josef Adalian and Leslie Shapiro, published online in Vulture; the other, the collective non-academic volume Yo soy más de series (2015), in which I have participated with, once more, an article on The X-Files. What these very diverse sources show implicitly is that the current boom around US TV series may well result soon in the destruction of television as we know it.

The article, quite brief, comments on the declining ratings for the “old-school broadcast networks” in the United States, or “Big Four”, regarding their current star product, namely, fiction series. They have lost “about 7 percent” of viewers for “first-run programming” since the 2014-15 season, “continuing a pattern of substantial decline” in the last few years. The problem is attributed mainly to “a paucity of breakout hits”, even though what seems more worrying is that “audiences appear to be abandoning established shows”, usually in the second or third season. You may next check the chart accompanying the article, which shows the ratings for a long list of series or, as they call them in the USA, ‘shows’. The authors claim that audiences have stopped being loyal to their favourite shows: “Now, in the era of viewing on demand, it seems audiences are increasingly having sordid affairs with new shows and then quickly moving on”. Of course, the problem for the broadcasters is that Nielsen ratings connect directly to revenue for, remember, TV is basically one long ad interrupted by programmes. Streaming services have started competing for what the writers call “eyeballs” (the part for the whole, you know?), seemingly forgetting that the money business companies can spend on advertising is not unlimited.

Now let’s turn to the really juicy part of the article. Yes, you guessed right: the readers’ comments–far more relevant and informative than the article itself. Here are some highlights (see with how many opinions you identify):

*(…) there is such an over saturation of shows that it is forcing people to really pick and choose what they want to watch and thus people are ditching poorer quality shows that don’t work for them anymore. Or ditching long running favourites that have run out of steam.
* [the lengths of US network seasons] 21-25 episodes is just ridiculous, it’s not conducive to making a good product.
*(…) ad-based business models result in content that puts audiences into a soporific state conducive to being influenced by ads, while subscription-based models favour content that locks in passionate fan bases.
*Networks need to cultivate small, passionate audiences for their shows and recognize that the audience is now so splintered that huge audiences will be rare one-offs for special events.
*Cable and streaming services are investing in creativity, giving writers and creators more freedom to make interesting things. The Networks are sticking to the old formula, and seeing fewer and fewer returns. It’s a loosing game. They’ll be gone in a few more years.
*By the time Nielsen’s gets with the times, broadcast will be defunct anyway and all the shows will be on streaming services, which know exactly who is watching what and when, but has no motive to share that info with anyone else.
*Not only are networks competing with cool streaming shows that are new (…) but there’s entire runs of old series to discover.
*I will never again watch a new network show. Why bother getting invested in a show that is likely going to get cancelled? I vastly prefer the Netflix way.
*Loyal viewers are going to be more important than massive numbers in the future.
*I have a lot of shows I love and a lot of shows I like. I don’t care if they are on networks or not. I’m not depressed by this. Sorry.

And my favourite comment: “Thank God for books”.

Look at the paradox: the networks have always broadcast series, but something changed about 25 years ago (arguably) with ABC’s Twin Peaks (1990-91), which proved that audiences were willing to enjoy new kinds of TV fiction series. Then the TV model changed radically with the introduction of new local and national channels (think Fox), satellite and cable TV. The current model also includes internet streaming services, of which, obviously, Netflix is the most popular one right now. What all these diverse ways of watching fiction on a smaller screen (TV, computer, tablet, cell phone…) have in common is that they trust series to keep them afloat–logically, since series have that strange quality: they may last for years and keep an audience loyal to a channel/service (or so it was assumed). What broadcasters of any type don’t seem to realize is that the personal viewing time of each spectator (eyeballs, argh!) cannot increase at the same pace; hence the new ‘disloyalty’. Spectators feel that the market is indeed oversaturated and so navigate it as well as they can: some give up TV for good, others give up certain series. All tend towards the same goal: controlling their viewing time regardless of network interests and desperately old-fashioned Nielsen ratings. What is at risk, in short, is not the fiction created to fill our smaller screens but any TV business based on advertising, even TV consumption itself.

Now to the book, Yo soy más de series, coordinated by Fernando Ángel Moreno. You will find in it articles dealing with 60 series, all of them American with a few British and Japanese exceptions (Spanish TV is represented by El Ministerio del Tiempo). The articles are very different, some are 100% academic, others 100% personal and informal, some (like mine) a combination. Having read its extremely appealing 472 pages, the impression I get is of a gigantic collective failure by American TV series’ creators to produce truly solid work. Actually, this is my personal point of view and, of course, I have sought confirmation in the volume.

I have often voiced my post-Lost opinion that a narration that begins with no firm plans about its ending is not to be trusted, which is why I very much prefer mini-series. When you try to stretch a series beyond its natural run, when the series ‘jumps the shark’, then the series is doomed and what started as an exciting tale ends up as flat as a bottle of champagne left uncorked for a week. And this is what I see again and again in the articles of Yo soy más de series: with the exception of the mini-series (I, Claudius), of the series planned for a limited number of seasons (Babylon 5) and a few honourable exceptions, most series outstay their welcome. The reason may be that, as one of the comments I have reproduced notes, the number of episodes per season is too high, but whatever the reason, very few series can maintain the same level of interest and creativity for long. After the second season, which is when ‘eyeballs’ start looking elsewhere, the plot lines get more and more twisted as writers and producers run out of ideas, struggling desperately to go on. The shows then enter a sort of entropic process of decadence that leads to their eventual implosion.

Funnily enough, I’m writing this at a time when The Big Bang Theory is keeping me glued to my sofa for hours at a stretch at least once a week. Typically, we decide to watch a couple of episodes and may end up watching eight in a row (they’re 20 minutes long). I am not following any other series and, frankly, after reading Yo soy más de series the only one truly tempting me is Breaking Bad; we’ll see… An advantage of sitcoms like Big Bang, I find, is that it’s somehow harder to feel disappointed, for they do not make such high claims as drama series do to being avant-garde narrative, even better than novels… If there is an opinion I hate about TV series, it is that one. I feel, in short, refreshed by Big Bang but oversaturated by soap-opera products masquerading as great TV–like Game of Thrones or Homeland.

I’ll finish with the story I tell in my own article for Yo soy más de series, a story I have already told many times: TV in Spain is paying a high price for having despised spectators in the past. If TeleCinco had not arbitrarily cancelled The X-Files just when the internet was entering Spain, we would not have rushed to become TV pirates. Once learned, the habit will not be unlearned. Illegal downloading is, simply, a central aspect of TV consumption in Spain, which does not seem to be the case in the USA. Here satellite, cable and streaming are, I’m 100% sure, second to piracy, which is no doubt as popular as actual TV broadcasting, if not more. I wonder how Nielsen deals with this when it counts Spanish ‘eyeballs’, for we all wear a pirate’s eye patch.

Soon, if not tomorrow, ‘TV’ series will drop the ‘TV’ part of their name to be called something else, perhaps just ‘series’, for they will no longer be connected with watching television at all. Nielsen be warned.

Comments are very welcome! (Thanks!) Just remember that I check them for spam; it might take a few days for yours to be available. Follow the blog updates on Twitter: @SaraMartinUAB. You may download the yearly volumes, and see my publications and activities, on my personal web.


April 3rd, 2016

Channel-hopping a couple of Saturdays ago, I came across the documentary Classic Albums: Nirvana – Nevermind (2005) on BTV, the excellent local Barcelona TV channel. BTV is, as far as I know, the only public channel I have access to that bothers to broadcast a weekly series on popular music, simply called Música Moderna. Amazing how music has disappeared from public TV in Spain… I think back to all the variety I could get as a young girl and I’m truly mystified (thank you, by the way, Paloma Chamorro, wherever you are, for La edad de oro!). Anyway, I digress (or not, as you’ll see). The documentary stirred in me plenty of feelings, memories and impressions I had almost forgotten and I’m trying to make sense of them here.

Two years ago I attended the ‘16th International Culture & Power Conference’ at the Universidad de Murcia and had the great pleasure of listening to my colleague Rubén Valdés (U. Oviedo) deliver a paper on Joy Division. Finally!, I told everyone present, we are starting to deal with the aspects of anglophone culture that matter so much but that we dare not acknowledge in our academic work. After the ensuing exciting conversation, I came up with an idea for an article on the role of popular music in awakening English Studies scholarly vocations in Spain, thinking in particular of my generation, those born in the 1960s, for whom Joy Division’s music had been an undeniable inspiration. I even drafted a questionnaire, which Juan Antonio Suárez kindly reviewed for me, but I haven’t been able to find the right moment to get down to work. Maybe after this post… My thesis, as you can see, is that popular music, in combination with Literature, cinema and TV, played a major role in leading many young aspiring scholars in the 1980s to choose ‘Filología Inglesa’. Many of us learned about Britain and the United States through their popular music: translating lyrics from English was, I’m sure, a favourite activity, as was attending concerts both in Spain and, with luck, in the UK and the US. We knew that this would never be the subject of our ‘proper’ research but the music never stopped playing.

Or did it? In my own case ageing has brought an increasing intolerance of background sound, which means that I have progressively lost the ability to work while listening to music–now I need total silence. My otolaryngologist has given me very strict instructions not to use earphones and to attend very loud pop and rock concerts only sparingly… And so I have little by little disconnected from the indie avant-garde I used to know all about, also because now I easily lose my way in the endless lists of new bands, emerging one day and gone the next. Students have also changed. Years ago I could rely on their suggestions, but the last time I used music in my classes (‘Literatura anglesa del s. XX’, 2012-13), I found that they didn’t know who Kasabian were… Now I myself don’t know what Kasabian are up to, if they’re still together at all.

Back in, I think, 1999 I visited Professor Simon Frith in Glasgow, no doubt the main anglophone academic specialist in pop and rock. My students have heard about this visit countless times… I asked him how he had managed to develop his amazing career in this field and he gave me a golden recipe: use an impeccable scholarly methodology and nobody will be able to object. I soon wrote a piece on US 1990s Goth star Marilyn Manson, and I have subsequently written on gender in music videos, Scottish female singers, Linkin Park and, my favourite piece (with Gerardo Rodríguez), on Kylie Minogue.

Year in, year out I promise myself that I will teach an elective on pop and rock but I don’t seem to find the moment, feeling a bit hampered, too, by not being sure about which methodology to use to assess students. Like many other teachers, I assume, I have used lyrics in introductory survey courses to complement poetry. I used to ask students to choose their own lyrics and it was funny to see how they offered quite conservative proposals, thinking they would please me (Bon Jovi???). There was a girl absolutely surprised that I chose Linkin Park’s “Somewhere I belong” for class analysis… but that was so long ago. One of my fondest memories is a really hilarious first-year session in which we tried to make sense of Nirvana’s “Smells like teen spirit”–just give it a try: it’s worse than The Waste Land.

The other fond memory connected with Nirvana–also a bitter-sweet one–belongs to 6th April 1994, the day after Kurt Cobain (b. 1967) shot himself dead. I was 28, still a doctoral student, teaching somebody else’s syllabus for ‘Literatura Anglesa Moderna i Contemporània II’, the 19th century. I had to teach Walter Scott’s Waverley (1814), a novel for which I have not yet managed to feel any enthusiasm. When I heard on the radio first thing in the morning that Cobain was dead, I was taken back to the day when Ian Curtis, Joy Division’s singer, killed himself (18th May 1980). I told myself, ‘If that day was crucial for my generation, then today is crucial for my students’ generation’, and so I spent the whole 90 minutes talking about all the iconic pop and rock figures who had died too early and how this connected with the Romantic idea of suicide (poor things, my students!!). No mention of Waverley… except to say ‘Who can care about Scott today? Not me…’

I did not intend then and do not intend now to support the idiotic cult around early suicide (or youth suicidal lifestyles). I soon became a New Order fan, which I remain to this day, and I can only say that I have a great deal of admiration for how the members of Joy Division decided to move on and become a new band, full of energy and preaching a radiant, hedonistic pleasure in life. The same applies to Dave Grohl’s career after Nirvana. Courtney Love, Cobain’s widow, called him many names during the funeral, none of them nice; unlike her, I believe that suicides deserve compassion, but I’d rather not turn them into cult figures. What the documentary on Nirvana’s hit album Nevermind brought back (and perhaps Amy, this year’s Oscar winner, does the same for the late Amy Winehouse) was the explosion of talent before the regrettable early death. Cobain’s case is crystal clear: he simply could not cope with the sudden, massive success of his band. I know that there is a deep contradiction in choosing a career as a rock musician and not thinking of the consequences of success, but if you read, as I have done, Bernard Sumner’s recent memoirs Chapter and Verse: New Order, Joy Division and Me, you will see that this is quite a common contradiction.

Back to Nevermind (1991), what I most appreciated about the documentary was the insight into how inexplicable creativity of this very high quality is. The focus of the film is producer Butch Vig’s narration of how the album was made, accompanied by interviews with Nirvana band members Dave Grohl and Krist Novoselic. Vig gives many technical details about how he came up with the now classic Nirvana sound (the double voice tracks, and so on) and recalls beautiful accidents; the final version of “Lithium”, a song that gave band and producer countless problems, came from Cobain singing softly to Vig to demonstrate what he was after. Yet nothing and nobody can explain how everything cohered into the making of that landmark in rock history.

I love the album with a fan’s irrational passion and completely lack the musicological training to explain in scholarly terms why it is so potent–I can go on and on about male voices that transmit Romantic intensity (Curtis, Cobain, Chester Bennington…) but this is still an impressionistic approach based on a very personal preference. One thing the documentary seemed to highlight is that, as Michael Forsythe comments in the YouTube segment for the documentary, “The music scene today could sure use another Nirvana. I know there will never be another Nirvana but something to regenerate rock again and take it by storm before it dies”.

This might be basic generational nostalgia, though I think I’m not alone in feeling, like this person, that in the 25 years since Nevermind no other event has changed the course of pop and rock with the same power. New high-impact artists have emerged (think Beyoncé) but they seem to be more about image-packaging than about the music. Kurt Cobain’s dishevelled, grungy look couldn’t be further from that… I could joke, in very bad taste, that if he started his career today Cobain would end up shooting himself anyway rather than submit to the image manipulation that music artists now routinely accept. I have never seen a man with such a beautiful face make himself so ugly as a way to protect his music.

I’m thinking of the equivalent of BTV’s Música Moderna in 25 years’ time and wondering which classic albums will be revisited… By the way: has any PhD dissertation on popular music been submitted yet within English Studies in Spain? I had one in my hands but, you know what it’s like these days, the author had a full-time job, a family life… and abandoned it rather than, as he said, ‘lose coherence’. You don’t know how sorry I am…

Now enjoy:



March 29th, 2016

[This is my 400th post and I want to thank all of you, readers. I feel very embarrassed when someone sends me a message or approaches me with a kind word but it is also a great pleasure. I do hope you also get a little bit of that from reading my raving and ranting. Thanks!!]

One of the most exciting perks of being a teacher is how much one learns from students. Until last January I had no idea who Grant Morrison was and then, suddenly, my doctoral students Angélica and Matteo decided to enlighten me from very different fronts and without even having met. You won’t believe me but I heard the words ‘Grant Morrison’ from their lips on the very same day–serendipity! If you are a comic book lover, I’m sure you must be thinking that my ignorance of Morrison’s oeuvre is simply appalling… and I would agree now that I know that he is one of the greatest new voices in the recent renewal of the superhero universe caused by the ‘British invasion’ (Morrison is a Glaswegian). The Brits, he explains, “dragged superhero comics out of the hands of archivists and sweaty fan boys and into the salons of hipsters. In our hands, the arrogant scientific champions of the Silver Age would be brought to account in a world of shifting realpolitik and imperial expansionist aggression.”

I don’t wish to comment here on Morrison’s long career but on his essay Supergods (2011). This is an irregular volume, both because there are more accomplished introductions to the superhero and because Morrison’s self-portrait of the artist is too sketchy. The insights into his own career range from down-to-earth financial aspects to his candid report of a crucial out-of-this-world paranormal experience. All this, coming from the horse’s mouth, is fascinating but, as I say, not fulfilling enough. Guided by Matteo’s interest in the mythopoesis of the superhero and Angélica’s curiosity about Morrison’s approach to scientific notions of the multiverse, I have, nonetheless, quite enjoyed Supergods.

This is the opening week for Zack Snyder’s blockbuster Batman v Superman: Dawn of Justice, a box-office hit despite dismal reviews venting the same twin complaints: why do we take superheroes so seriously?, and why is Snyder’s bleak film not fun? I have not seen the film yet (I find the idea of Ben Affleck as Batman quite repellent) but I should say that we have been taking superheroes seriously at least since Frank Miller published The Dark Knight Returns back in 1986, thirty years ago… Here Morrison can help: “By offering role models whose heroism and transcendent qualities would once have been haloed and clothed in floaty robes, [superheroes] nurtured in me a sense of the cosmic and ineffable that the turgid, dogmatically stupid ‘dad’ religions could never match. I had no need for faith. My gods were real, made of paper and light, and they rolled up into my pocket like a superstring dimension.” As ‘supergods’.

As I have explained to Matteo, I believe that we are still missing a much-needed explanation of why Western mythology (Greek, Roman, Germanic, Nordic, etc.) has resurfaced, of all places, in the United States. As Morrison writes, “Like jazz and rock ’n’ roll, the superhero is a uniquely American creation. This glorification of strength, health, and simple morality seems born of a corn-fed, plain-talking, fair-minded midwestern sensibility.” Morrison points out that other countries have superheroes (the UK, France, Italy, Japan…) and offers the habitual explanation that Superman appeared as a fantasy aimed at compensating Americans for the ugly daily reality of the Depression Era and the horrors emerging in Nazi Germany. Still, I’m not convinced.

My husband, a habitual reader of comic books, suggests that I should explore the idea that, lacking the medieval tradition of the knight, America invented superheroes (Batman, remember, is the ‘dark knight’). I quite like his thesis but it cannot explain why superpowers were added to the figure of the hero/knight. If you recall, classic heroes were the hybrid sons of couples formed by an ordinary human and a divine individual; technically, then, they were demi-gods. I would agree that when the alien Superman lands on the cover of Action Comics in 1938, America generates a new version of the demi-god, though perhaps Morrison exaggerates by claiming that “Superman was Christ, an unkillable champion sent down by his heavenly father (Jor-El) to redeem us by example and teach us how to solve our problems without killing one another.” In view of the dark night of the soul that many superheroes have been going through since Miller re-drew the campy Batman as a brooding Gothic icon, Morrison certainly sounds overoptimistic when he wonders whether the superhero could be “the best current representation of something we all might become, if we allow ourselves to feel worthy of a tomorrow where our best qualities are strong enough to overcome the destructive impulses that seek to undo the human project”. I wish!

Perhaps because of my own atheism I feel far more intrigued by Morrison’s declaration that superheroes “may have their greatest value in a future where real superhuman beings are searching for role models.” It had never occurred to me that superheroes are a prefiguration of what we now call the ‘post-human’ and even the ‘transhuman’, yet this is indeed what they are. Think of the X-Men, above all. Nonetheless, as you can see, my student Matteo will have plenty to do in order to explain why myth resurfaced specifically in the early 20th-century comics published in America and how exactly the mythopoesis of the superhero genre has evolved over the past 80 years. The connection with the post-human scientific paradigm might be the missing element…

This brings me to Angélica’s focus on how Morrison’s awareness of current theoretical physics shapes his narrative style (in case I forgot to say, Morrison is a writer, not an artist/draftsman). Here we face two different questions: one commercial, the other personal, as you will see.

In Morrison’s words: “in place of time, comic-book universes offer something called ‘continuity’”. The many storylines owned by American comic book publisher DC “were slowly bolted together to create a mega-continuity involving multiple parallel worlds” aimed at integrating past periods in the life of re-booted superheroes (as we would say today) and new acquisitions. A singularity of superhero comics–possibly their main defining trait–is that DC and Marvel series have become “eternally recurring soap operas—where everything changed but always wound up in the same place”. The problem of how to prolong ad infinitum a successful character or series was solved, in short, by appealing to the idea of multiple narrative universes. This happened just when “string theory, with its talk of enclosed infinite vaults, its hyperdimensional panoramas of baby universes budding in hyperspace” started theorizing the existence of a multiverse (or our ‘multiversal’ existence). In this way, a plain commercial strategy was given an unexpected philosophical depth (I’m really serious about this–just in case…).

Morrison embodies better than any other current writer in any genre the confluence of the mythical, the mystical and the scientific, with an added in-your-face flaunting of his dabbling in the occult. A turning point in his career happened, he claims, one night in Kathmandu when he had an intense vision, courtesy of what he described as “chrome angels”. This experience introduced him to a new sense of time, apprehended not as a linear event but as a total simultaneity “with every single detail having its own part to play in the life cycle of a slowly complexifying, increasingly self-aware super-organism”. Morrison decided to explore this epiphany in his comic books as he tried to find an explanation for his own new superpower, an ability to see a 5-D perspective of objects and of life “as it wormed back from the present moment and forward into the future: a tendril, a branch on this immense, intricately writhing life tree”.

What is most original about Morrison’s neo-Blakean visionary capacity is that, without doubting its reality one iota, he grants that it could be due to a temporal lobe seizure (“would it not be in our own best interests to start pressing this button immediately and as often as we can?”, he proposes), a lung infection that almost killed him, or his massive consumption of a variety of drugs for a long time. Never wavering for an instant, he concludes that “Superhero science has taught me this: Entire universes fit comfortably inside our skulls. (…) The real doorway to the fifth dimension was always right here. Inside. That infinite interior space contains all the divine, the alien, and the unworldly we’ll ever need”. Myth, mysticism and science coalesce, then, in the superhero mystique, at least according to Grant Morrison. And if you’re willing to accept that writing fiction is opening the door to beings coming straight out of the universes in our skulls it all fits. After all, even the gods and God are creations of the human imagination.

I envy Morrison his happy, gleeful fusion of the rational and the irrational, and his ability to turn this exercise in tightrope walking into the very productive foundation of his career. I simply do not know enough about comic books to test his claim that superheroes are channelling our simultaneous need to a) bring the old gods into a world increasingly sceptical about God, b) maintain our falling ethical standards, c) supply a template for future post-human behaviour, d) connect us with the multiverse and e) inspire us to connect with our inner superhero. A very tall order indeed! I’ll trust Morrison, however, as he seems to know best.

After all, a world with no superheroes sounds, definitely, boring. And I don’t mean that they’re here to simply entertain us (this is just part of their truth) but also to re-connect us with parts of the ‘infinite interior space’ that our trivial daily lives are obscuring. Long live myth!
