THE BODY AND THE BODIES: AGAINST UNIVERSALISM (IN AFFECT THEORY AND IN HORROR FILM)

I have read with great pleasure Xavier Aldana Reyes’s new volume, Horror Film and Affect: Towards a Corporeal Model of Viewership (Routledge). I’m very proud to see him fast progressing into a truly first-rank academic, as it is always delightful for a teacher to see someone who used to sit in her classes now producing such excellent academic work. As it happens, my review of Xavi’s previous book, Body Horror, has just been published, long after I actually wrote it (see: http://atlantisjournal.org/index.php/atlantis/article/view/359). This time, however, rather than write a review (the book is outstanding, believe me) I’ll delve into my disagreements with Xavi’s theorisation of the body in Horror (capitalized to mean the genre).

The model of viewership he offers is not just corporeal but, to be precise, corporeal-affective. As he puts it, “corporeal threat is by far the most common affective experience in Horror and (…) a more rounded understanding of how Horror seeks to make the viewer experience fear is necessary” (16). Certainly, I couldn’t agree more: basically, when we see Horror films we fear for our bodies because, as Xavi (sorry, I can’t call him Aldana Reyes) argues very well, we connect with the threatened bodies on the screen through ‘somatic empathy’ (itself generated by ‘sensation mimicry’). We do not really identify with the characters (in many cases we may feel no sympathy for them at all) but we appear to feel with our bodies, as if these were their bodies. Correcting a generalized impression, Xavi argues that the Horror film viewer is not a sadist but a masochist, although here’s my first disagreement: having heard viewers very vocally demand that this or that be perpetrated on a victim’s body in the cinemas of Sitges during the popular fantastic film festival, I doubt very much that all viewers are masochists. Let’s suppose for the sake of argument that these bullies are just 5% of the audience and that they are not the filmmakers’ intended target audience. Even so, the risk of activating affects that pull in the sadistic direction remains.

Besides, sorry, but I know that this kind of reaction is much more common among men watching female bodies under attack in Horror films than among women seeing male bodies destroyed (women are generally more empathetic). Xavi’s volume, however, is very clear regarding why he will not consider gender: “Arguing for the continued need for studies that highlight gendered representations, I propose instead that the body in Horror, as far as its affective powers are concerned (and here I mean their capacity to scare and horrify, not to titillate sexually, which is not a general intention of most Horror) is largely ungendered” (16). But how do you separate the horror from the sexual titillation, if only in that hypothetical 5%? I absolutely agree with Xavi that the feminist/psychoanalytical approach used by Barbara Creed in her seminal The Monstrous-Feminine (1993)–based on Julia Kristeva’s notion of the abject as discussed in Powers of Horror (1982)–is not useful to illuminate how Horror film works. I praise Xavi for his demolition job and for cutting the Gordian knot: very obviously, when you see a film like The Thing, the fear you experience has nothing to do with “the primacy of the maternal body as principal guide or indicator of abjection” (29). It has everything to do, in contrast, with the state of special effects in the year the film was made, 1982, and with director John Carpenter’s skill in mixing image, sound, music, etc. Xavi, therefore, proposes that we liberate Kristeva’s abjection from the “psychoanalytical remit” (44) and re-conceptualize it as ‘fearful disgust’, which can be felt by any human being–any body.

Thus, he observes: “Because my approach entails a de-gendering of images of abjection, this means that the nature of affect needs to be theorised regardless of the gender or sexuality of the viewer and characters, and rather in terms of viewers’ acquaintance, tolerance and enjoyment of images of abjection” (71). Fair enough. Or is it? Accepting the importance of the cultural factors associated with the production and enjoyment of Horror films, but refusing to produce sociological analysis, Xavi stresses that his study is “theoretical and wishes to look at the way Horror ideally affects viewers” (98, my emphasis). This is where I begin to object, and quite strongly.

Affect Theory cannot be pinned down with precision, for it is quite a heterogeneous collection of conflicting currents. At the core of the area, however, there seems to be a staunch belief that the universal body exists in the same way the body exists for Medicine. This belief is propounded on the basis of the neuro-scientific foundation on which Affect Theory rests. I have, however, very serious doubts that the body exists in the sense intended here.

Surely, we are all one singular body and at the same time part of the universal body, a construction without which Medicine could not work. This science relies on the assumption that all human bodies function in exactly the same way, which is why, naturally, its techniques (from medication to surgery) are universally valid. That must also be the reason why every time I visit a new doctor and see them look at my body without caring who I am, I feel so confused. Anyway, I digress. Affect Theory and, generally speaking, neurology and the ever-expanding neuro-sciences are also applying this universalist view to the delicate connection between brain and mind. You can see from my recent posts that the study of this connection is slowly creeping into the Humanities with, arguably, little resistance. This is, I believe, due to our low self-esteem and to the generalized belief that ‘scientists know best’. This new fashion is, however, something I dispute. As doctors know, Medicine is not mathematics and bodies respond differently both to disease and to treatment. If the condition of your heart and your clogging arteries is cultural (i.e. directly connected to your consumption of the toxic food on offer in your society), what grounds are there to believe that affect is not also conditioned by culture? Meaning, in short, that I don’t believe a theoretical model of corporeal-affective viewership can ignore particular bodies.

This is not, by the way, sociology but Cultural Studies and, in particular, Reception Studies (and Theory, if you wish). As Xavi points out, one drawback of studying Horror film viewers in a laboratory situation is that the ‘artificial’ environment conditions their response. Fair enough: visit the Sitges film festival. There you’ll notice a few interesting things. One is the age of the audiences–you always find veterans who never lose their taste for Horror but the viewers are predominantly young (16-35). Also, let’s be frank about this, of the type colloquially called ‘nerd’, which, yes, does call for some kind of sociological study. Among them, the presence of girls has been growing in recent decades and is now not much lower than that of boys. Young women are certainly enjoying Horror films in a way unthinkable for, say, the generation born in the 1940s; you certainly don’t see groups of nattily dressed elderly ladies queuing at the Sitges cinemas. If you asked the viewers why they enjoy Horror, you would get many incoherent answers, which is why academic theorisation is absolutely necessary. In this sense, my impression is that curiosity possibly plays a bigger role than we assume–without leaving culture and personal identity aside at all, quite the opposite, and much less gender.

As I read Horror Film and Affect, I found myself considering whether I wanted to see some of the most extreme films analyzed there. As I have already noted here, Eli Roth’s Hostel (2005) meant the end of my interest in Gothic Studies, as my ‘somatic empathy’ was too high to allow for any kind of enjoyment (also, I totally rejected being academically complicit in the success of a film based on torture). Yet, reading about Pascal Laugier’s Martyrs (2008), which appears to be far more graphic in its depiction of torture than Hostel, I felt dominated by curiosity (academic or personal, I’m not sure). Funnily, my husband already had this notorious film on his list of Horror films to see. I don’t think, however, that my curiosity will overcome my somatic empathy. I’ll rely, then, on his report…

Now, this somatic empathy is an emotion provoked by the affects that participate in Horror films’ effectiveness but also a cultural factor–a crucial one. It is what has made us reject the use of (legal?) torture universally, beyond our beliefs in human rights. It turns out, in the end, that the ideal body that enjoys seeing Horror film is by no means universal: not only because somatic empathy is absolutely personal (as personal as the taste for, I don’t know, strawberry ice cream) but also because Horror film itself is the product of particular cultures. Yes, I respond to the startle effects (or shocks) of Japanese Horror cinema but its codes are very alien to me–and I wish I had never seen Audition. I marvel that my favourite startle effect (the facehugger jumping out of the egg in Alien) works every time I see it, whether I see the complete film or the isolated scene. But fancy showing that to a member of an Amazonian tribe who has never seen a Horror film. Yes, she would be shocked, but not at all in the same way I am–her shock might be massive, or she might burst out laughing, I’m not sure.

Sorry to use my own personal experience, but it’s the only one I know. I love Horror films which suggest there is something else beyond humanity, whether this is the Devil or an alien monster, a supernatural or a natural threat. I realize, however, that I feel increasingly repelled by the Horror films in which evil is caused by a sadistic person–usually a man. When I presented my first paper ever, back in 1994, it dealt with Clarice Starling in The Silence of the Lambs (1991), technically a thriller and not Horror, as the victims are killed off-screen (yes, Xavi, I agree). The feminists in the audience were horrified that I had enjoyed a film in which women were so savagely victimized; but, of course, the point was that you could not see this and I was (still am) fascinated by Clarice. As Hannibal Lecter moved on, however, the franchise lost all its appeal for me, since it became a series about a guy hurting people. And I reject this… particularly if the victim is a woman. I have walked out of a cinema only once, my body totally overwhelmed by Steven Spielberg’s ultra-realistic depiction of the Normandy landing in Saving Private Ryan. I have, however, left my husband alone on the sofa countless times whenever the Horror film we had chosen to see together eventually focused on cruelty against female bodies. Xavi avoids the issue of rape, which, as any woman will tell you, does make you very much aware that you’re a different kind of viewer from a man. No way could I watch Gaspar Noé’s Irreversible (2002), which might not even be Horror for a male viewer but is certainly Horror for a female one.

Let me focus on pain to finish. Xavi points out that “pain is normally either cast out or eradicated from public view” (176) and I’m wondering very seriously whether the fast advances in special effects in Horror films (and in general in any film in which bodies are destroyed) have to do with this. I was watching a documentary on the Holy Grail on TV which argued that Saint Lawrence might have brought the relic with him to Huesca. The churches in this city abound in images of his martyrdom: the poor guy was… grilled. In public. Not only martyrdom but also other public events of bodily destruction come to mind: the spectacle provided by Roman arenas, Medieval executions (think William Wallace…). We have removed the public spectacle of the broken body from sight, as we hide disease and even surgery (how does a surgeon react to contemporary Horror film effects, I wonder?). And it might well be that, like Saint Thomas, we need to see in order to believe. We hear torture victims describe their ordeal and automatically we ask ourselves ‘but what was it really like?’. If porn satisfies our curiosity about how people engage in sex, then 21st century Horror film most likely satisfies a similar curiosity about how bodies are broken in pain. I’m writing this on the day yet another terrorist attack (this time in Istanbul’s airport) has caused a terrible massacre–bodies unseen on TV. The more we fear pain, in short, the more we need to face it vicariously and this is the urge that Horror film is satisfying. If you are already in pain, or if this curiosity has never arisen or is already satisfied, then there is no need for Horror movies.

I have many other questions to ask: after how many Horror films does an aficionado start losing the edge? Are the affects generated by Horror film different depending on the situation in which the viewer is placed (alone/in company, at home/in a cinema, at night/during the day)? How do actors feel seeing their bodies used in this horrific way? Who provides the main innovations: directors or FX artists? And what about sound and music?

A theory, logically, is a proposition (a hypothesis) that needs to be tested and Horror Film and Affect is transparent about this: the volume is an invitation to go and ask. Forget psychoanalysis, ask filmmakers and everyone involved in Horror films how they pull the strings (and who they’re thinking of when they envision their terrible images). I’m sure that the more we ask, the more blurred the universal body will become and the more visible particular bodies will be.

Comments are very welcome! (Thanks!) Just remember that I check them for spam; it might take a few days for yours to be available. Follow on Twitter the blog updates: @SaraMartinUAB. You may download the yearly volumes from http://ddd.uab.cat/record/116328. See my publications and activities on my personal web http://gent.uab.cat/saramartinalegre/

THE INCREDIBLY SHRINKING UNITED KINGDOM: ON BREXIT

This is a time-capsule post, of the kind that gets written with the author expecting to check in five years’ time what really happened.

Like many people all over the world–as shown by the instantaneous collapse of the stock market–I expected Britons to have voted in favour of staying in the European Union. This is a people known (at least until today) for their pragmatism and common sense. Clearly, though, many things have gone the wrong way and we’re witnessing today, with a sour taste in our mouths, the success of the Brexit campaign. 23 June 2016 has been hailed by The Sun and by UKIP leader Nigel Farage as ‘independence day’… You know that something is bad when, in addition to this absurdity, Donald Trump claims that this is good (a prediction for the time capsule: Trump will fail by a landslide to be elected President of the United States later in 2016). ‘May you live in interesting times’, the Chinese curse goes.

I have titled this post ‘The incredibly shrinking United Kingdom’ because I see the UK even further diminished in its global position by this strange manoeuvre. Let me get a couple of ideas out of the way before I continue. To begin with, someone should change the rules of referendums, for in the end the percentage difference between the two options has been quite small: 52% to 48%. This is not a clear victory but a divided country. Now, if Brexit goes horribly wrong and throws the UK into the waste-basket of History (a phrase often used in Catalonia in the last year), can the overruled 48% hold the other 52% accountable? Obviously not. This is why this kind of potentially very dangerous decision should be made by a much wider margin, at least 65/35. You need to be sure that your victory (or defeat) is final and this is not at all the case today. Second point: Brexit is, no doubt about it, a clear sign of the European Union’s failure to constitute itself as anything more than a commercial union. I cannot imagine Donald Trump celebrating the secession of, say, California; the fact that he’s toasting to Brexit today means that the EU is not at all a union in the way the United States is. As a Catalan, I would not like to be left outside the EU, which is one of my main doubts when considering a possible independent Catalonia. At the same time, the EU has utterly and completely failed to inspire in us, Europeans, the feeling that this is what we are, in the same way Californians feel that they are American.

I’ll add a third point: that the UK leaves the EU is particularly poignant because, even though it was not one of the founding members of 1957, when the union was born as the European Economic Community–a name bearing all the seeds of trouble to come–it joined as early as 1973. British disaffection with Europe is an extremely complex issue, which many others have analyzed with better, finer tools. Nonetheless, this disaffection has its roots in the perception that the UK is contributing more than it is getting out of the EU; European solidarity is based, after all, on the idea that the richer states must help the poorer ones, which is why Brexit will certainly be a terrible blow for us, in Spain. Now, this suggests that Germany should be happier than any other nation to abandon the EU, but the Germans do see that the union is needed if only because, let’s be clear about this, cheap labour is to be found in its southern and eastern areas. The Britons are right now too blinded by an oddly euphoric chauvinism that won’t let them see that European migration is not the problem but the solution for their economy. I’m aware that much of Brexit has to do with Britain’s wish to decide for herself which migrants to admit to her shores, but the vision of an all-British workforce is not only treacherous but also downright silly.

If we accept the argument that staying in the EU brings more economic benefits than staying out of it–and I think this is a powerful argument, because the previous arrangement of nations in Europe led to WWI and WWII–then we need to wonder what is being pursued with Brexit. It is not impossible to imagine a future scenario in which Scotland is an independent nation and a member of the EU, and in which Northern Ireland is unified with Ireland for the same reason (or Gibraltar decides to return to Spain). There is, then, a very real danger of national dismemberment, with even England/Wales being sharply split between pro-EU London and the rest. How the British (English?) economy can thrive even supposing the UK’s split is prevented is beyond me. Norway is doing fine on its own without being an EU member (and so is Switzerland). However, part of their success has to do with their being very realistic about which role they want to play in the world: a marginal one (at least politically; I would not say the same about Switzerland and world finances). Oddly, Brexit supporters dream of a Britain which is not only free from EU restrictions (so they claim) but also a powerful nation in the world. As in Victorian times.

This is the way in which Britain is shrinking: it has lost track of its dwindling importance both within Europe and in the world. In Spain we’ve gone through that: we used to be the biggest Empire in the world, remember? Being rich didn’t suit our (or rather, the Castilian) temper very well, which is why we went downhill all the way into bankruptcy and even invasion by Napoleon. A series of independentist uprisings eroded the Empire little by little until it was finished off in 1898, when the United States pushed us out of Cuba. The Civil War (1936-9) happened when the Republic was getting Spain used to the idea that we should be a modern European country rather than an ex-Empire. Yet the band of ultra right-wing nostalgics headed by Franco fought its way into what the Brexit campaigners now want: autarky (total self-sufficiency). I do know that the parallelism between Britain and backward, isolated Spain, which only joined the EU exactly 30 years ago, does not hold. Yet the lesson we learned after Franco is that imperial glory will not feed people; we very humbly accepted the crumbs at the table of the rich EU, briefly believing before the 2008 crisis that we were finally one of the diners. The UK went through its worst economic crisis back in the 1970s and the 52% who have voted ‘out’ today seem to feel confident enough that, no matter what, they will stand on their feet and do fantastically well in terms of economics, politics and general prestige. As an English Studies specialist I can only call this position neo-Victorian.

This is, naturally, a deeply false position to be in. Whereas those who want to stay in the EU have given a long list of reasons why leaving it would be negative, the Brexit campaigners have given no truly valid reason to leave, other than wounded pride. Most likely, they imagined a Europe in which the UK would be the leading country and simply cannot accept that the leader is Germany, the hated enemy of the past. Somehow, they have managed to convince themselves that the United States will play a crucial role in this post-Brexit British Renaissance, even though President Obama warned Britons against Brexit. As a Catalan I feel that the path taken is far more uncertain than independence. If the Scots voted to leave the UK, they would be doing so in the hope of doing much better on their own, including the possibility of joining the EU. But the UK has not voted for independence, no matter what UKIP says, but for isolation, which is a completely different matter. Britain was isolated in Victorian times, in the sense that it did not belong to any international association, and was extremely powerful. Now that same solitary status is what Brexiters want to recover. But, then, what’s next? Leaving the United Nations?

I’m flabbergasted–that’s the word I was looking for. I simply don’t understand how a civilized nation can make this very obvious (right-wing) mistake in the 21st century. It must be the influence of so much SF, but I always imagined the world eventually converging into a world-wide federation, Star Trek-style. What I wonder today is not why the Britons (well, 52% of the 72% who voted, that is to say, around 37% of Britons over 18) want to leave the EU but where they think they are going.

Among the many questions about the future of the EU that I heard this morning on TV, here’s the one closest to home for an English Studies academic: will English still be an official European language? That’s a good one… Everyone, start learning French and German as fast as you can…

RE-INVENTING EXAMS: AN EXPERIENCE

It’s June and these days we’re busy marking exams. We’re also busy wondering why we give our students exams and what use they are (the exams, not the students!). What use assessment is, in fact. I have just entered the final marks for the course I have taught this semester and they are exactly the same marks I would have awarded each student one week after meeting them. Funnily, their marks did not depend on just a final exam but on four different items, with their corresponding percentages, etc., all of it requiring Excel to be worked out… I don’t know what this says: that assessment only validates subjective impressions, that assessment rates not the exercises but the person, that I am such an experienced teacher that I know at first sight how students will perform (ahem!), that I could have saved myself a lot of hard work… Take your pick.
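For the curious, here is the kind of weighted-average arithmetic I mean, as a minimal sketch in Python rather than Excel; the item names and percentages are merely hypothetical, since I am not detailing the actual breakdown of my course here:

    # Weighted final mark from several assessment items (hypothetical names and weights).
    WEIGHTS = {"exam_1": 0.30, "exam_2": 0.30, "essay": 0.25, "presentation": 0.15}

    def final_mark(marks):
        """Return the weighted final mark, on the same 0-10 scale as the items."""
        # Sanity check: the percentages must add up to 100%.
        assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must add up to 100%"
        return sum(WEIGHTS[item] * marks[item] for item in WEIGHTS)

    # Example: one student's marks per item yield a final mark of 7.4.
    print(round(final_mark({"exam_1": 7.0, "exam_2": 6.5, "essay": 8.0, "presentation": 9.0}), 2))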

This was an elective course and, for assessing this type of course, I always prefer a paper to exams. This time, however, I decided to use exams for assessment, apart from a short essay written at home and a class presentation. I hated exams as a student and do not particularly like them as a teacher. One of my colleagues claims that we should never ask students to write papers, for they plagiarize all the time–which is an exaggeration… though plagiarism is also a constant fact in teachers’ lives. My position is quite the opposite: I do not see any equivalent situation in real life in which people would have to write a piece of academic work in a tightly limited time. I associate this, rather, with journalism and newspapers’ daily deadlines. Otherwise, why would anyone produce a piece on Wordsworth’s poetry, to name the first case that comes to my mind, in a very short time? It’s simply ridiculous. I’m rather of the persuasion, then, that exams only measure people’s ability to take exams. Or negative ability–I always performed well but only after bouts of nausea and vomiting that did nothing for my faith in the use of exams. I must have taken my last one back in my doctoral days (I’m not counting vivas or oral examinations for tenure) and that surely was a happy day.

Accordingly, I play all kinds of tricks, whenever I can manage it, to deconstruct exams. I believe that good academic work requires a reasonable time of preparation (not just of cramming) and I’m known to have given my students the exam questions in advance. I don’t care very much for failing students and I find that the students who fail my courses usually trip themselves up by not handing in exercises or not taking the exams. If I get the chance, however, to help my students do well enough for me to pass them, I’m happy. This is not the same as saying that no matter how they perform I’ll pass them, but rather that I don’t want anyone vomiting before taking one of my exams. I just want them to have studied and, above all, to have planned their exam answers at home. I found out, however, that students given the exam questions in advance got quite nervous for a reason I failed to anticipate: if you know the questions in advance, then a good deal of the justification for failing vanishes. Who would have thought that Prof. Martín would ask such a devious question? That seems to be the kind of thinking that comforts students who do poorly. Now, if Prof. Martín puts her questions in your hands, thus eliminating the surprise factor (not necessarily the deviousness), that’s another matter. Your inability to plan the answers is highlighted, which is, let’s be honest, much more embarrassing than simply being unable to answer a question you could never have anticipated.

You might argue that surprise is the whole point of exams, its target that collective groan you can hear when students find the questions too hard. This just happens to be a sound I do not enjoy (my exams, in contrast, seem to be the source of much sighing…). Anyway, what happened when I gave students an exam to take home, consider and plan is that a few still failed. I certainly felt less responsible for their failing, if you know what I mean. What I noticed was that the effort involved in the actual writing in class was similar: the same stream of sighs, the same flushed faces and always the lack of time (some students would run out of time even if given five hours instead of two, it seems). The pressure had eased, the quality had increased, hopefully there had been no vomiting but, then, it was still an exam written by Prof. Martín.

For my latest course, I have tried another tactic: having the students write their own exam. In hindsight, I realize that no exam questions of mine could ever match the deviousness of this proposal, but let me say that I was not acting wickedly but in good faith. The group was small, only 15 students, and I explained that they should write a two-question exam using our habitual Department format: select a passage from the book we’ve studied (maximum 10 lines) and ask a question that can be developed into a short argumentative essay (maximum 500 words), referring both to the passage and to the book in question. It was clear to me that students would be very uncomfortable if I didn’t check their questions, so I gave myself the task of validating each exam a few days before the corresponding exam date. What I found is that students wrote, on the whole, perfectly valid questions, just badly phrased. Some of the questions were simply too big in scope for a short essay but could mostly be re-used; others came multiplied by two or three (students seemed insecure about which version to use). None of the questions was insultingly easy to answer, and here’s where I noticed my own deviousness.

Imagine my students telling their peers in other courses ‘Sara has allowed us to write our own exam questions’. The answer from said peers would be ‘My, you’re lucky, now you can’t fail!’. Now, most of my students probably replied at this point what one of them told me: ‘No way! This is the hardest thing I’ve done in my life!’ Why? Because they quickly realized that my proposal to let them write their own exams would not result in easier exams–no way would I validate shallow questions. Therefore, they had a twofold task: produce the kind of exam I would write myself and do it so that they could secure a pass. Lacking feedback from them (I have asked, and I’m waiting) I can only surmise that they told themselves: ‘OK, so I need to make things as easy as possible for myself while writing the exam as if I were a teacher’ (the demanding Sara Martín, in particular). Sure–only I had not gone that far in my own thinking when I proposed the experiment.

Exam one went well: nobody failed, though I believe that nobody performed at a higher level either than if I had written the exam myself. My guess (I need the feedback) is that students were more relaxed and confident about what they were doing, having got the annoying surprise element out of the equation. My colleagues say that written exams have the added bonus of offering exact information about each student’s actual command of English. Maybe. By giving students the questions, or asking them to write their own, I also expect them to work on their English at home and produce far more polished exams (much easier to correct for me, too!). I don’t know how they do this: if it were up to me, I would write the answers at home (using the dictionary, etc., etc.), try to memorize as much as I could and then write them out in class. It might well be, however, that they memorize outlines, I don’t know. I’m sure, though, that many language doubts and errors can be ironed out at home. This is fine by me, for in this way they had to learn some English language in order to prepare the exam, in addition to the English Literature.

For the second exam I asked students to produce questions combining a passage from the primary source with one from a secondary source. I’m not sure whether this was my fault or not, but it seems that my instructions were a bit ambiguous about which secondary source to use. I had asked students to read one article for each novel and I expected they would use the ones I had selected for them. However, some students just chose other articles, which was a bit complicated to negotiate. I validated their exams eventually. What I found, and this was both silly and funny, was that the most complicated thing to do was validating the exams in which my own article was quoted. I sensed a kind of mutual embarrassment: students seemed to feel a bit awkward writing ‘As Martín claims’, which was not the case when they wrote ‘As Vint claims’ or ‘As Frelik claims’, for they didn’t know these academics personally. On my side, I found myself disagreeing with how students read my own article, even though their questions were perfectly valid. It felt very, very strange to be ‘Martín’ rather than ‘Sara’, as my students call me in our informal Department.

In both exams the contents reflected very accurately what was being discussed in class. The questions referred, with no significant exceptions, to the issues we had discussed together, though the passages chosen were not necessarily the ones I had selected for class discussion. The exams were, in short, more personal and less ‘parasitical’ on class discussion than I expected (this was my main fear). Some exams, particularly in the second series, were actually quite sophisticated. When marking them, I often marvelled that students who knew nothing about SF a few months back were confidently discussing artificial intelligence, genetic manipulation or post-humanism. Happy, then, as far as I’m concerned.

I failed, however, in just one thing. I decided to use exams because I wanted students to read the five novels in the course–if they wrote a paper, they might read just one or two. What I failed to notice is that the second exam, covering three novels, should have been much longer, perhaps two and a half hours rather than one and a half. And we simply don’t have that kind of time. In the old days exams had a separate schedule, apart from teaching time. One of my own teachers was famous for giving us exams that could run for four hours or more. Since 2009, however, exams have been part of our teaching time, which means that the more exams you introduce the less time you have for actual teaching; also, that they need to fit our 90-minute slots. Either I introduced a third exam or I let students choose two of the three novels for the second exam, which is what I finally did.

Was the experiment worth carrying out, then? Certainly. I think that the class size was ideal, as validating the questions–not an easy task!–could be done in a reasonably short time. Also, these were fourth-year students. I don’t see myself repeating the experiment with second-year students in my Victorian Literature course because a) they would panic, and b) at about 50, the group is too big and validating the exams would consume too much time and energy. I believe that my experiment in tailor-made examination shows that asking everyone the same question is a bit of an absurdity, for each student is motivated by different things in the same book. What we do when we ask all the students in our class the same question, then, is just something convenient. We tend to ignore the fact that, beyond what each student has studied in preparation for the exam, some will automatically do well (or badly) because of the nature of the particular questions. I’m talking about Literature, not mathematics, of course…

Now, I would be really glad to get feedback from my students…

GONE LOVE: FROM ELIZABETH BENNET TO AMY DUNNE

I assumed that there would already be a handful of academic articles on Gillian Flynn’s 2012 best-selling novel Gone Girl, adapted for the screen by David Fincher in 2014 from a script by the author herself. Not at all. My university’s meta-searcher, Trobador, has returned 712 results, only 2 of which appear in the MLA database–both turned out to be journalistic pieces (a third piece is a biography of Flynn). Cultural Studies has always been accused of being merely academic journalism, too concerned with the contemporary–with the transient and the banal in the worst cases. I’m sure that pieces about the Spice Girls now read as terribly dated and as obscure as the analysis of a 16th century villanelle, yet I do believe that academics should react promptly to their surroundings and I’m quite surprised that Gone Girl has escaped our collective radar. Perhaps it’s just too early and a flood of articles on Flynn’s novel is now going through the sluggish process of peer review… After all, I decided to read this novel (not quite my cup of tea…) only after listening to my UB colleague Cristina Alsina deliver an excellent paper during a recent seminar on crime fiction and the family, so there you are.

June 2016 is, of course, too late to review a book published four years ago and this is not what I intend to do here: I wish, rather, to consider why the trope of love and marriage is receiving such degrading treatment in contemporary fiction. For this is the case in Gone Girl to an extent that is simply painful to read.

One of the best books I have read on romance is David Shumway’s Modern Love: Romance, Intimacy, and the Marriage Crisis (2003), which I discovered only after the author finished a three-month stay in my Department, during which I found it impossible to connect with him at all… Shumway explains with great lucidity that what is destroying long relationships–at least in US society–is women’s constant demand for total intimacy, inspired by the traditional romantic discourse originating in fiction. This may sound misogynistic in my simplified summary but it is not at all: Shumway is also adamant about American men’s inability to respond to that demand for intimacy even at the most basic level.

The main contribution, I believe, that Gone Girl makes to the fictional representation of love and marriage is turning intimacy into the most brutal form of hell one can imagine, a life sentence. The romantic ideology that women have incorporated into their relationships with men, Flynn argues, is fundamentally psychopathic, as her heroine Amy proves; men, like Amy’s average husband Nick, try desperately to avoid needy women for, understandably, the stress of responding to a constant demand to be loved and to be fully understood is impossible to sustain. The joke in Gone Girl, however, is that Nick tries to escape his wife’s extreme ideas of what constitutes a successful marriage only to find that his young mistress is as needy as Amy (though, of course, sexually more pliant). Ironically, Flynn does give Nick a perfect relationship with a woman, his twin Margo, perhaps suggesting that being siblings (free from “twincest”) is preferable for men and women to being married.

If you pare it down to a basic plot line, Gone Girl deals with the extreme measures to which a wife may resort in order to keep her cheating husband. Divorce is simply impossible because neither Nick nor Amy can go back to being the autonomous persons they were before they met (if that’s what they were). Second lesson about modern love, then: marriage annuls the capacity to be yourself, as many recently divorced people find out. Gone Girl is a horrifying novel, then, not just for how far Amy takes her radically sick romantic ideology (and for how Nick eventually responds to it), but for what it tells us about marriage, particularly in American life and fiction. The acknowledgements turn Gone Girl, besides, into an even more bizarre product, if that is possible, since Flynn thanks her husband with total enthusiasm for his support (and for having married her!). If I were her husband I would, however, worry: Flynn’s grim novel was intended, as the author declared, to make couples consider each other with suspicion and wonder ‘who are you?’ Now: how does this connect with the author’s own happy marriage, I wonder?

Why, then, the insistence on the marriage plot if all the couples we know are actually unhappy? There seems to be a kind of circularity at work: the romantic idea of the happy marriage for life was constructed by the first novels and now the novel is deconstructing it. This, of course, is based on a misreading of the original fictional romance. Take Pride and Prejudice (1813) and you will see that Elizabeth and Darcy are surrounded by unhappy married couples, beginning with her parents, the Bennets. Actually, Austen plays a peculiar conjuring trick by placing the disagreements that eventually surface in most marriages at the beginning of the relationship between her heroine and hero. The fantasy of the happy-ever-after ending consists of supposing that in some magic way Elizabeth will avoid the pitfalls of her parents’ union as she’s marrying a man who shares her own idea of intimacy. In the 2005 film version of Pride and Prejudice with Keira Knightley as Elizabeth, English actress Rosamund Pike plays her demure elder sister Jane, a spider-woman patiently spinning her web to catch rich bachelor Charles Bingley (he, and not Darcy, is the “single man in possession of a good fortune” and “in want of a wife”). Pike also plays the patrician Amy Dunne in Fincher’s adaptation, which links Austen and Flynn very conveniently for my argument here. Jane and Amy, after all, are not so different in wanting a nice husband, but the intervening 200 years have turned this aspiration into something aberrant.

The pathology of the good marriage is extended in Gone Girl to Amy’s parents, a couple of soul mates, as Amy describes them, who have cannibalized their daughter’s childhood as material for a successful series of children’s books, Amazing Amy (later, Amy calls her mendacious memoirs Amazing). Rand and Maryelizabeth Elliot appear to be truly committed to each other but also quite phony, no doubt because Amy, an only child, hates them for exploiting her economically and emotionally. If, then, the happy, long-lasting marriage is, as Amy claims, damaging for the children because it sets high romantic standards impossible to fulfil, then what should couples aim for? Since Flynn’s own happy private life contrasts so sharply with that of her heroine Amy, or so she claims, perhaps what we are witnessing is not so much the degradation of the marriage ideal, as it exists in actual social practice, but the inability of current (American?) fiction to narrate happiness. Even worse, rather than plain unhappiness women novelists are offering a monstrously false form of happiness, twisted beyond all recognition. On the other side of the Atlantic, incidentally, Flynn’s British peer E.L. James has refashioned happiness as sado-masochism in her Fifty Shades series, perhaps concurring more than we imagine with her American colleague.

Initially, I was going to use Michael Kimmel’s controversial Guyland: The Perilous World Where Boys Become Men (2009) to comment on the current degradation of the romantic discourse. The need for mutual seduction, Kimmel hints, is vanishing, replaced by a predatory view of sex common among American young persons aged 18 to 25. I realized, however, that this is not yet the generation that Flynn is addressing but rather that of the thirtysomethings in need of abandoning guyland (and girlland). Flynn is suggesting in Gone Girl that marriage has become a fiction which both members of the couple embrace when the hedonistic lifestyle of the twentysomethings runs its course and individuals start feeling a vague need to settle down. The biggest gap in her novel is not connected, ultimately, with Amy’s improbable masterminding of her elaborate final trap for Nick but with how and why Amy and Nick fall into the marriage trap of their own volition.

Amy offers quite a good diagnosis of Nick’s self-deception: he falls for the ‘cool girl’ which she so proficiently impersonates. Yet since plain Nick is so easy for Amy to read, I wonder why she targets him as the object of the obsessive marriage plot which she builds for them both. In Flynn’s reading, indeed, marriage is a piece of fiction which we women write throughout our lives with men in secondary roles, and if Amy is exceptional this is because she writes two versions: the one her diary captures, intended paradoxically for public consumption, and the private one she forces on Nick. Since Flynn makes Amy so exceptionally abusive, readers–like myself–who resist reading women’s fiction about violent women may miss her main point: the idea that men are also addicted to the trashy marriage plot that women churn out (Fincher’s film, in contrast, simply places Amy in the long line of dangerous blondes, a classic femme fatale, though married).

A reader called Gone Girl the story of “a jerk and a bitch” and, even though Mary Elizabeth Braddon’s sensation masterpiece Lady Audley’s Secret (1862) is there to remind us that mean, scheming wives were not invented by Flynn, it is still shocking to see that Amy and Nick are presented as an extreme instance of modern love. Try, if you can, to call Pride and Prejudice the story of “a jerk and a bitch” and you will understand at once what I mean by the contemporary degradation of the romantic discourse on love and marriage. Or, to put it the other way round: while I very much dislike Austen’s novel as a dangerous fantasy about men’s willing submission to women through love, I am appalled by Flynn’s nasty little tale for, instead of denying Austen’s daydream, it claims that it works–on the basis of the most atrocious mutual dependence you may imagine. Never mind that this is not love as it should be felt; it is love as countless romantic novels have told us it should feel.

Not the girl but love is gone, burnt out by the cynicism dictating that sentimentalism is tacky. Amy never deceives herself into believing that she and Nick are like Elizabeth and Darcy, yet she builds her own repulsive marriage plot because she does want that fiction to be her life (and Nick’s). This is both cynical and desperate, a decision born of her realization that modern love is hollow at the core and intimacy two-edged, for there are things you might not want to know about your spouse. Is Gone Girl, then, a good novel that will stand the test of time like Pride and Prejudice? Not at all, but it is fascinating as a symptom of the malaise that is rotting the marriage plot in fiction and in life from the inside (no wonder Flynn’s backdrop is the economic decadence of Amy and Nick’s America).

This malaise is not, however, as Flynn might believe, limited to her pathetic couple: her novel partakes of the terrifyingly decadent imagination surfacing all over American fiction (think The Hunger Games). What made me gasp for fresh air when I closed the book was not Amy’s wickedness but the sheer ugliness of Flynn’s fabulation, the hours spent devising the hideous details of how Amy plots her own life. Bret Easton Ellis did the same, and arguably with even more bravado, for contemporary American men 25 years ago in American Psycho (1991). It has taken American fiction, then, a quarter of a century to present us with the female version. To many people this might read as a healthy exercise in female sincerity, even as a feminist step forward. For me, however, the decision to focus on a psychopathic woman as an instance of the psychopathologies of a whole society is a setback, for I still believe that for a society to progress its fiction needs to provide it with role models. The kind of urge that Amy, the anti-role model, satisfies is a luxury that women cannot afford in our patriarchal times–I know the argument is not new, but think of Hillary Clinton and now think of how little she needs Amy Dunne to exist.

I’ll end, then, by bemoaning the way love has been dealt with in fiction: it deserves better than the implausible plotting that both Jane Austen and Gillian Flynn give it for drastically different reasons. I wish both Elizabeth Bennet and Amy Dunne were gone girls but I’m sorry to see that only love is gone.

THE BIOLOGY OF CREATIVITY: A SECOND APPROACH

I published a post back on 26 April in which I quoted from an interview with American neurologist Alice Weaver Flaherty, author of The Midnight Disease (2004), an essay on neurology and literary creativity. I have now read her volume and, although I do not wish to offer a formal review here, I certainly want to consider (or re-consider) a few ideas based on Flaherty’s claims. I do not hesitate to recommend The Midnight Disease, not so much for the soundness of its arguments as for their many flaws, which offer plenty of food for thought.

By the way, Flaherty finds it necessary to justify why she has written a book despite being a scientist, since her colleagues communicate with each other by publishing papers. “A melancholy fact,” she writes, “is that in the sciences, the book has become as marginal a literary form as the sestina or the villanelle”. Torn between her impression that academic books will soon disappear under pressure of online paper publication and her need to narrate “an unusual personal experience” (the sad death of her two premature twin boys), Flaherty tells supercilious scientists that “writing this book was something I could not stop myself from doing”.

Flaherty, who has always written, went through a very serious post-partum depression which manifested itself, among other symptoms, through ‘hypergraphia’, “the medical term for an overpowering desire to write”. This, she explains, is habitually due to alterations in particular brain areas and overlaps only partly with ‘graphomania’, or “the desire to be published”. Hypergraphia, she speculates, seems connected with the temporal lobes, the brain areas in charge of facilitating our understanding of meaning. Many hypergraphic patients appear to have suffered temporal lobe epilepsy.

It is important to clarify that hypergraphic writers are dominated by a mania for writing, by an unstoppable drive to scribble, no matter what the results are in terms of quality, for, remember, this is a pathology. The ‘problem’, as noted in my April post, is that this is a condition for which sufferers demand no treatment, as they derive pleasure from writing. If you’re reading this and thinking ‘oh, well, I am certainly not at risk of being labelled hypergraphic’, you should be aware that many of us readers appear to be hyperlexic. Do you belong to the “subset of avid readers whose reading has an especially compulsive quality”? Do you need a book to prevent you from reading “the newspaper used to wrap the fish”? There you are: you’re hyperlexic–the proud owner of a brain in thrall to an unruly bunch of print-mad neurons. I can see the t-shirt: ‘Hyperlexia rules!’

Flaherty’s sweeping statement that “A surprising proportion of writers are manic-depressive” is open to all kinds of jokes (‘no wonder they’re depressed, seeing the state of the book market’… and so on). Surely, you can see for yourself that a) not ALL writers are manic-depressive (or have temporal lobe epileptic seizures like Dostoevsky), b) not all manic-depressives become published writers and c) if this were the case, creative writing courses should start by plunging their students into deep misery at once. An additional problem that Flaherty simply hints at is whether writer’s block, presented as a mental condition treatable with the right combination of pills and therapy, “may be culturally determined”. The phrase ‘writer’s block’, Flaherty explains, was coined by American psychiatrist Edmund Bergler and, although many writers from other nations suffer from block, “there is a paradoxical sense in which suffering from writer’s block is necessary to be an American writer”. Flaherty names Russian-born, hypergraphic (=absurdly prolific) Isaac Asimov as an interesting exception, but she seems confused by him; her list of writers “contains few genre writers because of the convention that genre writing isn’t quite writing”. It’s just hypergraphia, you know?

Funnily, although I intended to keep the tone of this post as straight-faced as possible, my repressed sneering is surfacing throughout… Perhaps this is because I’m scared that Flaherty is right in her main claim: that the mind has a material basis in the brain; hence, alterations in the brain result in abnormalities in relation to the average mind. Basically, she speculates that the passion for writing and reading might fall within the gray area of brain alterations that, while not pathological, are uncommon and even exceptional (abnormal?). We write and read with glee because, in short, we have funny temporal lobes that connect in a funny way with our limbic system. She may be making a totally valid point: if Usain Bolt’s body is worth studying for what it says about the abilities of record-breaking athletes, then perhaps Toni Morrison’s talent as a writer stems from the subtle chemistry of her brain. As Flaherty writes, “By scanning people thinking creatively (with the usual caveat that judging creativity is difficult), researchers may soon be able to see which patterns of brain activity underlie creativity”.

Flaherty softens the impact of her chilling scientific claims by stressing that “literature can also help us to understand science, the way it is both driven and sometimes misdirected by metaphors and emotion”. No doubt. Her arguments, however, are distressing (I can’t find another word). A point Flaherty stresses is that medication is advanced enough for bereaved people, for instance, not to have to go through the intense pain of their grief: they can simply take the corresponding helpful little pill. She understands why many grieving individuals reject this chemical aid, believing that lessening the intensity of grief amounts to betraying their lost beloved. To be clear about this: Flaherty claims that the more we know about our brain, the better our chances will be to control emotion and mood. Like many others, I resist this idea because taking pills is for me too closely connected with taking illegal substances but, then, most people get by in this way (read Roberto Saviano’s analysis of cocaine consumption in ZeroZeroZero…). Yet, going through a very black mood this week, I caught myself thinking, ‘oh, boy, my temporal lobe is misbehaving, I wish I had a little blue pill’ to go on (happily) marking exams.

How does this connect with literary creativity? Patricia Highsmith once said that writers’ favourite drug is coffee and, of course, there is a long list of literary and non-literary authors controlled by their chosen or unchosen addiction. In Flaherty’s book writers are a bundle of brain and mind irregularities, as you can see, which ultimately raises the question of whether we prefer, as a society, happy individuals or unhappy authors. That’s the only conclusion I can reach after reading her book, since the well-adjusted, happy author seems not to exist in her vision of literary creativity. I wonder whether this is why literary biography always insists on presenting literary genius as practically a pathology (yes, I’ve been reading Claire Tomalin’s biography of bipolar, manic, hypergraphic Dickens). At least this is a pathology we admire.

As I read The Midnight Disease something else bothered me: the future of education. Education works on the principle that all children should start at the same point and be taught a little of everything, regardless of their abilities and preferences. Little by little, each child navigates their way into being an engineer or a star piano player (supply your own worst-case scenario). Primary and secondary education are, thus, a combined effort to teach children a common minimum denominator and to find out which particular abilities each child has. Now imagine a near future in which we will be able to scan the brain of a four-year-old engaged in creative play and determine how his/her brain conditions his/her mind. This imaginary brain scan would have detected, for instance, my hyperlexia (‘wow, this one is a Literature teacher!’) and my limited ability to imagine space (‘no stage designer, this one’). Flaherty never says that she wants to see this implemented. However, her view that our minds are our brains implicitly suggests that we will eventually be classified in this way, just as we will soon be classified according to our genetic make-up. Pass me the happiness pill…

From an extreme, alternative point of view one might argue that education works poorly precisely because we wrongly insist on the egalitarian approach. A timely brain scan would save the little ones many painful hours of mathematics or of English soon to be forgotten–which sounds tempting–and place the children with the most promising creative abilities on the fast track to… what exactly? We are already hearing so much cant about the so-called ‘exceptionally gifted’ children that I shudder at what the further exploration of the human brain can do to human minds.

Clearly, neurology can help us to overcome the accidents of life caused by malfunctioning brains (and it is impressive to learn the myriad odd ways in which brains malfunction). Nonetheless, it may be overstepping its boundaries –like all medicine today, with its suspiciously endless pressure to connect good health with joining expensive gyms when you’re young and with taking absurd amounts of prescription drugs as you age. There is, however, a fundamental difference between, say, correcting the ravages of diabetes and forcing literary creativity into a sort of medical freak show.

There are also other dangers: if my students learn that I’m hyperlexic (am I?… show me that brain scan), then they may reject my preaching in favour of non-stop reading on the grounds that they’re not hyperlexic themselves. Or, as the trend now seems to be, they may claim that their massive use of social networks, the internet and videogames has re-wired their brains in ways that my 1960s hyperlexic brain is not equipped to understand.

Pass me the little blue pill…

Comments are very welcome! (Thanks!) Just remember that I check them for spam; it might take a few days for yours to be available. Follow the blog updates on Twitter: @SaraMartinUAB. You may download the yearly volumes from http://ddd.uab.cat/record/116328. See my publications and activities on my personal web http://gent.uab.cat/saramartinalegre/

WORKING, STUDYING AND THE EVER RISING FEES: SOME UGLY THOUGHTS

[Just one sentence to say that while the activities I have been engaged in this week –exams (both oral and written), yearly doctoral interviews, last-minute BA dissertation revisions– are absolutely necessary, I hate how they use up the energy needed to write. With no writing (and I realize this is another sentence) it feels as if there is no point to a week, no matter how exhausting it has been… or how useful.]

Today I’m combining two items which have been waiting for attention for a while. One is an article from La Vanguardia and the other a report by the union Comisiones Obreras. Let me interrupt myself here to comment that one thing I learned while interviewing students for their oral exams this week is that students don’t read newspapers (which I knew) but just use Twitter to check on the day’s trending topics (which I guessed but didn’t know for sure). This means that, among thousands of other relevant items of information, they may have missed the two I’ll comment on. One, by the way, I found while browsing the papers as I do at lunch break (I no longer read print papers… that’s for retired people, as a student said); the other reached me via email, a medium that students also find obsolete and that, I’m sure, they only use with us, ageing teachers.

La Vanguardia sums up the main findings of Fundació Bofill’s 125-page report Via Universitària: Ser estudiant universitari avui, by Antonio Ariño Villarroya and Elena Sintes Pascual (http://www.fbofill.cat/sites/default/files/ViaUniversitaria_InformesBreus62_100516.pdf). The report is based on a survey of 20,512 students in the 19 universities of the Catalan-speaking regions of Spain that form the Vives network. I confess I have not read the report itself and refer only to the summary.

No surprises here: families are the main contributors to the cost of educating their children, which, logically, puts children from impoverished social backgrounds at a serious disadvantage in relation to their better-off peers. Nothing new, then, except that a matter such as taking a year abroad within the Erasmus programme is now practically compulsory, with no regard for how this widens the gap between middle-class students and their poorer peers (the grants are a joke…). The report claims that 30% of students finance their studies by working, part or full time; only 0.7% of the students surveyed have fallen into the trap which student loans are turning out to be. 13% enjoy some kind of grant; they are included within the 41% of students who study full time (um, the figures do not add up, do they?). More interesting findings: mothers are crucial; it seems that the more educated a mother is, the more she invests in the education of her children (most of these mothers were themselves the first in their families to attend a Spanish university). The report is clear: the largest group of students (above 40%) have an upper-class or upper-middle-class background and college-educated parents, yet many outside this group are upwardly mobile, coming from families with no college-educated members. I have never heard, however, of middle- and upper-class children taking up professional training in a blue-collar trade, though there must be some measure of downward social mobility even when both parents are college-educated and/or wealthy.

The Bofill report claims that combining work and study need not affect the student’s marks, though it does affect class attendance. No student, they claim, devotes more than 20 hours a week to study anyway… though I don’t know whether that is apart from attending classes. This is, excuse me, total bullshit. Throughout my own university years I went from being a full-time student (with my fees funded by the Government on the basis of my marks) to being a full-time worker, as my life was complicated by my father’s total lack of interest in my university education and his constant pressure on me to work full time. I left home too early, married unwisely and found myself obliged to do whatever it took to study –which, of course, meant working full time, as my father wanted. Not a common story, perhaps, but replace ‘married unwisely’ with ‘started sharing a flat’ and the whole situation is not that odd. This means that in my last year I did what I could to attend classes and suffered greatly from missing them. It’s true that in my first two years, when I only worked a few hours a week as a private tutor to earn some very necessary pocket money, I had plenty of time to spare. Yet I put it to good use: reading, visiting exhibitions, learning all I could beyond my courses. In my last year, I simply hated my life, as I didn’t know whether I was a worker or a student. Would I have got better grades otherwise? Not necessarily. I recall that time, however, as a horrid, stressful period of my life. A student should be a student, period, and that means full time. A paid job is fine as a complement, but when it starts draining away the energy needed for study it becomes a serious obstacle, not an aid.

The Comisiones Obreras report shows what families and students in Spain face regarding the cost of study. It traces the evolution of university fees between 2011 and 2016 (see http://www.fe.ccoo.es/comunes/recursos/25/2227033-Estudio_de_precios_publicos_universitarios.pdf). No surprises here, either, though it’s frightening to see the actual figures. The report shows, to begin with, that Spain is among the very few countries in Europe to have responded to the 2008 crisis (which coincided with the implementation of the new BA and MA system in 2009) by steeply raising university fees. It’s funny to see that the United Kingdom is neatly split between England/Wales/Northern Ireland, which decided to go as far down this road as possible, with fees of up to 9,000 pounds, and Scotland, where a university education costs the student very little. The report offers the figure of 6,460 euros as the average cost of the current 4+1 university education system in Spain, which is certainly nothing in comparison to the 54,728 euros the same education costs in England; still, Spain has 4,000,000 unemployed people and one would think that state-funded free education should be the way out of that situation. The report reaches exactly that conclusion.

It is funny to see how different the tone is in the Bofill and the CCOO reports: the former describes the situation from a certain scientific distance (the comment on upward social mobility even discloses a certain optimism), whereas the latter is clearly biased towards implementing better social policies regarding access to education. As usual, the most advanced European countries in this regard are the four Nordic ones –Sweden, Norway, Finland, Denmark– precisely the ones distinguished by a very different approach to social equality. Scotland is another interesting case, particularly for Catalonia, for its independentist aspirations have led to the realization that it must invest in the development of its human capital (though the Scots have a serious problem in that their best-educated citizens tend to migrate elsewhere).

Spain, in short, is just a disaster: we are keeping talented people away from the university by not giving enough grants, and forcing the few who manage nonetheless to prove their brilliance to migrate, thus doing rich nations like Germany and the United States the favour of benefitting from our scant public money. And what can I say about Catalonia? The price per credit in 2011 was already the highest in Spain (at 20.11 euros) and it’s now 33.52; the average 60-credit yearly fee used to be 1,206 euros in 2011-12 but is now 2,011. The second most expensive average yearly fee, that of the community of Madrid, is just 1,638. The lowest is 713 (in Galicia). I won’t even mention the fees for MA degrees, which have no justification at all, as the same staff teach them with no extra pay added to our salaries.
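If you want to check the arithmetic behind those figures (I am using only the numbers above, nothing else): 20.11 euros × 60 credits = 1,206.60 euros for 2011-12, and 33.52 × 60 = 2,011.20 euros today –that is, a rise of roughly 67% in just five years.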

It seems clear, then, that the 2008 crisis (still ongoing in Spain, as Brussels knows and the Government wilfully denies) must have expelled many thousands from the Spanish university: those who suffered some personal calamity, like their parents or themselves losing their jobs, and those who could never afford the ever-increasing university fees. The crisis, in case I have not insisted on this sufficiently here, has also done away with the full-time teaching jobs that allowed PhD candidates to complete their dissertations. And, yes, we all know that things are worse in Catalonia, for obscure political reasons –Spanish or Catalan in origin– that are never made evident enough.

There are days when nothing makes sense. If the idea is to go back to the smaller, middle-class Spanish university of the 1970s, before Felipe González’s Government opened up the classroom to us, working-class children, I wish they would tell us. The same applies to the even scarier impression that perhaps the plan is to shut down the public university for good. What cannot be sustained is this constant anxiety that we’re not wanted: the students, the teachers, the research, the whole university. Why all this ill-treatment? How are we offending society?

Perhaps, just perhaps, what is feared after all is the downward mobility I mentioned, for if the university is made accessible to the best students, no matter what class they come from, the room at the top reserved for the upper classes will necessarily shrink. After all, there are no good jobs for everyone with a university education, as we know, so why not make sure these jobs are not available to working-class persons, beginning by making sure they never get the required university education?

Just an ugly thought, for who would jeopardise the future of a whole nation in this way, right?

Comments are very welcome! (Thanks!) Just remember that I check them for spam; it might take a few days for yours to be available. Follow the blog updates on Twitter: @SaraMartinUAB. You may download the yearly volumes from http://ddd.uab.cat/record/116328. See my publications and activities on my personal web http://gent.uab.cat/saramartinalegre/