June 9th, 2019

In my post of 6 May on the question of the post-human in relation to Frankenstein, I announced that my ranting would eventually continue, so here we go.

Mónica Calvo and Sonia Baelo, members of the research project “Trauma, Culture and Posthumanity: the Definition of Being in Contemporary North-American Fiction” at the Universidad de Zaragoza, were the organizers of the recent conference “Representations in the Time of the Post-human: Transhuman Enhancement in 21st Century Storytelling”, which I attended (and enjoyed enormously!). You might want to download the programme, and the truly cute poster.

The three days spent there thinking about post-humanism have convinced me that we have the very bad habit in scholarship of accepting labels first and discussing what they mean later. This leads to considerable confusion. Post-human is used in such a wide-ranging sense that in a recent article I reviewed, the author called the dinosaurs in Jurassic Park post-human monsters (actually, following a secondary source). The funny thing is that though I rejected this denomination as plainly wrong, depending on how you use post-human it is correct – and also clear proof that we need more specific labels.

Every discussion of post-humanism, then, begins with a lengthy list of secondary sources that give different meanings to the label, until the author offers his/her own. If the author tries to offer alternatives or be more specific in any way, this is done in vain, for the curious thing is that the label is there for good, no matter how blurry it is. We have clearly not learned the lesson from the endless waste of time and energy that discussions around the word post-modernism (postmodernism?) have generated, and here we are again, stuck once more with a problematic but absolutely central notion. Even the Wikipedia page is no use!

I don’t intend, then, to trace a genealogy of post-humanism but to explain where I think the problems lie in its definition, for those who care. I am possibly totally wrong, but this goes in favour of my argument that the label is confusing. And I might also repeat some of the ideas in the post of May 6, but, then, I have my own (human) limitations…

To begin with, then, post-human is used in two very different ways that, while interconnected, refer to two different aspects of humankind.

1) What I’ll call biological post-humanism explores the possible replacement of Homo Sapiens by another Homo species emerging from
a) natural evolution
b) applying cutting-edge technoscience to evolution (a crazy, dangerous position defended by transhumanism)
c) the merger of the flesh with A.I. (as technogeek defenders of the so-called singularity dream of).

In a fourth scenario, d), Homo Sapiens disappears and instead a new species takes our dominant position, whether this is an animal (Planet of the Apes), an A.I. (the Terminator series), or an alien (name your favourite invasion story). A possibility less often considered is the scenario in which Homo Sapiens evolves into another Homo species with genetic elements from animals or aliens (but do consider Octavia Butler’s trilogy Lilith’s Brood). And, of course, in 2001 Stanley Kubrick and Arthur C. Clarke imagined that a mysterious alien presence (remember the monolith?) had jump-started our transition from Australopithecus into the genus Homo and would repeat the feat again in the future, turning us into something yet unknown. I offered, by the way, the label post-natural for all of this in Zaragoza, but I was told that ‘nobody uses it’ and that was it!

2) Philosophical, or critical, post-humanism can be subdivided, I think, into two branches (though, again, I must warn that they tend to be mixed anyway):
a) the branch that wishes to rethink classical humanism in relation to what it means to be human in ethical or moral terms
b) the branch that shares a similar concern but also worries about how (biological) post-humanism will alter our bodies and minds, and therefore what it means to be human.

Critical post-humanism began as an intellectual project to question the way in which privileged Renaissance men had used prejudiced, limiting values for the construction of humanism. The patriarchal white man should be rejected as the source for the definition of what it is to be human, since his experience excluded basically the majority of humankind. Those so far excluded, therefore, felt called to offer a new, far more comprehensive way of understanding the human and humanism.

The problem, in my humble view, is that this meant throwing the baby out with the bathwater. Since the white patriarchs had appropriated the word human for their own interests, the alternative label chosen was post-human – an unfortunate choice, since it places the critical majority on the wrong side of human. Post-humanism was intended to define the opposition against biased classical humanism, but it has ended up making that type of humanism central, and the alternative peripheral (because of the injudicious use of the prefix post-). Besides, I personally feel aggrieved as a woman to be called a post-humanist because of my critical anti-patriarchal thinking when, last time I looked, it seemed to me I’m Homo Sapiens (well, I haven’t checked how much Homo Neanderthalensis DNA is in my genes!). I reclaim, then, the right to call myself a humanist, not post-anything but the real thing, though with different values. Neo-humanist would have been cooler (particularly because every time I read Neo, I think of Keanu Reeves in The Matrix…).

On the other hand, my impression is that there are many difficulties in connecting philosophical post-humanism (on the essence of the human) with thinking on biological post-humanism. Problem number one is the fact that those of us in the Humanities know too little science to make informed contributions to the debate – I’m really serious about this, though I do not mean that only scientists are entitled to offer reflections on what makes us human. No, what I mean is what I wrote in my post of May 6: Homo Sapiens is just ONE type of human, not all that is human, which means that we should brush up our paleontology, biology, genomics, etc. Typically, at the Zaragoza conference I got entangled in a loud debate with another colleague, who claimed that ‘the system’ and those who oppress us are not ‘human’. Having spent the last fifteen months of my life considering villainy, I can tell you that of course they are! Patriarchal villainy is as human as the compulsion to do good, and we will never progress unless we overcome that hurdle. In fact, I think we would do much better if we focused on ‘humane’ instead of ‘human’ to explain how some persons feel inclined to abuse their power and others to oppose this inclination.

Since 1985, when Donna Haraway published her ‘Manifesto for Cyborgs’, critical post-humanism has evolved into a more science-conscious intellectualism, but it is still limited by a) the scant awareness of technoscientific issues which I have already mentioned, and b) the reluctance to acknowledge science fiction as a major aspect of speculative reflection on Homo Sapiens as a species. I know next to nothing about science, but what little I know comes from first reading SF novels, and then reading essays to check whether their speculations make sense. Whenever I explain to an audience of even less informed readers where the world is heading, there is usually much surprise and much incredulity. What I feel is quite different: there are days when I wonder how we can live with the knowledge that our place in the universe is absolutely insignificant, as science is showing. The dire warnings about climate change may be altering this general neglect of science but, even so, look at how the deniers insist that Homo Sapiens is in control and the Earth safe (we are not, and it is not).

If you have been following my rant, then, you will see that I’m trying to make sense of post-human and post-humanism by telling myself that:
a) (biological) post-humanism considers what might happen when/if the species Homo Sapiens ends, in natural or unnatural ways
b) (philosophical) critical post-humanism is focused on what makes us humane (even though the label preferred is human)
In my view, then, any consideration of our subjectivity requires remembering that 1) as Homo Sapiens, we are just an animal species, and we possibly did all we could to wipe out the other human species, as we’re now doing to animals; 2) Homo Sapiens individuals are all human, though many of us are not humane; 3) we matter very little in the amazingly gigantic universe and nobody out there cares for us; 4) since we’re doing an awful job of destroying Earth, it would be totally fine if we were wiped out (I’m in favour of plants conquering the planet!); 5) transhumanism (=the use of technoscience to transcend the limitations of Homo Sapiens, including death) is classic patriarchal selfish wickedness; and 6) please, can we stop using the prefix post- for everything? I fear the day when I will be called a post-person!

Incidentally, the dinosaurs in Jurassic Park are said to be post-human because their rebirth from fossil DNA disrupts the species’ balance on Earth and announces (at least in Michael Crichton’s original novel) the end of Homo Sapiens’ dominion. In that scenario, we become either extinct –as dinosaurs are– or creatures cowering before the power of mighty predators –as we once were. The new dinosaurs are what comes after humanity is pushed off the top rung of the animal ladder, hence it makes sense, more or less, to call them post-human. I rejected the terminology because, though they are a product of Homo Sapiens’ science, the dinosaurs are not genetically connected with us at all, and I limit my use of post-human to that sense.

The thought that sends chills down my spine is that, from the point of view of all the other human species that have died out, we, Homo Sapiens, are the real, most feared post-humans. Yet here we are, hypocritically expressing our fears that our species might die and be eventually replaced. Poor things! If you ask me, we’re just a bunch of selfish, arrogant bastards and bitches who deserve never to see how happy and relieved Earth will be in its post-Homo Sapiens future… Towards the end of Jurassic Park, mathematician Ian Malcolm notes that whereas for a human being one hundred years is the limit of life, the Earth counts its life in millions of years: ‘We can’t imagine its slow and powerful rhythms, and we haven’t got the humility to try. We have been residents here for the blink of an eye. If we are gone tomorrow, the Earth will not miss us’ (my italics). Wise words, though I hope Dr. Malcolm is also right in his perhaps naïve belief that we don’t have ‘the power to destroy the planet’, for surely the Earth deserves the chance of a post-Homo Sapiens life. Call it post-human, if you prefer, though there might be nobody around to remember us, or to care that we once existed.

I publish a post once a week (follow @SaraMartinUAB). Comments are very welcome! Download the yearly volumes from: My web:


May 26th, 2019

The teachers and researchers of all Catalan universities have been called to strike on Tuesday 28 in protest against the appalling conditions under which the non-permanent staff work. The article by the branch of the workers’ union CGT which operates in my own university, UAB, explains that Royal Decree 103/2019, on the rights of trainee researchers (Estatuto del Personal Investigador en Formación, EPIF), is insufficient and, anyway, is not being applied, which puts UAB on the side of illegality. The call to strike concerns both part-time associates and full-time doctoral and post-doctoral researchers who enjoy fellowships and grants, and, most importantly, the lack of tenured positions they might occupy one day.

A friend told me recently that one of the main weaknesses of the academic sector is that we show no solidarity with each other, which is why our protests always fail. This makes me feel quite bad about my decision not to join the strike but, then, it is my habit to systematically reject all calls of that nature. I am a civil servant offering a public service and I don’t see why my students should be negatively affected by my refusal to work, no matter how justified the cause. Actually, I believe that strikes have lost their edge in the education sector, as there are so many every year that Governments (local, national) just do not pay any attention to the protesters. Other forms of activism are needed, and, so, this is what I am doing today: informing my students, and anyone interested, about what is going on.

I have described the situation many times in this blog, and what follows may sound repetitive, but this is one of the problems: nothing has changed since September 2010, when I started writing here, and certainly for some years before. To recap a very old story: until 2002, when I got tenure, you just needed to be a doctor in order to apply for a permanent position. Obtaining it depended, logically, on the quality of your CV, and competition was anyway harsh, but on average you could get a permanent job around the age of 36 (it used to be 30, or slightly below, in the early 1990s). Next came the ‘habilitaciones’, an evil system which meant that candidates for positions had to first demonstrate their qualifications to a tribunal which could be sitting hundreds of miles away from home. This was expensive, tedious, and anxiety-inducing both for the members of the tribunals (who had to interrupt their lives, often for months, regardless of their family situation) and, evidently, for the harassed candidates (who often had to try several times in different cities). Once you obtained your ‘habilitación’, you had to apply for tenure in a specific university and compete with other qualified candidates. ANECA, technically a private foundation attached to the Government, created in 2001, was given in 2007 the crucial function of organizing a new accreditation system to replace the nomadic ‘habilitaciones’, centralized in Madrid but mostly run online. Under this new system, imitated as we know by local agencies such as the Catalan AQU, candidates must fill in a complex, time-consuming online application before being certified apt by the corresponding commission. Then you can apply for a university position. If you find any.

The perfect storm that risks demolishing the Spanish public university has been caused by the confluence of two incompatible circumstances: ANECA’s demands on candidates have been increasing–in principle to secure that better research is done and better teaching offered–whereas the 2008 economic crisis (about to be repeated) has destroyed all the junior full-time positions that trainee researchers used to occupy. Very optimistically, ANECA (and the other agencies) assume that applicants have produced their PhD dissertations while being the recipients of a grant, and that they have next found post-doctoral grants, etc. In fact, most junior researchers are part-time associate teachers, which is incongruous because associates are, by definition, professionals who contribute their expertise to the universities for a few hours a week, not academics aspiring to tenure. Because of all this, the Spanish public university suffers from a most dangerous split between the older, tenured teachers (average age 53, a third or more inactive in research) and the younger, non-permanent staff who should one day replace us, if they survive their frantic daily schedules. In fact, the 2008 crisis and the associate contracts have destroyed the chances of a whole generation (now in their forties and even fifties) to access tenured positions. And I am by no means as optimistic as ANECA, which appears to believe that all those currently beginning their PhDs will eventually be tenured.

Around 2008, we were told as a collective that Spain was not doing well in research and that we needed to raise the bar, hence the increasing demands of the accreditation system and of the assessment system (I refer here to the ‘sexenios’ that examine our academic production). The rationale behind this is that applying measuring systems borrowed from first-rank foreign academic environments would increase our productivity and the quality of our research and teaching. Three problems, however, have emerged.

Here comes number one. Whereas in the past having a PhD was enough (being a ‘doctor’ means that you are ready to offer innovative teaching and research), now this is just the beginning of a long post-doctoral period that has delayed tenure to the age of 40, if you’re lucky, with the added requirement of total geographical mobility within Spain. This means that private life is totally subordinated to the needs of academia, a situation which punishes women severely, since the decade between 30 and 40 is when we have babies. Since, besides, men tend to leave women the moment they choose to move elsewhere for their careers, this means that few women scholars can succeed in the terms that are most highly praised, namely, by becoming internationally known scholars. My personal impression is that the persons earning tenure at 40, or later, in the current system could also have earned it at 30 under the older system. And, obviously, we run a major risk: faced with this prospect of a long professional post-MA training, of 17 years…, most budding scholars will simply give up. Especially the young women, right now the majority in the Humanities.

Problem number two: without young full-time staff we, the seniors, are collapsing too. Here’s how I feel this week: seriously depressed. Why? Well, because after almost 28 years as a teacher/researcher I have a very clear perception that I will leave nothing behind. Since we have no full-time colleagues to train, and to replace us, but a succession of part-time associates, when we retire our research area will retire with us. Overall, besides, I feel very much isolated. I work mostly alone, either at home or in my university office, and I never meet my colleagues for a relaxed chat. Formal meetings are increasingly hard to organize because they conflict with the overworked associates’ hectic schedules. Informal meetings do not happen because we are too busy working for the glory of our CVs and have no time to spare. And, anyway, when we speak our topic is invariably the pathetic state of the university. I just wonder where intellectual life is happening, if it is happening anywhere. I feel, besides, frustrated that new projects to do something exciting never get started or are always provisional. Our book club is run by an associate who might be gone any day. When an enthusiastic associate and I visited the head of audio-visual services at UAB last week, to ask for advice about the project of opening a YouTube channel for the Department, the first question he asked was whether it would have permanent staff in charge. Too often, he said, new projects are started by keen associates only to be abandoned as soon as their contracts expire. My colleague replied that hers would last at least for… four years.

The third problem is that we are following foreign models of research and teaching assessment already imploding elsewhere. You may read, for instance, Anna Fazackerley’s article of 21 May, “‘It’s cut-throat’: half of UK academics stressed and 40% thinking of leaving”. In the British system there is technically no tenure: teachers do not become civil servants but are hired for life (as in the Generalitat-run Catalan system). This is why so many are thinking of quitting. In our case, we, the tenured teachers, develop a sort of bad-marriage relationship with our jobs: I realized recently that I am constantly protecting myself from my academic career, as if it were an abusive partner. In Britain there is an additional misery to deal with: academics are made responsible for recruiting the many students needed to guarantee the financial stability of their institutions. Aware that they are coveted clients, students have learned to disrespect their teachers even more than we are disrespected here (as supposedly lazy, privileged ‘funcionarios’… which some indeed are).

Fazackerley’s piece is actually based on a report about the wellbeing of British academics which, as you may imagine, reaches worrying conclusions. Reading it, I even wondered whether we have a right to our wellbeing as tenured teachers, in view of the ill-treatment that associate teachers and post-docs are victims of. Of course, this is one of the most devious tools of the system: making you feel bad about tenure you have earned with great effort. Anyway, the report notes that “Wellbeing is maximised when people feel valued, well-managed, have good workplace collegiality and can act with agency and autonomy”. However, our wellbeing is being eroded by, they say, “management approaches that prioritised accountability measures and executive tasks over teaching, learning and research tasks”, though I should say the case of Spain is different. Here there is, simply, an obsession with publishing based on scientific principles that just fails to understand what we do in the Humanities (and I mean ‘should do’, namely, think slowly). The British report concludes that “In general, respondents did not feel empowered to make a difference to the way that Higher Education institutions deal with wellbeing issues and this generated some cynicism”. That’s right: one day you feel depressed, the next one cynical, and so on. Even angry, which, unfortunately, may affect classroom mood and lead to burnout.

I have already mentioned the sense of isolation (what the report calls ‘lack of collegiality’). The Guardian article highlights, as well, the stress caused by the frequent rejection of work for publication (which begins now at PhD level), the pressure caused by deadlines, the impossibility of excelling at the three branches of our jobs (teaching, research, admin tasks), and two more factors I’d like to consider a bit more deeply. One is that the rules change all the time and the top bar keeps moving. The other is how you are judged by what you have not done, despite having done a lot.

We are being told by the agencies which judge us that our planning should be improved, that is to say, that we should focus on publishing in A-list journals and not waste time on other academic activities. I acknowledge that I don’t know how to do that: I get many rejections from the top journals, I am invited to contribute to books that I love but that are worth nothing to the agencies, and so on. And the other way around: projects I have committed to, thinking they would bring nothing worth adding to my CV, have led to the best work I have done so far. Anyway, since the rules about what is a merit and what a demerit are changing all the time, you cannot really plan your career. You may choose, for instance, to be Head of Department for four years, and diminish the pace of your research at the risk of failing your ‘sexenio’ assessment, only to find later on that admin work does not really count towards qualifying as full professor. I constantly suffer, in addition, from impostor syndrome because I have chosen to be very productive in some lines of my work but not to invest time in others that the official agencies prefer. I certainly feel that my rather long, full CV is simply not good enough even though I have done my best. And I intend to go on doing so until I retire.

Will this situation implode? I think it might, and soon enough. So far, we have been relying on a constant supply of young, eager volunteers willing to accept whatever poor conditions the university offers, for the sake of the glamour attached to presenting yourself as a higher education employee. If, however, that glamour, which was never real, goes on being eroded, young people will find something else to do. At this point, I do not recommend that anyone begin an academic career. If you’re talented enough, train yourself up to PhD level, and then find alternatives to disseminate knowledge through self-employment (I would say online audio-visual work).

In view of the situation in Britain, we might conclude that things are about to reach a tipping point all over the Western world, for something has to give. Naturally, the solution for Spain is more money, a return to full-time contracts at non-tenured level, a simplified accreditation process, and more tenured positions offered around age 35 at the latest. Unless there is, as many suspect, a plan afoot to destroy the public university and, with it, the social mobility it has afforded to some working-class individuals (not that many). What is going on cannot be, however, that clever; it is possibly just the product of political short-sightedness, compounded with–yes, my friend–our inability to present a common front before society as a collective, and defend our lives from this constant stress.

And on this bitter note, here finishes my contribution to the strike.



May 20th, 2019

It is just impossible not to refer today to the controversial finale of HBO’s series Game of Thrones, which surely has put 19 May 2019 in the history books about fiction for ever. While the internet rages, divided into lovers and haters of the ill-conceived eighth season (more than 1,100,000 people have already signed the petition to have it thoroughly re-written and re-shot), it is no doubt a good moment to consider whether chivalric romance has won the fight with mimetic fiction that Cervantes immortalised in Don Quijote (1605, 1615).

I must clarify that I am by no means a fan of Game of Thrones. I watched the first two seasons, and read the first two novels, and that was more than enough for me. I have been following the plot summaries, however (I must recommend those by El Mundo Today), for I felt an inescapable obligation to know what was going on. Pared down to its bare bones, then, the series has narrated the extremely violent struggle for the possession of power in the context of pseudo-medieval, feudal fantasy–hardly a theme that appeals to me, given its overt patriarchal ideology. Women have participated in that struggle, as they did in the real Middle Ages (and later), only from positions left empty by dead men, and not as persons with the same rights. Since in the eight years the series has lasted the debate about women’s feminist empowerment has grown spectacularly, this has created enormous confusion about the female characters in Game of Thrones. I’ll say it once more: the degree of respect and equality for women should NOT be measured by their representation in fiction written by MEN but by women’s participation in audio-visual media as creators. In Game of Thrones this has been awfully low.

[SPOILERS IN THIS PARAGRAPH] I’ll add that I am very sorry for those who named their daughters Khaleesi or Daenerys–you should always wait for the end of a series before making that type of serious decision! Perhaps it is now time to think about why so many women have endorsed a story that has ultimately justified the murder of its most powerful female character by a man who supposedly loves her, and who is then allowed (by other men) to walk free, despite this feminicide. And the other way around: we need to ponder why this brutal woman, a downright villain no matter how victimised she once was, has been celebrated as a positive hero. Just because she is young and pretty? All Daenerys ever wanted was power for herself, to sit on the throne and play crowned dictator, not to change the lives of others for good. This is the reason why she needs to be called a villain. In short: patriarchy has scored a victory with GoT: we are hungry for female heroes, and they have given us a villainess (or two, if we count Cersei, of course). Sansa and Arya (and Brienne) are just what they have always been: consolatory nonsense, as the late Angela Carter would say. Next time around, please, all of you women and men who hate patriarchy, reject its products.

Now, back to my topic: leaving gender issues aside (supposing we can), has chivalric romance won over mimetic fiction with GoT? Was the battle skewed since its inception? Did Cervantes really intend us to follow Alonso Quijano in his madness, induced by reading so much high fantasy? Or is the collective passion for GoT the kind of insanity Cervantes warned us against? I don’t have room here to explore this in much detail, but since I have a class to teach tomorrow about Pride and Prejudice, I do want to trace briefly the frontlines in the battlefield to see how they stand. Austen once wrote her own Cervantine anti-fantasy novel, Northanger Abbey, a frontal attack against gothic, published posthumously in 1818. If she were alive today, she would possibly be groaning and sharpening her computer keyboard to pen an onslaught against fantasy with dragons…

The thesis I am going to defend is that we are at a crossroads: mimetic fiction as practiced by Jane Austen and company cannot fight the primary impulse that favours fantasy; yet, fantasy seems unable to renew itself and satisfy the demands of its consumers (above all, of women seeking post-sexist stories). Both mimetic fiction and fantasy fiction, I maintain, are reaching an impasse. The popularity of television series is contributing to that impasse by eroding the novel in favour of the audio-visual and by maintaining an anachronistic writing system that, as we have seen, can no longer ignore the voice of the (angry) spectator.

Histories of literature usually present realistic/mimetic fiction as the centre of the Literature worth reading, leaving fantasy at the margins. Academia, however, has been partly colonized since the 1980s by scholars with very different values, quite capable, besides, of reading both mimetic and fantastic fiction (here I mean the three modes: fantasy, gothic, and sf). This has been changing the perception of how fiction works, with non-mimetic fiction gaining more ground but with the main line still attributed to realist fiction. My point is that, in fact, GoT certifies that we have been narrating a very biased version of literary history: mimetic fiction has not only been unable to stem the tide of fantasy but has also given fantasy some key elements–the melodrama of the 18th century novel of sensibility, the historical fiction of the Romantic period, and, with the mighty Victorian novel, the verisimilitude that the old romances lacked. When J.R.R. Tolkien changed fantasy for ever with The Lord of the Rings (1954-55), all those elements solidified.

So let me trace the genealogy, briefly. Chivalric romances, written in a variety of European languages, started as epic tales in verse and became prose narrative by the early 13th century. I don’t know enough Spanish Literature to understand why Cervantes focused in the early 17th century on the dangers of reading a genre that had been around for centuries. Amadís de Gaula by Garci Rodríguez de Montalvo is supposed to have been written in 1304, though it became really popular after the introduction of printing (c. 1440s). Le Morte d’Arthur (1485, Thomas Malory) and Tirant lo Blanc (1490, Joanot Martorell, Martí Joan de Galba) are closer in time to the Quijote but, even so, Don Quijote is driven mad by very old-fashioned texts, if I understand this correctly.

El Ingenioso Hidalgo Don Quijote de la Mancha (1605, 1615) came too early to have an immediate impact, for the novel, so to speak, was not yet ready to be born. Thomas Shelton was the first to translate the two volumes into English (this was the first translation ever), in 1612 and 1620, but it was not until the 18th century that Cervantes could truly impact the realist novel. Tobias Smollett, who translated El Quijote in 1755, is usually included in the list of British authors of the sentimental novel (or novel of sensibility), but he seems to have picked up from Cervantes a major distrust of any fiction aimed at eliciting excitement rather than intellectual pleasure. Henry Fielding, who mercilessly mocked Samuel Richardson’s quintessential sentimental novel Pamela, or Virtue Rewarded (1740) with Shamela (1741), took up Cervantes’s mantle to propose a style of narrating full of authorial irony, which Jane Austen eventually inherited. The History of Tom Jones, a Foundling (1749) remains Fielding’s masterpiece.

Jane Austen’s own mimetic fiction can be said to be a belated type of sentimental fiction and, at the same time, an example of double resistance to this sub-genre and to gothic. Austen cannot have enjoyed the excesses of Richardson’s tale of rape Clarissa: Or the History of a Young Lady (1748) nor the silliness of Henry Mackenzie’s The Man of Feeling (1771) but I do see her having a good laugh at Charlotte Lennox’s The Female Quixote; or, The Adventures of Arabella (1752) and Oliver Goldsmith’s The Vicar of Wakefield (1766), and, of course, admiring Fanny Burney’s Evelina (1778) or Maria Edgeworth’s Castle Rackrent (1800). Austen, plainly, did not enjoy what most of her contemporary readers preferred: not only sentimental fiction but, mostly, gothic, from Horace Walpole’s pioneering The Castle of Otranto (1764) to Mary Shelley’s Frankenstein (1818), passing through Ann Radcliffe’s best-selling The Mysteries of Udolpho (1794) and Matthew Lewis’s frankly scandalous The Monk (1796). I’m 100% sure that George R.R. Martin has read, and heavily underlined, Lewis’s novel.

Gothic brought the Middle Ages back to fiction as the backdrop for countless horrific thrillers about innocent heroines chased by appalling villains. By the time the genre had been around for about fifty years, Walter Scott (1771-1832) expunged the fantasy elements to turn the past into the stuff of the new historical novels. The Waverley Novels (1814-1832), with hits such as Ivanhoe (1820), prepared the ground for the grafting of the old chivalric romance, purged of the less palatable elements that so worried Cervantes, onto the fictional model of the historical novel. William Morris laid the foundation for what was later known as high fantasy, heroic fantasy or sword and sorcery with his prose narratives A Tale of the House of the Wolfings and All the Kindreds of the Mark (1889), The Wood Beyond the World (1894) and The Well at the World’s End (1896). Morris’s translation, in partnership with Eiríkr Magnússon, of the Story of the Volsungs and Niblungs (1870) and these novels were a direct inspiration for Tolkien.

The Lord of the Rings is called a novel, not a romance, and this is what it is. H.G. Wells must have been among the last novelists to call his fantasy fiction ‘romance’ (a word we now use, confusingly, for romantic fiction similar to Austen’s). I might be completely wrong but, as I understand the matter, whereas in the old type of romance which Alonso Quijano enjoyed reading most elements were highly improbable, the new kind of romance (from Morris and Wells onwards) has learned the lesson of verisimilitude from the novel. Its plot is still impossible but, once we suspend our disbelief, each scene seems plausible, that is to say, the characters interact realistically, as they would do in a mimetic novel. This is how the battle against mimetic fiction is being won: if you can have similar complex characterisation, a naturalistic type of dialogue, and a thrilling setting, why not choose fantasy over fiction set in the too well-known realm of realistic representation?

The post-Tolkien realism of fantasy (call it the neo-romance), however, is also its bane. You may include as many dragons as you please, and give some of your characters magical powers, but it is simply impossible to write first-class fantasy (or gothic, or science fiction) which is not rooted in the real world. I do not mean by this that the best fantasy is necessarily allegorical: what I mean is that since characters in current fantasy must act realistically, they are shaped by expectations very similar to those shaping characters in mimetic fiction. If you had Harry Potter fight corporate villainy instead of a dark wizard, with no magical elements, the tale would be more boring but, basically, the same story (it would be closer to John le Carré’s The Constant Gardener). And the other way around: just because Daenerys has a special bond with her dragons, this does not mean that you may disregard the feminist expectations piled on her by so many female and male readers, based on their experience of real life (and not of handling dragons). Hence the impasse…

Ironically, then, we need to go back to Jane Austen for the fantasy of female empowerment, which allows the relatively poor Elizabeth Bennet to marry upper-class Darcy and climb in this way many rungs up the social ladder. Cinderella wins the game and gets to be, presumably, happy. In contrast, Game of Thrones has taken its ultra-realism so far that we are literally left with a colossal pile of ashes and the mounting anger of the many fans who thought that by endorsing fantasy they were supporting the alternative to the conservatism behind most mimetic fiction. It’s game over, not for fantasy but for fiction which does not listen to its readers and that can only tell tales of violence, with no sense of wonder or of hope – which is what we really need.

I publish a post every Tuesday (follow @SaraMartinUAB). Comments are very welcome! Download the yearly volumes from: My web:


May 13th, 2019

I was interviewed last week on a Catalan-language radio show on monsters (“AutoCine: Els Monstres”, Cerdanyola Ràdio). The presenter’s last question was ‘which famous monster is most imperfectly known?’ and I had to reply that this is Frankenstein’s creature.

Unfortunately, the movies have transmitted a very limited image of this monster, based on the theatrical line descended from Presumption; or, the Fate of Frankenstein (1823), the melodrama (with songs!) by Richard Brinsley Peake. This was the first adaptation of Mary Shelley’s novel and, as happens with modern film adaptations, many audience members took its fidelity for granted. The famous 1931 film directed by James Whale is, in fact, based on the 1927 play by English author Peggy Webling, who must have been familiar with Peake’s play. She, like him, characterises the monster as an inarticulate being, incapable of uttering any coherent speech. Webling, incidentally, is also responsible for the absurdity of calling the creature by his maker’s name. The monster speaks in later films (for instance in Roger Corman’s 1990 Frankenstein Unbound, based on Brian Aldiss’s novel) but only Kenneth Branagh’s 1994 adaptation reflects Mary’s original conception of the creature as an intelligent, perceptive individual. Even so, Branagh’s film cannot be said to give an accurate picture of the monster’s acumen and singular process of self-education.

Many critics have disputed Mary’s authorial decisions about this self-education. The monster, if you recall, takes shelter secretly in a hovel attached to the humble home of the De Laceys, a French family down on their luck for political and personal reasons. The arrival of the son’s Turkish fiancée, Safie, is used by Mary as the excuse to have the monster witness her education, which he mimics. Since the monster, as I explained in the previous post, is an enhanced (or augmented) Homo Sapiens, I’m ready to accept that he can profit by this second-hand method of learning, though I grant that the whole process does test the reader’s willing suspension of disbelief. This is further tested with the monster’s casual discovery of three fundamental books (John Milton’s epic biblical poem Paradise Lost, a volume of Plutarch’s Lives, and Goethe’s Sorrows of Young Werther). He also happens to be in possession, very conveniently, of Frankenstein’s journal. This volume covers the several months of the research leading to the creature’s creation, and the monster has it because Victor kept it in the cloak which the creature took to cover his naked body.

By the time creature and creator meet in the Alps, the monster can already use sophisticated speech, though he has never had the chance to interact with a fellow human being: all run away scared, or turn against him violently, as soon as they see him. If he tries to speak, this is to no avail–his monstrous physiognomy causes such overreaction that communication is simply impossible. If Victor can overcome his revulsion and sit down to patiently listen to his ‘son’, this is only because he has no option. His parental duty, as we know, means nothing to him, for the moment his baby was born, Frankenstein turned his back on him, expecting the ugly thing to vanish, somehow. The monster, however, insists that Victor must play the role of parent like any other father.

I’d like to comment on two passages, often quoted but, anyway, worth considering in order to learn who this monster is. I find it quite peculiar that in his process of self-learning the creature chooses no name for himself, for this complicates our reading very much. Very obviously, he is a man, for Victor has made him as such, and calling this new man ‘the monster’ and ‘the creature’ is something I very much dislike, since it is demeaning. The obvious name for him is Adam (a name he knows from reading Milton’s version of the Biblical fall in Genesis) but, for whatever reason, Mary kept him nameless, a questionable decision that somehow shows her bias against her own creation. (And that, indeed, confused Peggy Webling…).

In Chapter 15, the monster tells Victor about his having read the diary narrating his ‘accursed origin’ and the ‘disgusting circumstances’ of his unnatural birth. The diary also contains ‘the minutest description of my odious and loathsome person (…) in language which painted your own horrors and rendered mine indelible’. No wonder he is ‘sickened’. Logically, he questions Victor’s methods: ‘God, in pity, made man beautiful and alluring, after his own image; but my form is a filthy type of yours, more horrid even from the very resemblance. Satan had his companions, fellow devils, to admire and encourage him, but I am solitary and abhorred’. From this passage one must deduce that the monster does not look radically non-human but horridly human, and that his physical appearance is scary for that very reason. His ugliness, in short, is our own ugliness, as if you could take an average human being and deprive him of any feature that makes him moderately attractive. I remain, in any case, perplexed by the reaction of those who come across Victor’s new Adam, for they seem to lack the curiosity that led so many spectators to enjoy the strange frisson provided by freak shows in the 19th and the 20th centuries. The monster, let’s stop to consider for a second, does look human: he has no claws, or big fangs, or any other feature we connect with aggression–so why do people scream and run away at the sight of him? I do not quite understand why nobody stops, once the shivers are controlled, to ask him ‘what are you?’

Faced with his general rejection, the monster assumes his abjection and starts behaving in a vicious manner which corresponds morally to the ugliness of his physical appearance. As we know, he kills Victor’s youngest brother William and blames poor Justine, a mixture of servant and family member, for that crime. When, in Chapter 17, he demands from his creator that he manufacture a female companion to share his misfortune with, Frankenstein expresses serious doubts that this can be a solution to the problem of how to contain his evident ‘malice’. The monster is offended: ‘My vices are the children of a forced solitude that I abhor, and my virtues will necessarily arise when I live in communion with an equal. I shall feel the affections of a sensitive being and become linked to the chain of existence and events from which I am now excluded’. Famously, in The Bride of Frankenstein (1935), also directed by James Whale, the female monster starts screaming the moment she sees her intended male companion; she shows, instead, a manifest interest in the rather handsome Frankenstein… The novel has no similar scene because Victor decides to abort the bride, but it is very easy to see that the monster’s logic is very faulty, and sexist. He (that is, Mary) never thinks of the needs that the new Eve might have; in fact, she is to provide the same comforts as the later Victorian angel in the house: companionship but, above all, unconditional love and even admiration which will supposedly curb the monster’s alleged inclination to do evil. ‘Give me a nice woman and I’ll be a nice man’ is a recipe that, we know, does not work at all well.

Victor’s new Adam is, in the early stages of his life, a meek, well-behaved individual that gradually learns to respond with aggression to the abhorrence he is treated with. This is an obvious reading. I believe, however, that he is also naturally spiteful and resentful. I don’t mean naturally malevolent but the type of individual that will bear a grudge to the bitter end. Granted, the grudge he bears against Frankenstein is more than justified but the decision he makes to murder William and, later, Victor’s bride Elizabeth is unfair to the victims and, ultimately, counterproductive. Naturally, we should not forget that Mary intended Frankenstein to be a gothic story and she had to stress the moral monstrosity of the creature. In her argumentation, the monster is corrupted, so to speak, by the animosity people display against him and, so, the community is partly responsible for his crimes. However, you cannot be both innocent and guilty of the murders you choose to commit, and this is the unstable position in which Mary places her new Adam. Super-human as he is in many aspects of his anatomy, he is, nevertheless, very human in the worst aspects of his personality: his capacity for hatred and violence. Nothing will convince me that the creature would have been a good companion for the bride. Or a good father to their children.

The very fact that I am discussing these moral issues shows how complex the characterisation of Mary’s monster is. In the end, the main challenge she poses to her readers is forcing us to wonder how we would react if we ever came across Victor’s man. Would we give him a chance to explain himself? Would we be part of the mob chasing the poor thing in so many films? Would we be disgusted, fascinated, or both? How much difference from our human standard, in short, are we willing to tolerate in our fellow human beings? These are all valid questions, and I marvel that an eighteen-year-old girl could manage to put them together in that strange child of her imagination that Frankenstein is.



May 6th, 2019

These days I’m teaching Frankenstein (1818, 1831) and writing about one of its thousands of descendants, Richard K. Morgan’s Thin Air (2018). As science and technology advance and speculative fiction gets closer to everyday life (or perhaps the other way around), writers imagine creatures that would have baffled Mary Shelley. The newer creations are sometimes categorised as monsters, sometimes as freaks, depending on whether they appear to be capable of overwhelming Homo Sapiens or just contribute an exciting sense of difference to the narration where they appear.

Having written my doctoral dissertation on monstrosity, I know that taxonomies have limitations, and that a full inventory of the monsters and freaks of one period may have to be reconsidered for the next one. I’m also aware that many of us, scholars, are still too wary of reading gothic, fantasy, and science fiction, preferring instead to read theory, which is always safer to quote and sounds more properly intellectual. I believe this is the root of two serious problems: the use, abuse, and misuse of concepts such as cyborg and post-human in an abstract way, without much consideration of the particularities of the fantastic characters in question, and the tenacious but incorrect overlap between human and Homo Sapiens.

When Mary Shelley’s Frankenstein was still a curio often attributed to Percy Shelley and nobody dreamt of making this novel an integral part of university courses on Romanticism, British SF writer Brian Aldiss declared in Billion Year Spree (1973, later expanded with David Wingrove as Trillion Year Spree, 1986) that Mary was the ‘origin of the species’. He praised her work as the very foundation of SF, at the same time arguing the thesis that since Frankenstein is gothic fiction, SF’s essence is shaped by horror – not necessarily that inspired by scary monsters but the sublime fear that we may feel if we stop to consider the universe and our place in it. Aldiss was so in love with Mary that in that same year, 1973, he published a novel, Frankenstein Unbound, in which he fantasises about meeting her (Joe Bodenland, his delegate in the text, travels from the future to give Mary a copy of her novel…). Many years ago I wrote an article about Roger Corman’s rather crazy film adaptation, one of the many films that have toyed with the motif of the monster made to be better than human but condemned to being hated.

Now that Frankenstein is part of our syllabus, my personal choice has been to ask my students to present in class a brief text about a film that connects with Mary’s creation. If all goes well, I might publish their work later this summer and offer a nice guide to this peculiar sub-genre. Now, as part of class activities I’m doing some necessary close reading, during which I had quite a big surprise. It’s funny how reading aloud reveals layers of meaning that go unnoticed in silent reading. I was reading this central passage from Chapter IV, in which Victor narrates how he made his man, when I stumbled upon a word I had not noticed before. See for yourself (this is the 1831 edition, at Project Gutenberg):

I collected bones from charnel-houses and disturbed, with profane fingers, the tremendous secrets of the human frame. In a solitary chamber, or rather cell, at the top of the house, and separated from all the other apartments by a gallery and staircase, I kept my workshop of filthy creation; my eyeballs were starting from their sockets in attending to the details of my employment. The dissecting room and the slaughter-house furnished many of my materials; and often did my human nature turn with loathing from my occupation, whilst, still urged on by an eagerness which perpetually increased, I brought my work near to a conclusion.

The word is ‘slaughter-house’. The man (not creature, not monster) that Victor manufactures is made of the pieces of human dead bodies but, here is the surprise, the passage hints that animal parts are also used for his body. Possibly, many scholars have already commented on this rather shocking issue, but I had simply not noticed. I don’t recall, in any case, a passage in the novel which discusses the non-human components that contribute to making the new man. Possibly, H.G. Wells did notice the presence of the slaughter-house next to the dissecting room, and the charnel house, and this is where his hybrids come from in The Island of Dr. Moreau (1896).

Before the passage I have quoted, Victor declares that he is motivated by a straightforward patriarchal fantasy: ‘A new species would bless me as its creator and source; many happy and excellent natures would owe their being to me. No father could claim the gratitude of his child so completely as I should deserve theirs’. There is a hilarious moment in the X-Files episode ‘The Post-Modern Prometheus’ (5.5, 1997) in which Mulder enthuses about the possibility of creating life which imitates humans, as a mad geneticist he has just met is doing. Always a cool-headed pragmatist, Scully replies that this already exists: it’s called reproduction. The passage I have quoted is, of course, usually read as a sign of Victor’s arrogant bid to try to replace God or, from a feminist angle, to usurp women’s power to create life. Once you become aware of transhumanism, however, Victor can be read as a transhumanist and the other way around: transhumanism appears characterised as the patriarchal aberration it is when you read Frankenstein.

Now it is time to discuss labels. To begin with, Victor correctly refers to a ‘new species’ and not a ‘race’. We are Homo Sapiens, which is a species of the genus Homo. This genus and the genus Pan (chimpanzees, bonobos) are part of the tribe Hominini, which, together with the tribe Gorillini (gorillas, obviously), makes up the subfamily Homininae. There is currently just one species in the genus Homo but there used to be more, Homo Neanderthalensis among them. Scientists do not agree on the definition of the word species for the very simple reason that since species are in a constant state of evolution, fixing them taxonomically makes little sense. They warn us, at any rate, that species differentiation (the process by which a new species branches out from a previous species) is extremely slow, and not visible in historical terms. To sum up, then: a) we should NOT use the word ‘human’ as if it only applied to Homo Sapiens, for it applies to all past and future species of the genus Homo; b) evolution cannot be appreciated in small periods. I’ll add c): evolution is a reaction to changes in the environment and it is therefore quite impossible to imagine, much less say with certainty, how Homo Sapiens will evolve and into what.

Transhumanists, as you possibly know, believe that the evolution of Homo Sapiens should be controlled and that technoscience should be applied to produce better humans. This is exactly what Victor believes and does, even though in his pre-Charles Darwin times he had no idea of evolution (or of genetics!). Victor’s new man has qualities that Mary Shelley calls ‘super-human’ such as an enormous resistance to heat and cold, little need of nutrients (he is a vegetarian!), and a powerful physique that allows him to run fast and leap high. Those who criticise the unlikely way in which he learns to command a language (French, incidentally), and even read, forget that he is no ordinary Homo Sapiens but an enhanced, or augmented, man. Following transhumanist tenets, the creature is actually a transitional individual. His children, born of the union with the female that Victor aborts at the last minute, would be the real post-human species. My main objection to this is that the couple’s children would not be post-human but post-Homo Sapiens: still human (part of the genus Homo) but belonging to a different species, as Homo Neanderthalensis was different from Homo Sapiens.

Speaking, then, of the post-human is, excuse me, quite lazy. Our future will be post-human only if the genus Homo dies out, replaced by some mutated, new animal species (as the franchise of Planet of the Apes is narrating) or by artificial intelligences, in what Ray Kurzweil famously called the singularity. The first scenario is quite unlikely, in view of how we ill-treat animals, whereas the second is simply silly. If, as happens with Skynet in The Terminator (1984), a computer goes rogue on us and starts making the combat robots that will end Homo Sapiens, the solution is quite easy: shut down the power grid and the computer with it. This might result in an overnight return to the Middle Ages, or further back, but we tend to forget that, for instance, the Roman Civilization did very well with no electricity.

If, as the passage I have quoted earlier on suggests, Victor’s new man is a transspecies human-animal hybrid, then, technically speaking, he is no longer Homo Sapiens and he is certainly post-human. However, most discussions of Frankenstein avoid the animalist angle and focus on the issue of how Victor jump-starts evolution rather than patiently wait for Earth to bring forth the replacement for Homo Sapiens. His man has no inorganic pieces whatsoever, which means that he is not a cyborg. My personal view is that the creature is a replicant, as he is 100% organic but made in a lab rather than born out of a woman or an artificial incubator. Like the replicants of Karel Čapek’s pioneering play R.U.R. (Rossum’s Universal Robots, 1920) and those in Blade Runner (1982), Victor’s man awakens to life as an adult – he’s never a baby. Unfortunately, the word robot, introduced by R.U.R., has also caused much confusion, for although in the play it simply means ‘worker’ (its meaning in Czech) in the popular imagination it was coupled with the older notion of the automaton, hence generating the modern idea of the robot, a fully mechanical, non-human machine. In the famous 1931 film version of Frankenstein, the creature was presented as an inarticulate, lurching, stiff individual, which hinted that there might be a hidden mechanism in his body, as automata have. He looked, in short, cyborgian rather than totally human.

The problem with the cyborg, or cybernetic organism, a concept invented in the 1960s but mostly popularized in the 1980s, is that it connects poorly with genetic engineering. Take the protagonist of the novel by Richard Morgan which I’m writing about. Hakan Veil is sold into indentured work by his impoverished mother when he is still in her womb. He is heavily modified by means of genetic engineering and digital implants to become a super-soldier of the kind needed in interplanetary travel to quell possible insurrections. The corporation that employs him also transforms him into a hibernoid, that is to say, a person who sleeps four months a year but who can be deployed day and night during the remaining eight on board a spaceship. Whereas digital implants cannot be inherited by the offspring of cyborgs, genetic modifications are quite another matter. This is the reason why cyborg is an insufficient label to describe Veil. He has no children and we cannot know whether his mutations would be automatically inherited by his offspring. If this happened, and the children were extremely different from Homo Sapiens, then they would be a new Homo species – but still human, just as Veil is fully human despite being a weird type of Homo Sapiens.

I believe that Mary Shelley was absolutely right to warn readers against the transhumanist project of creating post-Homo Sapiens life, and also that Morgan is likewise absolutely right to warn that transhumanism will make slaves of us, and not free human beings. The difference is that, logically, whereas the vocabulary I am applying to Frankenstein was unknown to its author (the label science fiction appeared in the 1920s), contemporary authors like Morgan are discussing transhumanism with a remarkable knowledge of what it implies. Like Victor, the transhumanists expect the new species they want to turn Homo Sapiens into to be grateful but, again like Victor, they are making decisions that involve all of us without asking for our opinion. Perhaps, strictly speaking, the first transhumanists were the Homo Sapiens individuals who decided that having, as humans, the whole Earth to ourselves was a pretty good idea. I’ve never ever believed for a second that Homo Neanderthalensis simply died out… Just recall that for them we, Homo Sapiens, were the others… the post-humans that would replace them. And so we did.

I’ll leave philosophical post-humanism for another post… or rant.



April 8th, 2019

I have spent a good portion of my morning today working on a talk I’m giving next month at the Universidade de Santiago de Compostela. The topic is Cultural Studies, specifically my point of view on their evolution in Spain. As it happens, I was invited ten years ago to lecture on this very same topic and this neat figure offers a good chance to consider what has happened in the last decade. Since I have decided to use only one third of my talk for an introduction to the matter and focus in the other two thirds on a practical example, today’s post has the double function of serving as a complement to the talk and allowing me some room for reflection.

One obvious problem of Cultural Studies is that we are constantly in need of defining what it consists of, particularly in comparison to ‘Filología’ (I’m using the Spanish word to distinguish it from English ‘philology’, which is the discipline in charge of guaranteeing the preservation of ancient and old literary texts). ‘Filología’ refers to the study of a given culture on the basis of its language and its literary canon, to the exclusion of other texts, either because they are regarded as inferior in quality or because they are not based on writing. Cultural Studies, in contrast, considers any cultural manifestation susceptible of being read and interpreted a text worth studying with its multidisciplinary methodology. I’m aware that this is a tautological definition, for Cultural Studies both defines and articulates as texts what it studies. Thus, you might not think of the popular drink Red Bull as a text but the moment you approach it as the object of your research, it becomes a text worth exploring in all aspects of its cultural impact. And no: this is neither Anthropology nor Sociology, it’s Cultural Studies.

Ten years ago the new degrees based on the European Credit Transfer System (ECTS) were introduced in Spain and that seemed a turning point in the evolution of the old-style ‘Filología’ into the new-style ‘Estudios’. Many degrees were renamed in this way, whereas in other cases ‘Filología’ was replaced by ‘Lengua y Literatura’. The label ‘Filología’ still survived in the nomenclature of some BA degrees and in the names of Departments, which did not change. Thus, I work for the Departament de Filologia Anglesa, even though our BA carries the label English Studies (the one, by the way, recommended by our national association AEDEAN). The truth is that ‘Filología’ was not generally dropped from the degree titles because there was a widespread need to extend the field of Cultural Studies but because it was a label unpopular with students. They simply stopped attaching any meaning to it and it was expected that the new labels might help prospective students to understand better what we do and, hence, enrol in our courses. I have already commented on this question several times here in this blog but I’ll mention again that, as much as I like the idea of being a teacher of ‘English Studies’, I find that graduates with that degree lack a professional title similar to the old ‘filólogo’.

Thanks to the efforts of my colleague Felicity Hand, my Department has been offering Cultural Studies since 1992 (well, ‘my Department’ actually means she, our friend Esther Pujolràs, and myself). Prof. Hand was the organizer of the first Cultural Studies conference in English Studies in Spain, back in 1995–probably an absolute first in the country. She was also a member of the core group, together with Rosa González (UBarcelona), David Walton (UMurcia), and Chantal Cornut-Gentille (UZaragoza), which founded the Culture and Power conferences, of which the UAB meeting was the first. There were fifteen annual and biennial meetings until 2015 and the same number of proceedings volumes. Besides, Antonio Ballesteros became in 1998 the first coordinator of the Cultural Studies panel for the AEDEAN conference and in 2001 the Iberian Association of Cultural Studies (IBACS) was born. Other associations, such as SELICUP (Sociedad Española de Estudios Literarios y Cultura Popular), also welcomed Cultural Studies, though coming from a very different perspective.

My personal impression is that the establishment of Cultural Studies in Spain is a partial failure. On the one hand, if you check the titles of the post-2009 BA and MA degrees offered in Spain, you will notice an increase in the number of titles that do use the label ‘Estudios Culturales’. On the other, not a single Department has yet taken that name and I know of very few scholars who call themselves Cultural Studies specialists. If you care to check Dialnet you will find a list of publications on the topic (see for instance Chantal Cornut-Gentille’s Los Estudios Culturales en España: Exploraciones teórico-conceptuales (2013) and the proceedings of the I Congreso Internacional de Estudios Culturales Interdisciplinares Cultura e identidad en un mundo cambiante (2018)) but I don’t think that Spanish academia has truly accepted Cultural Studies. It is still a marginal discipline.

This marginality is usually attributed to the conservatism of the Spanish university and I would agree that this is indeed the case. It astounds me that in 2019 students are still being told that no dissertations should be written on Harry Potter or Star Wars, but this is still happening. It is my belief, however, that other factors need to be considered.

To begin with, although the Culture & Power circle, to which I myself belonged, did much to introduce Cultural Studies into ‘Filología Inglesa’, we had practically zero impact in the rest of Spanish academia because our publications were in English, including David Walton’s excellent handbook Introducing Cultural Studies (2007), published in Britain by Sage. The language is a barrier but so is the territorial division of the Spanish university. Thus, the Universidad Carlos III offers a programme in Cultural Studies which combines aspects of the degrees in the Humanities, Sociology, and Media (both Journalism and Audiovisual Communication) but no ‘Filologías’. In Britain the degrees in Cultural Studies are closer to Media Studies than to English but the fact is that those of us who had first access to the original bibliography in English failed to connect with other areas of study in Spain and make ourselves visible. This has generated strange distortions: for me, any text originally in English is part of the field of Cultural Studies, whereas for many Spanish specialists in Media Studies I might be guilty of intruding into their field by exploring texts which are not literary (like film, series, or videogames). As long as we publish in different languages this is not a problem but territorialism has certainly prevented us from establishing a common ground.

On the other hand, despite the efforts of AEDEAN to make networking more fluid within English Studies, we still suffer from a chronic state of disconnection. By this I mean that since the Ministry does not publish the list of R+D+I subsidized projects in any accessible way (you need to check the Boletín Oficial del Estado year by year), it might well happen that your neighbour in a nearby university is doing very similar research without your knowing about it. There have been, then, several groups doing Cultural Studies without being aware of each other, and without even being aware of the existence of IBACS and of the Culture & Power seminars. This means that many young researchers in the field have no idea of how their path has been eased by us, the senior researchers; there is no sense of tradition, only a constant reinvention of the wheel. This also means a certain stagnation instead of cumulative progress.

Within the Culture & Power circle what happened was that, progressively, each of us started focusing more narrowly on our field of interest. To name a few persons, Rosa González put all her energy into Irish Studies and Felicity Hand into Post-colonial Studies. I myself focused on Gothic and science fiction. Each Culture & Power seminar, then, had to have a wide-ranging topic that could encompass many different interests and eventually we started having the impression that the conferences were too open. The reason why they have been discontinued has to do, then, if only partly, with their no longer being necessary, since other conferences now welcome Cultural Studies specialists. The AEDEAN panel, I think, suffers in contrast from another kind of vagueness: it has too large a presence of papers about Literature and too little research based on other texts, though films and TV series are also present. Of course, one can produce Literary Studies with a Cultural Studies approach, but there is too much dependence on what I can only call standard Literature.

There is also another matter that I’m not really at liberty to discuss in all detail because I would have to name persons who might feel offended by my partial vision of events. I’ll go, however, as far as I can. Something which is never discussed in academia is how personal feelings affect the expansion and consolidation of research areas, but this does affect Cultural Studies. I don’t mean jealousy or anything remotely in that line; I mean a perplexing inability to stay in touch and go on working together. Or to click, for lack of a better word. What appeared to be promising connections failed in a variety of cases. Persons who seemed friendly had an aggressive agenda in mind; others remained friendly but oddly unapproachable. Then, within the group we may have made mistakes, such as not taking the road to becoming a research group, for which I myself have been blamed. My point of view is that we were too diverse to cohere in the terms which the Ministry requires and I still think this is a correct perception, but maybe this is me being mulish. I also think, and I’m sorry to say this, that we lacked strong leadership. As you can see, I’m talking about the past because, in a way, the generation that introduced Cultural Studies into English Studies in Spain is approaching the end of their careers and the ideas that have been abandoned will not be taken up again.

I have then a bittersweet feeling: I think that English Studies is the study area most welcoming to Cultural Studies in Spain but I don’t think this means that it is fully consolidated. I’m happy to see that researchers apply its methods often without being 100% aware that this is what they’re doing but I worry about the backlash from right-wing academics constantly arguing that Cultural Studies is nothing but left-wing activism. Of course, that’s the whole point–questioning how ideas and values are formed, though the politics of Cultural Studies are a matter for another post.

To conclude, I should say that whereas Cultural Studies as we practice it in English Studies in Spain has radically changed the way we think of identity–exploring nationality, ethnicity, gender, age and other factors–there is still an obvious shyness about breaking textual barriers and fully accepting variety. Popular music and videogames are still mostly virgin territory and, as I know first-hand as a reader of science fiction, not all genres are equally appreciated.

I don’t think that we are ready for Literature to take up less space in our degrees (and research) but this is happening in the world around us and sooner or later we’ll have to consider this question. And truly welcome Cultural Studies.

I publish a post every Tuesday (follow @SaraMartinUAB). Comments are very welcome! Download the yearly volumes from: My web:


April 1st, 2019

Obsessing about how each of the six great male Romantic poets made a living is not the most orthodox way to approach them. It is now John Keats’s turn and, once more, this is, I think, a very relevant issue.

I’ll begin, then, by mentioning Keats’s guardian Richard Abbey, the man who was put in charge of young Keats’s education in the absence of his father (a successful stable keeper who died when the boy was eight) and his mother (dead when he was fourteen). Keats was born into what might be called a middle-class family and he received a rather good primary and secondary education at John Clarke’s liberal school, happily for him away from Eton (where, remember, Shelley was mercilessly bullied). The headmaster’s son, Charles Cowden Clarke, awakened the love of poetry in the child Keats but, very sensibly, Abbey chose for his not-too-rich ward an apprenticeship, at age fourteen, with a surgeon/apothecary. After this, Keats enrolled as a medical student at Guy’s Hospital in 1815 (aged nineteen). Keats had started writing poetry in 1814 and by 1816–already in possession of an apothecary’s licence, which could have led to his being a physician and surgeon–he decided to abandon medicine (and not because he lacked a talent for it).

Abbey’s fury is easy to imagine, not only because a great deal of money had been invested in Keats’s training but also because the poetry market could by no means guarantee a living. For whatever reasons, the legacies of his mother and grandmother, which could have freed Keats from the need to do paid work, were left unclaimed and he mainly depended on Abbey’s generosity and that of his friends to progress in his career as a poor, bohemian poet. Sales of his three poetry volumes were very low and Keats basically lived in poverty–his lack of prospects was the reason why he could not marry his beloved Fanny Brawne (she was in mourning for him for six years and only married twelve years after his death). The difference with Lord Byron and Percy Shelley, who were also poets of subsidized leisure, is that, unlike Keats, they had a title and an upper-class family background to rely on. It is actually quite extraordinary that Keats launched himself as a poet in his social situation–Wordsworth, remember, accepted a position as Distributor of Stamps once he was a family man; Coleridge had a patron. Keats, of course, died too young, aged only twenty-five, to be in a similar position as a husband and a father and so he embodies–even more closely than the celebrated poet Thomas Chatterton (1752-1770), who died by his own hand aged only seventeen–the myth of the young genius taken too early by death.

Read in hindsight, Keats’s biography makes perfect sense: he quit medicine to focus on poetry for a short career of just five years (1816-1821), as if he knew that he was going to die. The point, though, is that he did not know that he would die young. Keats chose poetry because he felt strongly entitled to an immortality for which the only foundation was his own firm belief in his talent. Is this wrong? Isn’t this the stuff of every Romantic dream of being an artist? Yes, it is, but let me ask this question: if Blake could produce his amazing oeuvre while still working long days, why did Keats see his poetic vocation as incompatible with the pressing need to make a living? What if he had turned out to be a bad poet? How many others have destroyed their lives following the myth of the artist devoted to his/her art? And no, I don’t have a child asking to be an artist full time… I’m just making the point that these choices and his early death do not constitute a tragedy. They are part of the socio-economic subtext of Romantic poetry, a genre which depends very heavily on a youthful sense of leisure financed by others (mainly patient, devoted, besotted family and friends), though this is hardly ever commented on. Would I rather not have Keats’s poems? No, of course not–but I’d rather not pass on as Romantic myth what was a very snobbish view of paid work as a sort of humiliating activity.

Like Shelley, Keats was only known among a small coterie in his lifetime. He emerged as a poet at a time when Wordsworth, Coleridge, and Byron were already stars and was therefore seen as a member of a school about to peak. His poetry was not particularly well received, to the point that Byron established the myth that what actually killed poor Keats was the negative reviews of Endymion, the poem Keats saw as his masterpiece. So much for legend. Keats actually died for lack of antibiotics, only available from 1945 onwards, and because the poorly understood nature of tuberculosis led to appalling medical treatment (believe it or not, patients like him were bled… as if they needed to lose even more blood). Fanny could not do for Keats what Mary did for Percy Shelley with the posthumous edition of his poems and, apparently, none of Keats’s friends could agree on how to approach his biography. There were scattered comments and even Shelley’s monumental poem “Adonais” (in fifty-five stanzas!) but it fell to Victorian admirers who had not met Keats in life to write his biography. Incidentally, Alfred Tennyson was one of Keats’s main champions.

Allow me to stop for a while at Andrew Motion’s 1997 biography, which came after a thirty-year silence on Keats’s life (by the way: the most recent biography is Nicholas Roe’s 2012 volume). Motion is a first-rank poet who simply loves Keats and so, though no scholar, he published his book as a heartfelt homage. Interestingly, his efforts elicited a furious attack from the leading American poetry scholar Helen Vendler, who absolutely hated Motion’s style: “There is an odd mixture, in his chapters, of the old vocabulary of appreciation with the newer vocabulary (never adequate to poetry) of materialist criticism”. Vendler was incensed by Motion’s discussion of issues connected with class, gender, and race, believing that scholarly comment on the poems should have been the main focus. I’m myself a cultural materialist (hence my comments on the Romantics’ income) so I cannot sympathise with her point of view. I mention her vitriolic attack because it gives us a chronology for when scholarly analysis started being inextricably mixed with Cultural Studies concerns: the late 1990s.

The ‘materialism’ attached to any literary career is, sorry Vendler, very important. It connects not only with the material production of the editions that help to canonize authors and keep them alive but also with many other aspects worth considering, like heritage and adaptation. Keats is right now a text beyond the texts he produced, as we see in the way his person is recalled beyond the scholarly analysis of the poems. The road to his canonisation was fully established by Life, Letters, and Literary Remains of John Keats (1848), edited by Richard Monckton Milnes, himself a poet in the Apostles Club which also included Tennyson. But the poems are just part of Keats’s construction as a Romantic icon. His memory is, intriguingly, also celebrated in two houses he never owned: Keats House in Hampstead (London), Wentworth Place, was the property of his friend Charles Armitage Brown–Keats just rented rooms there; the Keats-Shelley House in Rome is not even connected to his writing, as the Hampstead house is, but to his death–this is where his journey south seeking a warmer climate ended. Shelley, by the way, never shared a home with Keats so it’s funny that their names have been linked.

You can enjoy on YouTube the introductory video that Keats House offers its visitors and compare it to another production of similar length and content, ‘The strangely encouraging life of John Keats’, to consider: a) how a life can be summed up in just nine minutes (in both cases); b) which aspects are highlighted (consider the mother’s role). As for the house itself, I had great fun watching vlogger Jesse Waugh’s report, also nine minutes long. The age of the amateur documentarian is just wonderful! Watch next the official video ‘A Walk Through the Keats-Shelley House with Giuseppe Albano’, a nice five-minute piece designed to… ask for funding from admirers. Then wonder a) why we visit these places at all, b) what type of fetishism they depend on, c) whether seeing the locket with Keats’s hair, and his life and death masks, illuminates our understanding of the poems. I have visited the house in Hampstead and, yes, you do get that funny feeling of ‘my, this is where Keats wrote his best poems’ but, then, each of these museums is an artificial construct that caters to aspects of fandom quite tangential to the persons there celebrated. Or are the museums central and the poems tangential?

The heritage industry extends to adaptation, usually through the biopic but also through other audio-visual and print products. For all of these the basis is biography, based in its turn on scholarly research. I have already alluded to some films based on the lives of the Romantic poets: Pandaemonium (Wordsworth and Coleridge), and a few dealing with Byron and Shelley–Gothic, Rowing in the Wind, Enchanted Summer, Mary Shelley… John Keats has received the attention of New Zealand film director Jane Campion, known mainly for the Victorian drama The Piano. Her film Bright Star (2009) is based on Motion’s biography but focuses specifically on the doomed romance between Fanny Brawne and John Keats. I cannot offer an opinion since I have not seen it: as much as I like Ben Whishaw (Keats), the reviews complaining about how boring the film is put me off. And then there are my class prejudices–the impossible love which the film narrates has to do not only with Keats’s failure to make sufficient money to marry Fanny (excuse me!) but also with the gender prejudice that prevented the daughters of gentlemen from making a living. Today, talented Fanny would probably be a fashion designer and it would be her choice (or not) to maintain Keats while he wrote poetry. At the time when they were alive this was not an option and, so, what is presented as a tragic romance is just the product of ugly gender-related social limitations. Biopics tend to do that: focus narrowly on their subject, paying little attention to the bigger picture–but, then, socio-economics do not make good R/romantic plots.

There is another adaptation which I’d like to mention: the four novels by Dan Simmons known as the Hyperion Cantos: Hyperion (1989), The Fall of Hyperion (1990), Endymion (1996), and The Rise of Endymion (1997). These are, as is habitual in Simmons’s work, a heady mixture of science fiction and unbelievably rich literary allusion. Every time someone tells me that SF is a trivial genre, I ask them to read Simmons and then get back to me. In an often quoted interview, Simmons comments that “In fact, when I first started writing Hyperion, I knew I’d have to deal with Keats’ long poems, ‘Hyperion’ and ‘The Fall of Hyperion’. I really appreciated his theme of life evolving from one race of gods to another, with one power having to give way to another, as Hyperion must”. You don’t need to have read all of Keats to follow Simmons, but the more you know the better you can catch the allusions–and enjoy Keats’s presentation as an immortal ‘cybrid’ (a mixture of clone and AI). In another novel, in this case by Tim Powers, The Stress of Her Regard (1989), Byron, Shelley and Keats encounter their terrible muse and are vampirised by her and the even more terrible Nephilim…

How about Keats’s poems? Two very quick comments, as I have room for no more: it is really amazing that he is remembered for a very short list of pieces (mainly the odes), and, now that we have lost the art of letter-writing to WhatsApp and even more criminally illiterate social media, it is important to recall that Keats was a magnificent writer of letters. His intellectual work is scattered among them (he did not write other essays) but so is his love life–the letters he addressed to Fanny Brawne are pure poetry, though in prose. And everyone agrees that he was a poet of sensuality, which is why the Pre-Raphaelite painters took inspiration from so many of his poems–another form of adaptation.

Keats chose ‘Here lies one whose name was writ in water’ as the epitaph for his tomb in the Non-Catholic Cemetery for Foreigners in Testaccio, Rome (where Shelley’s ashes are also buried), believing he had failed in his bid to conquer immortality. His friends added more words, claiming that Keats died ‘in the Bitterness of his Heart at the Malicious Power of his Enemies’, thus perpetuating the legend that he was killed by his mean reviewers. This is a point often noted in introductions to Keats (in which Richard Abbey is unanimously characterised as a villain) but it may be about time to consider this epitaph from the opposite point of view: Keats’s view of himself as a man who could reach immortality through poetry is an extraordinary stance to assume, and we need to deconstruct it as part of the Romantic myth. I’m not trying to kill off personal aspiration or deny Keats’s right to make the most of his talent. I’m baffled by the economic dependence, though what truly irks me is the implicit celebration of bohemian, self-chosen poverty as part of the Romantic act of creation. Surely, among the truly poor there may have been one or two Keats, maybe three… but they never ever dreamed of immortality, how could they? For them having a Richard Abbey to help them would have been dream enough–but I’d rather stop here before I make myself totally unable to read Keats… Damned cultural materialism!!



March 25th, 2019

The title of my post today is intended to be ambiguous: I mean to say that it is thanks to the love of his wife Mary that Percy Shelley is celebrated as a major poet, and that both he and all poetry readers must thank her for her efforts. As she wrote, ‘He died, and the world showed no outward sign. But his influence over mankind, though slow in growth, is fast augmenting; and, in the ameliorations that have taken place in the political state of his country, we may trace in part the operation of his arduous struggles’. Yet, while it is true that Percy Shelley’s posthumous fame was based on the gradual discovery that his texts were politically relevant and inspirational for later times, his writings would not have survived without Mary’s editorial intervention and her determination to make them known.

Percy Bysshe Shelley (1792-1822) died shortly before his thirtieth birthday. He had published a long list of volumes (about twenty) including poetry, drama, fiction, and essays, but Shelley was known only to a small circle of connoisseurs. He had no public fame in life comparable to that which his good friend Lord Byron enjoyed (if that is the correct word) though he had a high impact among those who knew him. Famously, Byron said of Percy Shelley that ‘I never met a man who wasn’t a beast in comparison to him’, which suggests he was also well liked as a person, not only as an artist of the word.

Percy drowned in the sea near Livorno, in Italy, where he had lived since 1818, in a boating accident that was the product of imprudence and poor seamanship. A fierce summer storm caused his poorly built boat, the Don Juan, to sink. Percy, his friend Edward Williams, and boat boy Charles Vivian had no time to react. Shelley’s much disfigured remains eventually washed up on the shore and he was cremated on the beach following Italian quarantine laws. Legend, established by the mendacious Edward Trelawny, has it that his heart survived the burning, though Duncan Wu argues that the cherished relic is possibly a piece of the liver… No matter. His devastated widow–who was only twenty-four and had also lost three children–set out to make sure that the memory of her husband survived for posterity, with the help of devoted friends like Leigh Hunt.

In 1824, two years after Percy’s death, Mary published Posthumous Poems of Percy Bysshe Shelley, a lovingly assembled volume which shows her accomplished editorial skills (she worked in some cases from almost undecipherable manuscripts). Amazingly, Mary had to withdraw this book from circulation at the request of her father-in-law, Sir Timothy Shelley. He adamantly refused her the right to publicise the details of his son’s complicated private life, fearing that scandal would hurt the snobbish family. Percy’s father only relented when he was approaching ninety, apparently out of affection for his grandson Percy Florence, Percy and Mary’s only surviving child. Sir Timothy finally allowed Mary to publish The Poetical Works of Percy Bysshe Shelley in 1839, on condition that she did not include a biography. Mary, however, added abundant notes to the poems that can be read as a sort of covert life of the poet. She had no doubt that her notes would tell the truth about the man, for she had ‘the liveliest recollection of all that was done and said during the period of my knowing him’.

I’ll get back to Sir Timothy later but I’d like to stop first at Mary’s ‘Preface’ for the 1839 anthology. My opinion about Percy Shelley is no doubt coloured by the negative view transmitted by my dear teacher Guillermina Cenoz: that he was, basically, a selfish man. She saw Frankenstein as a work which Mary wrote mainly to expose and covertly punish her husband’s self-centred pursuit of his artistic career for the cost it imposed on family life. I tend to agree with her view, for Percy’s biography is, besides, full of evidence of his palpable need to get attention from adoring women: not only his two wives (Harriet Westbrook and Mary Wollstonecraft Godwin) but also other women present in his life as intimate friends, such as Jane Williams.

It is often supposed that Percy was a practitioner of free love, and that he not only had liaisons with other women but also tried to have Mary involved with other men. I think this is part of our constant over-sexualization of every close relationship and that Percy was, rather, a man who craved emotional attention. Of course, what do I know? It occurs to me, though, that if he had misbehaved in a very serious way, Mary would not have made the effort of producing the two volumes (the second edited while her health was seriously impaired). She would not have written the preface either for, though she speaks of fulfilling a duty, nobody really expected her to do anything for her late husband.

Mary’s preface has been accused of sanitizing Percy and offering an angelic view of the man. She called him ‘a pure-minded and exalted being’ and though she referred to his brain and not his body, her hagiography, which led to Shelley’s canonization in Victorian times, is only now being contested from a more politically-oriented stance. It is important to recall that Mary was writing under the strict surveillance of Sir Timothy and that a loving widow (she never remarried) is probably not the most impartial judge of her dead husband. I find, however, the preface as candid a view of Percy as was possible under the circumstances and I don’t think, anyway, that an artist’s widow in more recent times would produce something substantially different in tone and intention. It is also interesting to note that the efforts of Mary’s own father, William Godwin, to honour his dead wife’s legacy with Memoirs of the Author of A Vindication of the Rights of Woman (1798) caused much outrage because of his outspokenness. This was no doubt a precedent Mary had in mind.

Mary begins by mentioning the ‘obstacles’ now ‘happily removed’ which allow her to ‘fulfil an important duty,—that of giving the productions of a sublime genius to the world, with all the correctness possible, and of, at the same time, detailing the history of those productions, as they sprang, living and warm, from his heart and brain’. She will offer no comments on his private life, ‘except inasmuch as the passions which they engendered inspired his poetry’. A bit mysteriously, she writes that the time ‘to relate the truth’ has not yet come and that she will not, anyway, offer a convenient version. ‘Whatever faults he had ought to find extenuation among his fellows, since they prove him to be human; without them, the exalted nature of his soul would have raised him into something divine’. To err is to be human, then, though we will never know to what faults Mary referred. And why should we?

Now, for the main qualities: ‘First, a gentle and cordial goodness that animated his intercourse with warm affection and helpful sympathy. The other, the eagerness and ardour with which he was attached to the cause of human happiness and improvement; and the fervent eloquence with which he discussed such subjects’. Mary then launches into presenting Percy as a man fully committed to the cause of political freedom, with utmost passion: ‘any new-sprung hope of liberty inspired a joy and an exultation more intense and wild than he could have felt for any personal advantage’. These words were written after the passing of the 1832 Reform Act, the first timid step towards the widening of the franchise among men in Britain, and Mary stresses that decades before, when her husband was politically active, defending any kind of freedom was a risky enterprise. Percy’s poetry reflects ‘the determination not to despair’, against the tenet that Romanticism is the expression of despair.

Mary argues that Percy’s poems are of two types: ‘the purely imaginative, and those which sprang from the emotions of his heart’. Of the second type, the ‘more popular’, she writes that they were the expression of personal feeling that, while running deep, he was ‘usually averse to expressing (…) except when highly idealized’. This is puzzling, for it suggests that Percy’s ‘intensity of passion’ and ‘extreme sensibility’ were better manifested in the poems than in person. Mary refers to finding fragments of unfinished poems with manifestations of his deep self of which she was not aware but, then, every person leads a secret emotional life not even available to their spouses. Interestingly, she mentions that Percy himself valued the ‘metaphysical strain’ expressed in the less popular poems above the personal effusion: ‘He loved to idealize reality; and this is a taste shared by few’, though she trusts that there is plenty in his Platonic poems ‘that speaks to the many’.

Mary, born in 1797, was forty-two when she wrote the preface, eighteen years older than she was when Percy died. She grants that ‘there is the stamp of such inexperience’ in all his production, for ‘the calm of middle life did not add the seal of the virtues which adorn maturity to those generated by the vehement spirit of youth’. On the other hand, Mary notes that her husband was a ‘martyr to ill-health’, attributing his heightened sensitivity to ‘constant pain’, which made this ‘perfectly gentle’ man often irritable and overexcited. Mary reports that the day before his untimely death Percy declared ‘If I die to-morrow I have lived to be older than my father’, meaning that his body had accumulated in less than thirty years more sensibility and feeling than many others could expect to have in much longer lives. Live fast, die young… and leave a sadly destroyed body and an inconsolable widow. A tragedy, really.

Percy Shelley’s family background is that of the gentry portrayed in Jane Austen’s novels. Percy’s paternal grandfather, Bysshe Shelley, was 1st Baronet of Castle Goring (a baronetcy is the lowest title in the aristocratic hierarchy; baronets are commoners with a right to be called Sir). His upward social mobility and that of his son Timothy were secured by means of advantageous matches with rich heiresses (the same tactic followed by Byron’s father). It is often forgotten that upper-class patriarchal masculinity treated sons as chattel to be traded with other equally powerful families, and this is what Percy resisted.

Initially the relationship with his father was good, as proven by the fact that Sir Timothy financed the first four volumes his son published (two collections of poems, one of them written with his sister Elizabeth, and two Gothic novels). A disastrous turning point came, however, when Percy, then nineteen, married sixteen-year-old Harriet, a schoolmate of his sister Helen and the daughter of a coffee-house owner. If the Westbrooks thought the match would guarantee their daughter’s financial and personal happiness, they were quickly proved wrong. Sir Timothy reduced Percy’s allowance to a minimum and the couple survived, together with Harriet’s sister Eliza, mainly by borrowing far beyond their means. To make matters even worse, Percy had got himself expelled from Oxford shortly before eloping with Harriet for having written the pamphlet ‘The Necessity of Atheism’. He had no degree, no qualifications, and no way of entering any of the gentry-sanctioned careers for men.

Romantic legend has presented the relationship between Percy and Mary as a beautiful love story but nothing could be further from the truth–it was, at least at the beginning, quite a sordid affair. Percy originally met Mary when she was fourteen and he, then nineteen and recently married, was a visitor in William Godwin’s home. Mary’s father was happy enough to receive money from his admirer but he was outraged when, two years later, Percy abandoned Harriet to elope with Mary, then sixteen, to Europe. They may have married there but, if this happened, then Percy became a bigamist. Harriet had his second child (Charles; the elder was Ianthe) a few months before Mary had her first with Percy, a premature girl who did not survive. Mary and Percy could finally marry legally in England in 1816, a few weeks after Harriet drowned herself in Hyde Park’s Serpentine. She was then heavily pregnant, probably by a lover, not Percy. Sued by Eliza, Harriet’s sister, Percy lost custody of his two children, who were put in foster care. One can imagine Sir Timothy’s disgust at his son’s behaviour, though he did not come to the rescue in any way, leaving Harriet’s children unattended.

In the preface Mary writes that Percy ‘spurned’ his privileges because he foregrounded his ideological duties. ‘He was generous to imprudence, devoted to heroism’, she writes. I will not deny his idealism, but it is important to note that once Sir Bysshe died in 1815, Percy became the beneficiary of an annuity of about £1000. This was not much in relation to the lifestyle of his social circle, which is why Mary and he eventually moved to Italy, where they frequently coincided with Byron. When Percy died, Mary depended on her work as a writer (she published other novels, not only Frankenstein) and on the rather limited help which Sir Timothy gave her for the upkeep of Percy Florence. It seems that one of his conditions was that she never use her married name in her publications, to prevent any connection with the surname Shelley (see

The anonymous author of the article I have referenced calls Sir Timothy a ‘mean-spirited, hard-hearted’ man and a ‘forsaker of genius’, an expression I have found nowhere else on Google. This seems a fair judgement, particularly since Sir Timothy was indeed aware of his son’s literary talent. The life he intended for Percy was a repetition of his own: a political career as an MP in some rotten borough under the protection of an aristocratic patron, and marriage to a landowning heiress. It is easy to see why an idealistic youth like Percy would reject this plan but, of course, the downside of his rebelliousness is that Shelley always depended economically on the men of his family, both his father and his grandfather. This great defender of the workers of England never worked to earn a living, though I grant that he did much work on the literary front. In contrast, another idealistic young man decided decades later to make the most of his father’s money by running his factory and diverting its funds to start a political revolution. I mean, of course, Friedrich Engels (1820-1895).

Shelley’s idealism and commitment to the cause of freedom are, then, respectable, but also the product of his class and privileged circumstances. Mary celebrated her late husband in her preface and the two anthologies, but I wonder how she felt about Harriet. It is hard not to sympathise with this poor woman and her children. As a worker’s daughter, I myself have a great deal of mistrust of upper-class individuals presenting themselves as liberators, much more so of those who never did a day’s paid work in their lives. I may value Percy Shelley’s poetry (I really do) and I might accept that he was not as selfish as my teacher painted him. Still, I have many doubts about Shelley, beginning with whether he really deserved all the love Mary put into the task of ensuring his immortality. And I wonder whether he would have done the same for her.

I publish a post every Tuesday (follow @SaraMartinUAB). Comments are very welcome! Download the yearly volumes from: My web:


March 18th, 2019

In a hilarious moment of the two-part documentary The Scandalous Adventures of Lord Byron (2009), presenter Rupert Everett discusses with Donatella Versace–as they wait for her butler to announce dinner at her luxurious Milan home–whether Byron (1788-1824) was really as handsome as so many contemporary testimonials claim. At this point, Everett has already seen diverse portraits and has even donned the same Albanian dress that Byron wears in the famous painting by Thomas Phillips, now at the National Portrait Gallery. Seeing the handsome Everett look rather ridiculous in it, the spectator might conclude that Byron was indeed a man of good looks and even better poise; also, a man who controlled every portrait made of him, as we control our image in our Instagram accounts. He specifically wanted to look manly, a man of action and not a poet, as Everett notes, and also to disguise a limp caused in childhood by polio.

Everett and Versace note that notions of beauty were very different in the early 19th century, suggesting that Byron’s physical appearance would not seem so extraordinary today. I find this quite tantalizing! Everett quips that, on the other hand, Byron must have looked stunning at a time when having all your teeth while still young was not common. At a later point in this second episode the tone changes and becomes a bit less flippant. Rather subtly, Everett’s comments start defending the view that by the time Byron died, aged only 36, he was past his prime. The infection that killed him was an accident of life, perhaps a preventable one, but the documentary hints that Byron’s choice of malaria-infested Missolonghi as his home in Greece was somehow suicidal. It is implied, in short, that had Byron lived on, his life would have been a sad, gradual fall into physical decadence. This is, at the same time, part of the Byron myth: live fast, die young, and conquer eternal fame. I’m not sure about leaving a beautiful body to bury.

In life, Byron enjoyed fame but he was mostly beset by celebrity and notoriety–and, of course, scandal. It is fitting that Everett, an openly gay man with a pansexual past, presents Byron’s biography, for George Gordon (his actual name) was a product of the sexual prejudice of his time or, rather, of its hypocrisy. Just as it seems impossible to discuss Coleridge without mentioning his drug addiction, it seems impossible to discuss Byron without alluding to his sexual adventurousness. Likewise, whereas no biographical sketch of Wordsworth is complete without his sister Dorothy, no portrait of Byron can be offered without associating him with his half-sister Augusta Leigh (his father’s daughter by his first wife).

Byron might scream to high heaven that they did not commit incest and that Augusta’s third child, Medora, was her husband’s and not his, but we would still doubt his word, for that is what celebrity and scandal are about: constructing people as we want them to be, not as they are. With lights and shadows: incest may be too much even for us, but the pansexual man Everett describes is more to our taste. Funnily, as we dismantle the sexual prejudices of Byron’s time (serious enough to land you in jail for sodomy), we have started criticizing the man for not being handsome enough, and even for being at times in his life rather overweight. Duncan Wu, in particular, offers an image of an effeminate, flabby, shortish, stout Byron totally at odds with the connotations that the word ‘handsome’ awakens in our minds.

Byron was an aristocrat and, though not an extremely rich man (he lived mostly on borrowed money, like most of his class), he led a life of ease and luxury that seems to belong in the 18th century rather than the early 19th. He may be celebrated as a great national hero in Albania and Greece, but his mildly Whig politics in defence of nationalism (and even, at one point, of the anti-Industrial Revolution Luddites) were not based on very strong beliefs. It seems, rather, that in a world in which nobody cared for anyone beyond the national borders, Byron’s curiosity and personal presence in remote lands was in itself welcomed as a heroic act. His contribution to the independence of Greece was, at best, very marginal, and during his time at Missolonghi in the early 1820s he seems to have been seen as just a rich English lord who could easily be milked for his money, if you excuse the expression. He did not die a hero’s death in battle, as one might expect from all the exaltation; he simply wrote verse that vaguely endorsed the right of Greece to be a free nation again, on the strength of what it had been in the classical past. He died, as I have noted, of a fever variously attributed to an imprudent ride in the rain or a bug caught from his pet dog.

If abroad he was a hero, at home Byron was a celebrity of the kind that the Daily Mirror enjoys praising and demolishing in equal parts today. And this is what happened to him: he found himself suddenly famous, as he wrote, after the immense success of the first two cantos of Childe Harold (1812), only to be completely ostracized just four years later. In 1816 Byron had to leave England for ever following the scandal of his separation from his wife Annabella because of the rumours about incest with Augusta. Byron was probably one of the worst husbands on record and the separation makes complete sense: his wife, whom he had married for her money, as his father had married his two wives, just could not endure the constant humiliation of Byron’s active extramarital life. What is hypocritical is the scandal. Byron often claimed that he had never seduced any woman because he didn’t have to: basically, the women of the Regency period who chased him were the first groupies in literary history, and no wonder, since Byron has often been compared to a rock star. One of the harassers, Lady Caroline Lamb, defined Byron as ‘mad, bad, and dangerous to know’, but probably this is who she, not he, was.

The good looks, the hectic search for sexual pleasure, the journeys to distant lands, the scandalous married life, the more than likely homosexuality and the incest with Augusta… all these are sufficient not for one but for several celebrities. What makes Byron a radically different celebrity from those plaguing our time is that his fame was based on his poetry, for which he worked much harder than he pretended. The sales of his work from Childe Harold onward were in the first years high enough to push the best-selling poet Walter Scott out of the market, to the point that Scott became a novelist (though he published his early novels anonymously, as if ashamed that they were a second-rank, mercenary product). Byron was particularly well known for his narrative verse and he continued enjoying that success even after he had been socially ostracized, from his exile in Switzerland, Italy and finally Greece. To understand how relatively lucky he was, we need to think of the far more tragic fate of Oscar Wilde, a man as flamboyant and sexually curious as Byron but who could not escape, as Byron did, the harsh action of British homophobic legislation. Wilde’s exile in the late 1890s was a much sadder story indeed but, then, he was no aristocrat.

Byron’s main cultural legacy, beyond his poetry and even beyond Literature, is the Byronic hero, a construction that was appended to his person by his readers whether he wanted it or not. We cannot know what Byron was really like but, just like his looks, his personality also elicits doubts. He insisted for years that he was not Harold, the character that first expresses the Byronic temper which other male characters inherited–restless, moody, pessimistic, curious about people yet a loner, interested in pleasure but little capable of sustained love. Yet Byron eventually gave in and granted that in many ways the Childe’s pilgrimage was his own, and Harold a thin mask for himself. Indeed, Byron is all over his poetry, also as Manfred and Don Juan and most of his main male characters, but this is not at all singular. Look at how Wordsworth mined his own youth for The Prelude. I see the appeal of the Romantic construct and why the Byronic hero soon surfaced in many other narratives (mainly novels and plays), giving us Heathcliff, but also Dracula, and even Christian Grey. What puzzles me is what kind of audience Byron had and how they could follow him at all.

I have just finished reading Childe Harold, all four cantos, and I’m not sure how to describe the experience. Last week I told my students that Romantic poetry was published in its time with no footnotes and that the original readers did not expect a critic to decode the meaning, or any obscure passages, for them. We had read Wordsworth’s passages in Lyrical Ballads introducing some of his poems, but these were aimed at describing the circumstances that inspired each poem, not at explaining the poem itself. Likewise regarding Coleridge and “The Rime of the Ancient Mariner”. We listened in class to Ian McKellen’s beautiful reading of this long narrative poem (about 30 minutes) and, though I stopped now and then to make sure students could follow the plot, in general the text was well understood. I’m not in favour of the kind of teaching that turns reading poetry into a forensic exercise, of which you can find plenty on YouTube (a lot from India, for whatever reason!), and I’d much rather my students enjoyed the poems they should know about. With Byron, however, I simply don’t know what to do. The booklet we are using includes all of Manfred and the first canto of Don Juan, not Childe Harold, but even so the point is the same: Byron’s poetry is just too obscure for us today, here in my second-language, second-year classroom.

I did try to read Childe Harold without checking Byron’s own lengthy notes (mostly on points of History, always showing an amazing erudition) or the notes of his editor, which even include notes to Byron’s notes! It was just impossible: it was like reading through glasses that would suddenly cloud and blind me, but also suddenly disappear altogether–a veritable rollercoaster. Thankfully, Rupert Everett’s documentary follows the journeys by Byron reflected in this long poem and I could more or less make sense of where Harold was at given points, but without that aid (and the notes) I would have been quite lost. To my surprise, even though I expected a very intimate portrait of the Byronic hero to connect the diverse observations of the pilgrim, I found the stanzas oddly detached except in the few passages (mainly in canto four) where Harold bemoans his fame and wonders what it will be like once he dies. I positively missed Wordsworth, whom Byron very much disliked, in the stanzas about the landscapes and even the cities. And I had a really tough time understanding allusions to personalities of the 1810s, even with the editor’s excellent notes. There was also the problem of when to read the notes, for they constantly interrupted the flow of the lines; I eventually settled on reading them after each stanza. When I came across six stanzas without notes, it felt like being on a sailing ship running before a full gale.

Reading the negative comments on Walter Scott’s first novel, Waverley (1814), I came across a disgruntled reader who, hating this pompous piece of fiction as much as I do, proposes that we ‘decanonize’ Scott. I think that we are already in the process of decanonizing Scott, who has not been included in our second-year 19th-century courses here at UAB since at least 1994. Preparing the lectures on Byron, I realised that I wasn’t even sure when to tell my students about Scott: now, commenting on his poetry together with Byron’s, or later, when we teach Jane Austen. It is very clear to me that an English graduate must know who Scott was, but I would not include one of his novels in the syllabus, for that would probably alienate rather than interest students. What I fear is that we have reached the same point with Byron: students must know who he was and what he did, but can they read his poems at all? Perhaps the lyrical pieces like ‘She Walks in Beauty’, but these hardly give a glimpse of the giant he was.

Arguably, Byron (and Scott) are a case not so much of decanonization as of increasingly difficult readability. It’s not the same. Robert Southey may be canonical but we just do not include him in our syllabi, either his poetry or his person, whereas, I insist, knowing about Byron and Scott is essential. This is a typical conundrum for all teachers: how should we teach? On the basis of literary archaeology or on the basis of accessibility? It used to be the former in the ancient times when philology reigned, but the more pragmatic current approach tells me that Byron is approaching, if not total, at least partial decanonization.

I’m not sure that I’m sorry… but that must be my class (and gender) prejudice against privileged male aristocrats, no matter how handsome.



March 11th, 2019

It has become commonplace to see Samuel Taylor Coleridge (1772-1834) through the lens of his drug addiction, which is why it is perhaps quite wrong to begin this post in this way. His case, however, must be contextualized and his addiction treated as an ailment similar to that currently killing 130 Americans every day and plaguing hundreds of thousands more (see With an important difference: whereas in Coleridge’s time addiction to opium, and mainly to its derivative laudanum, was poorly understood, in the 21st century our experience of drug abuse is already very extensive. This did not prevent greedy pharmas in the 1990s from flooding the market with potent analgesics said to have no side effects while they fooled the corresponding Government agencies.

Coleridge, like most current victims of the American opioid overdose crisis, suffered from chronic pain (connected with rheumatism) and simply needed relief. He most emphatically did not take drugs for recreation and, if he had any visions attached to their use, this was not the outcome of any experiment–it was a side effect. Trying to make his body more comfortable, Coleridge fell into a downward spiral of drug abuse that even his closest friends misread as vice. Wordsworth broke his long friendship with Coleridge for that reason (though they later reconciled) and, if we have such a vast textual production from him, this is only because one Dr. Gillman took pity on his unfairly vilified patient. This man and his family provided Coleridge with a home at their Highgate residence in London between 1816 and 1834, helping their illustrious guest to control his addiction as far as possible and allowing his mind to shine free from that burden (at least temporarily) to write, among other works, his Biographia Literaria.

A constant in Coleridge’s life is an insatiable craving for knowledge. His father was an Anglican reverend but also the headmaster of the local King’s School at Ottery St Mary, Samuel’s birthplace in Devon. From Coleridge’s remembrance of his early childhood as a constant stream of reading, we may deduce that his father encouraged this activity. Reverend Coleridge died when Samuel was eight and the boy, the youngest of ten siblings from two marriages, was sent to boarding school at Christ’s Hospital (in London), an experience he did not relish in general. With one important exception, recalled in Biographia Literaria: in that school he “enjoyed the inestimable advantage of a very sensible, though at the same time, a very severe master, the Reverend James Bowyer”.

This man not only gave his young students a formidable education in the classics–combining them with Milton and Shakespeare–but was also an adamant editor of his pupils’ written work, teaching them to aim at precision. As Coleridge recalls, “he showed no mercy to phrase, metaphor, or image, unsupported by a sound sense, or where the same sense might have been conveyed with equal force and dignity in plainer words”. Bowyer did not take half measures: if two faults were found, “the exercise was torn up, and another on the same subject to be produced, in addition to the tasks of the day”. Coleridge still had nightmares in adulthood about this man’s severity, but he acknowledged his “moral and intellectual obligations” to him. He and his classmates, Coleridge adds, reached University as “excellent Latin and Greek scholars, and tolerable Hebraists”, though this was “the least of the good gifts, which we derived from his zealous and conscientious tutorage”. Reverend Bowyer, though not the kind of teacher we celebrate today, gave his brilliant student Samuel the foundations he needed for his extremely rich intellectual life.

Not all went well at Cambridge for Coleridge, for he never got a degree. Besides, he wasted a year of his youth in the King’s Light Dragoons, a regiment in which he secretly enlisted as ‘Silas Tomkyn Comberbache’. He was discharged by reason of insanity (as the regiment papers attest), though other sources note that he was just the most inept soldier ever. Others claim that his brothers rescued Samuel from a personal crisis, possibly provoked by an amorous disappointment when one Mary Evans rejected him.

Biographer Richard Holmes explains that Coleridge had many talents but he was above all a fascinating talker. Also, a rambling one, which means that his listeners were often amazed but also confused by the fast flow of his ideas. Coleridge was unable to write them down as they left his mouth and, besides, his manuscripts are known to contain many borrowed ideas he did not acknowledge or, in plain words, many plagiarisms. In any case, whereas Wordsworth’s main talent was as a poet, Coleridge was a much vaster intellect.

To my surprise, he was for a while an itinerant Unitarian preacher and seems to have regarded himself mainly as a theologian, though this is by no means how we think of him today. He was a philosopher deeply influenced by German idealism (which he imported into Britain), a psychologist avant la lettre specialised in the workings of the Imagination (or creativity) and of literary creation, and a great literary critic (who, among other achievements, rescued Hamlet from the trash-can of literary history). Wordsworth gave us in The Prelude a whole treatise on the making of the poet; Coleridge gave us in his prose work Biographia Literaria an even more extensive exploration of the same topic. Some of his passing remarks have become key concepts in current culture: the notion that when we read Literature we ‘willingly suspend our disbelief’ comes from a remark in Biographia about Wordsworth’s ‘Preface’ to the Lyrical Ballads.

The question of Coleridge’s source of income must also be considered for, as I have been arguing here, although Romanticism created a literary market that enabled authors like Walter Scott or Lord Byron to invent the very idea of the best-seller, it also depended on leisure afforded by rents or, in this case, patronage. Coleridge abandoned his duties as a Unitarian minister (in 1798, when he published Lyrical Ballads, aged 26) because his friend Thomas Wedgwood provided him with an annuity. Wedgwood, credited today with possibly being the first British photographer (see, was the son of Josiah Wedgwood, founder of the world-famous pottery firm that carries his name. Josiah was a most gifted businessman but also a patron of causes such as abolitionism, and his son, also named Josiah (Tom’s brother), continued the family tradition of offering patronage to some artists. Apparently, the annuity was withdrawn in 1812, following Coleridge’s outing as a drug addict (this is attributed to Thomas de Quincey’s Confessions of an English Opium Eater, but that book came out in 1821). There is an article (available from JSTOR) about the Wedgwood annuity, but this is more detail than I can supply here. I simply don’t know, then, how Coleridge survived after 1812; my guess is that Tom, and other friends, still helped. I don’t know either what the arrangement was with the ultra-friendly Dr. Gillman. Interestingly, patronage used to be regarded as a potentially humiliating relationship of dependence–hence the word ‘patronizing’–but is now back with crowdfunding and platforms like Patreon. Today, I’m speculating, Coleridge could have made a living in this way, though he could also have been offered an academic position as resident poet or creative writing teacher. Remember: he had no degree and could never have become an Oxbridge don.

Coleridge’s private life was not very happy–or, rather, it was rich in friendship but not so rich in women’s love. He married in 1795, aged 22, a girl called Sara Fricker, simply because his good friend the poet Robert Southey had married her sister Edith. Both couples intended to found a utopian community in Pennsylvania called Pantisocracy, but the mad scheme simply collapsed. Sara and Samuel had four children and separated in 1808, when he was 36. She lived with her sister’s family and later with her son Derwent (check They never divorced.

It is odd to think of Sara struggling to make ends meet while her husband enjoyed the beautiful English landscape or stayed away for a year in Germany, all of it with the Wordsworths. Their baby Berkeley died while his father was away, and Samuel did not return home for the funeral. The eldest, Hartley, was a constant problem for his parents. I should have thought that Dorothy Wordsworth was Samuel’s secret love, and the most evident way to bond with William beyond friendship but, apparently, Samuel fell in love instead with William’s sister-in-law, Sara Hutchinson (his wife Mary’s sister). Actually, this happened in 1799, before William married Mary, and the unrequited love story continued for many years. Sara also lived with the couple (and with Dorothy) until her death in 1835, so there was much occasion to meet. She was a good friend to Samuel but, for whatever reason, she never returned his love (see She never married. Samuel died in 1834 having engaged in no other significant relationship with a woman.

Samuel Coleridge did not have a very high opinion of himself. He refers in Biographia Literaria to his “constitutional indolence, aggravated into languor by ill-health; the accumulating embarrassments of procrastination; the mental cowardice, which is the inseparable companion of procrastination, and which makes us anxious to think and converse on any thing rather than on what concerns ourselves”. His bouts of depression and the constant effect of the drugs (and of the many attempts at withdrawal) cannot have helped him to develop steady work habits, but he was certainly a far more industrious individual than he gives himself credit for. Under the Wedgwoods’ patronage he spent that frantic year in Germany, furnishing his head “with the wisdom of others. I made the best use of my time and means; and there is therefore no period of my life on which I can look back with such unmingled satisfaction”. He attended lectures at diverse universities on an astonishing variety of subjects as he improved his German. And he never stopped learning, which is why Coleridge had opinions on all subjects. He comes across, in short, as a man in intense conversation with himself, of which the rest of his contemporaries were witnesses rather than participants (except Wordsworth, for a time). We possibly have in his writings only a fragment of what his mind could do.

I haven’t yet mentioned any of Coleridge’s poetry. I’m still processing Iron Maiden’s fifteen-minute song based on ‘The Rime of the Ancient Mariner’, and the heavy-metal crowds singing the lines in concert (check the video on YouTube). Amazing, really. Also, the wonder of listening to Benedict Cumberbatch read “Kubla Khan”. That’s the beauty of today’s digital world: it offers much more than kitten videos and ranting if you only care to seek it.

Coleridge would have loved the internet since he was, in a way, his whole life a student–an academic outside academia, so to speak, and not only a poet. He led a precarious life on the financial front and his body kept his mind chained to drug abuse for long years. Even so, he managed to produce extremely relevant literary and intellectual work out of insatiable curiosity. This is why it is so painful to read the many comments that accompany the videos on the Romantics on YouTube.

Not the Iron Maiden video, which everyone watches for pleasure, but videos such as Peter Ackroyd’s BBC mini-series ‘The Romantics’, which many students watch as compulsory homework. A man, as disappointed as I am by the rejection of education, bemoans the ‘lack of intelligence’ of the students who complain that Ackroyd’s series is boring. An irritated college student replies that not enjoying something does not mean that you’re not intelligent. I agree: it means you’re not curious–and this is the most common curse today. The albatross around the necks of most students. Coleridge, as his year in Germany shows, was immensely curious. Luckily for him, he had patrons that allowed him to take his curiosity as far as he could and, so, he connected ideas in new ways that have shaped our own world. I wonder what he would make of those who, given the chance to learn by their parents and all of society, reject it–though I think I know.

Romanticism was, let’s recall this, in rebellion against many traditional ideas but, as Coleridge’s case shows, it was a very well-read rebellion, passionate both in feelings and in thoughts. This is something to remember: education empowers individuals and, ultimately, changes the world. Boredom should play no part in this equation. I very much doubt that Coleridge was ever bored. Or boring.



March 4th, 2019

I shared with my ‘English Romantic Literature’ class the video showing Jon Cheryl perform his musical version of William Blake’s ‘The Tyger’ ( and also Michael Griffin’s song ‘London’ ( based on Blake’s eponymous poem. We agreed that both songs are cool and that, by definition, an author whose work can be enjoyed in this updated way is cool. Blake is, no doubt, cool, just as Shakespeare and the Brontë sisters are cool. Other authors are uncool, and I believe that William Wordsworth belongs to that class.

Julien Temple, who was once a cool Brit director (he shot many music videos for stars like David Bowie), made in 2000 a film called Pandaemonium about Wordsworth and Coleridge’s friendship during the time of the French Revolution. Wordsworth was played by John Hannah (how uncool is that?) and Coleridge by Linus Roache (cooler!). The scriptwriter was Frank Cottrell Boyce, who later wrote the definitely very cool account of Manchester in New Order’s heyday, 24 Hour Party People (2002). I haven’t seen Temple’s Pandaemonium, but an instance of how hard it is to make its subject matter cool is that, apparently, the end credits roll to the sound (or noise) of Olivia Newton-John’s song “Xanadu” (1980), which vaguely alludes to Coleridge’s “Kubla Khan”. Viewers’ reviews on IMDb are mostly positive (despite the middling 6.6 average rating) and the film might be worth spending two hours of your life on. Yet, one of the most enthusiastic commendations reads: “A splendid effort which will likely be most appreciated by those into classical literature–particularly 19th century poetry”. This is like recommending, just to name a random first-rate movie, The Right Stuff (1983) mainly to people who are interested in the history of NASA. A movie either works or it doesn’t, and if it appeals only to a highly specialised academic audience, it doesn’t. A more candid viewer writes: “With its utter disregard for the historic record, Pandaemonium attempts to do for England’s greatest Romantic poets what Monty Python and the Holy Grail did for the Arthurian legends–but (sadly) without the wit or the humour”.

In Pandaemonium, in any case, and also in their friendship, coolness fell on the side of Coleridge, with Wordsworth playing second fiddle; he always seems to have been the kind of guy you know is not really into it even when you’re having the greatest fun together. The wonder is not that their friendship started, for opposites attract, but that it lasted for so long and that it was even resumed after a serious falling-out. I very much suspect that without cool Coleridge–and most likely without Dorothy Wordsworth, the adoring sister–Wordsworth would not be the Wordsworth we know today. He would be, perhaps, Robert Southey (who?).

Much of Wordsworth’s uncoolness has to do with his living to old age and in good health. I am aware that this sounds callous and that the Rolling Stones are living proof that one can be a youthful rebel well beyond youth: Mick Jagger and Keith Richards are both 75. If Byron and Shelley had lived to old age instead of dying in absurd, preventable circumstances at, respectively, 36 (an infection caught from his dog) and 29 (drowned after sailing in bad weather), they would probably have behaved like Jagger and Richards. The problem with Wordsworth is that he only had that rock-star profile by association with Coleridge and, once he married his childhood sweetheart Mary Hutchinson in 1802, aged 32, he became the anti-Romantic myth: a steady family man. Even his fathering an illegitimate daughter ten years before, during his stay in post-Revolution France, announced that this is who Wordsworth was at heart. He was rash enough to embark on a passionate affair with a Frenchwoman called Annette Vallon, the pretty daughter of a barber-surgeon, but also prudent enough not to marry her when she got pregnant. He was, it seems, a responsible but detached father to the girl, Caroline, but she was kept apart from her English siblings.

Keeping a family of five children, a wife and a sister (Dorothy never married) on the money made by selling poems is not easy. To be precise, Wordsworth never really lived on his modest earnings as a poet. To be even more precise, Wordsworth mainly lived off the rents generated by family legacies. His father, a lawyer, was the legal representative of an aristocrat, and it was the money this man paid to settle a long-standing debt that generated the rents allowing Wordsworth to marry. Wordsworth, incidentally, had a BA from Cambridge and his family, especially the uncles who paid for his education after he was orphaned at age 13, expected him to become a parson. He, however, would take no profession. Only in 1813, at the tender age of 43, did Wordsworth accept an appointment as Distributor of Stamps for Westmorland, rewarded with a stipend of £400 per year, which finally ensured the financial stability of his family. They then moved to a beautiful house, Rydal Mount, near Ambleside in the Lake District, where the Wordsworths lived between 1813 and 1850 (it’s now open to visitors). However, the celebrity Wordsworth who received there an endless stream of visitors was not the same man who had written the poetry he was known for but someone else, his mature counterpart.

By the time Wordsworth published Ecclesiastical Sonnets (1822) the transformation was complete. His daughter Catherine and his son Thomas both died in 1812–she in June, he in December–and this must have been a terrible blow, no matter how often we tell ourselves that in past times parents assumed that some of their children would die in childhood. In fact, Wordsworth took the position as a civil servant to make sure that his remaining three children could enjoy the best of lives. Yet, as most critics agree, something went amiss in his poetic career at the time because of his job. It took me a while to understand what exactly Wordsworth did. Anne Frey explains in British State Romanticism: Authorship, Agency, and Bureaucratic Nationalism (Stanford University Press, 2010, p. 55) that Wordsworth did have an office in town and performed numerous professional duties, though not those of a full-time job. “While certainly compatible with Wordsworth’s idea of himself as a professional poet, however”, Frey writes, “the job necessarily took some time away from Wordsworth’s vocation”. Frey’s sly wording suggests that Wordsworth was not really a professional poet, but she struggles not to reveal a basic fact: his poetry emerged from youthful leisure (no matter how hard he worked at his verse) and was far less compatible with an adult working life. In contrast, Blake managed to produce his poems after his daily work routine as an engraver was over, which does sound professional.

I came across a very illuminating article by Andrew Klavan (originally published in 2009 in Romanticon and reproduced online) titled “Wordsworth’s Corpus Reflects the Growth of a Conservative’s Mind”. Klavan grants that “Wordsworth’s conservatism hardened as he grew into middle age, sometimes becoming small-minded”. In 1829 (he was then 59) he protested against the Catholic Relief Act, which allowed Daniel O’Connell to become the first Irish Catholic to serve as an MP. Wordsworth was a strict Anglican all his life, and Anglicans like him greatly feared the impact of Catholicism on politics and social life. Neither did he support the 1832 Reform Act, the first to extend the franchise among English men (though only within narrow limits). This is typical: the youthful supporter of revolution becomes an adult conservative when changes in family, personal and professional life make political, economic and social stability desirable. In even simpler terms: one becomes more conservative the more one has to lose. Klavan contends, nonetheless, that Wordsworth regained part of his revolutionary fire later on. In 1846, aged 76, he gave his support to the democratic Chartist movement, though warning that rioting would not help the cause. By then, of course, he was a gentleman pensioner of leisure, finally free to indulge his youthful ideals. And the times were no longer Romantic but Victorian.

Wordsworth was given a Civil List pension of £300 a year in 1842; he then resigned from his position as Stamp Distributor. The next year, 1843, he was appointed Poet Laureate, aged 73, replacing Robert Southey and after having received honorary doctorates (from the Universities of Durham and Oxford) in the late 1830s. In the last years of his career as a poet, at the height of his celebrity, Wordsworth worked on his massive autobiographical poem ‘The Prelude’, which was only published posthumously in 1850 by his wife Mary. Actually, Wordsworth had started writing this autobiographical poem back in 1798, the year when, aged 28, he published Lyrical Ballads with Coleridge, and he kept adding blank verse lines to it until it grew to 14 books, a total of 7863 lines. This does not mean that the poem covers Wordsworth’s whole life–as the title suggests, it deals mainly with its first decades and it is, in essence, a poem on the ‘Growth of a Poet’s Mind’, as the subtitle announces. There is complete critical consensus that ‘The Prelude’ is Wordsworth’s greatest poem, but you should read the comments by readers at GoodReads before considering whether you want to read it. I must confess that I have failed to find a valid reason to go through so much verse and no, I’m not ashamed to make this confession even though I teach English Literature. Some other time, perhaps.

No Romantic poet is complete without an oddity in his biography and in Wordsworth’s case this is supplied by Dorothy’s constant presence. There were three other siblings (John drowned at sea in 1805) but she and William, who was born only one year before her, seem to have been constant childhood companions until their father died in 1783. The girl, aged 12, and the boy, 13, were then sent to the homes of different relatives and were only reunited in 1795, when she was 24 and he 25. They never separated again, sharing their diverse homes even after William married Mary. Many have read their relationship as incestuous and a few sexist scholars have even blamed hysterical Dorothy for it, presenting her as a needy woman who hindered William’s path with her demands. This sexualized view of their siblinghood is, I think, plain silly and only reveals that sex occupies too much space in our minds. William and Dorothy were comfortable with each other, they shared many ideas and observations also present in his poems (as her journals have proved), and they were perfect companions at a time and in a society in which a man and a woman could enjoy friendship in total freedom only as siblings. Mary welcomed her sister-in-law into the family home and the couple took good care of Dorothy when, in the 1830s, she became an invalid. She died in 1855, outliving William by five years. It’s a bitter-sweet tale.

A surprised GoodReads reader declares: “Turns out I like ‘The Prelude’ a lot. But I still wouldn’t invite Wordsworth to a party at my place”–yet another sign of his uncoolness. Wordsworth might then be a category unto himself: the kind of author you profoundly respect but do not enthuse about; the type you admire because you can see the man is making an effort. He is not Milton–I still haven’t met a person who would like to meet Milton for coffee, much less at a party–but neither is he, definitely, Blake. He is Wordsworth.

Coolness moves in mysterious ways.

I publish a post every Tuesday (follow @SaraMartinUAB). Comments are very welcome! Download the yearly volumes from: My web:


February 25th, 2019

Tomorrow I’ll be introducing my class in ‘English Romantic Literature’ to the pleasure of discovering William Blake (1757-1827). I haven’t taught this course in fifteen years and, so, I needed to re-discover Blake myself, to re-learn the basics I must transmit. Within limits, careful as usual not to let myself be carried away and spend on preparation five times the three hours of lecturing, or more. We lead hectic lives and even the most interesting tasks need to be restricted, or else we risk producing no new research at all.

I’ll mention first a 1995 episode of The South Bank Show devoted to Blake, available on YouTube. The documentary is conducted by novelist and biographer Peter Ackroyd, not by chance: he had then just published his well-received biography Blake, part of a long list that began in 1863 with Alexander and Anne Gilchrist’s pioneering work (of which more later). The biggest surprise in this documentary is, no doubt, the presence of notorious American Beat poet Allen Ginsberg singing Blake’s poems as he plays a vintage harmonium. This, he explains, is how Blake would have presented his poems to an audience, since for him the figure of the bard of ancient times was essential. Funnily, even though Blake’s best-known works are Songs of Innocence and Songs of Experience, I had missed that the word ‘songs’ has a literal meaning. Leaving this aside, the documentary, about 50 minutes long, made me wonder what the point of classroom lecturing is in the times of YouTube and, generally, the internet. My lectures will borrow, after all, from online sources, including Wikipedia and Google Books. And, of course, the simply splendid Blake Archive.

In my time as an undergrad there was no internet, strange as this may sound to current undergrads. I was very lucky, nonetheless, because, having heard about Blake in some introduction to English Literature, I could see some of his original drawings in a stunning room of London’s Tate Gallery. This was in the mid-1980s, before Erasmus, when every girl student who wanted to learn English spent a year as an au-pair. A decade later, in 1996, ‘La Caixa’ staged a major exhibition of Blake’s works in Barcelona, which was a marvel to see. Nothing compares to seeing the originals, but the Blake Archive–founded also in 1996 as a joint international project by the Institute for Advanced Technology in the Humanities and now run by the Carolina Digital Library and Archives–has digitised practically everything by Blake that the ravages of time have spared. This is a great little miracle, considering that he made and sold very few prints of his major works and that his best-selling work sold about 30 copies.

Browsing through the Archive, I wished I could be free from the onerous task of assessing my students–I would gladly give all of them an A+ if they promised to read the Romantic poems I have selected for study and spend a few hours enjoying online wonders like the Archive. Honestly: how can an exam or any alternative exercise replace the joy of admiring Blake’s work? What can I possibly say that makes a lecture more exciting? I could, naturally, use my classroom time to show a selection of what is in the Archive (or The South Bank Show episode) but public sharing doesn’t work. Somehow, one must be alone to enjoy the feeling of personal discovery; ideally, the teacher’s task should only be pointing out where to find the best resources. On Blake or anyone else.

Some places where Blake is present are obvious (Wikipedia!), others unexpected. Three comments on the YouTube channel offering the documentary named the videogame Devil May Cry 5 as the reason why these viewers were interested in Blake. As it turns out, in Capcom’s new release of their popular videogame, just launched this week, there is a new character called V, who is fond of quoting Blake. This is great but no novelty: William Blake often crops up in popular culture. For instance, he is a central element in the first Hannibal Lecter novel by Thomas Harris, Red Dragon (1981), made into a film as Manhunter in 1986 and filmed again, as Red Dragon, in 2002. Harris’s serial killer (not Lecter but another man) is so obsessed with Blake’s series of watercolour paintings (1805-1810) for the Book of Revelation that he has a tattoo of the red dragon covering his whole back (he even tries to eat Blake’s original). Check Google for images of English actor Ralph Fiennes made up in this way. I wonder what Blake would think!

The South Bank Show episode does not explain why William Blake, an obscure artist few knew in his own time, has become such a ubiquitous presence. In fact, Blake is remembered because of the biography by Alexander Gilchrist, which I have mentioned before. A reference in the Wikipedia page led me to an excellent article by top-rank biographer Richard Holmes, “Saving Blake” (The Guardian), actually a segment of the introduction to the 2004 re-issue of Gilchrist’s work, The Life of William Blake: Pictor Ignotus. ‘Pictor Ignotus’ means ‘unknown painter’ and we must wonder why publisher Macmillan decided to issue a volume about someone who had been largely forgotten by the mid-19th century, with the exception of some keen admirers. Yet, this is how Blake survived into our times.

The story is worth telling, if only briefly. Gilchrist, born one year after Blake’s death, was a trained lawyer but also a budding art critic. He published a biography of the minor artist William Etty before embarking on the two projects that articulated the rest of his brief life: his marriage to Anne Burrows and his work on William Blake–whom he discovered accidentally thanks to a second-hand copy of The Book of Job. Gilchrist’s subsequent research involved interviewing people who had met Blake, and others interested in him, among them the leader of the Pre-Raphaelite movement, Dante Gabriel Rossetti–a collector of Blake’s work. No wonder, since Blake necessarily appealed to the neo-Medieval spirit of the Pre-Raphaelite Brotherhood. Gilchrist succeeded in completing his investigation and signing the contract with Macmillan, but he died of scarlet fever passed on by his daughter. His distraught wife Anne, a major collaborator in her husband’s work, completed the manuscript, crediting herself only with editorial tasks rather than co-authorship. William Rossetti, Dante’s brother and a major art critic, endorsed the biography, which found a receptive audience. This success started the process of canonization by which Blake eventually came to be studied both as an artist and a poet, as well as his seeping into popular culture, with its endless lists of allusions.

Gilchrist’s many sacrifices to rescue Blake from oblivion raise an important issue: would we remember Blake without him? Or would, inevitably, someone else have fallen in love with his artwork and rescued it? How many other obscure artists are waiting to be rescued in similar ways? And how is it that the Pictor Ignotus of one age can become the star of a later one? It is usually claimed that this happens because some artists are ahead of their times, but in Blake’s case this is a peculiar stance. Blake is perhaps best explained as a belated Old Testament prophet rather than as a modern artist, though it is true that his Romantic pledge to follow his own course rather than the art of his time, and the niche he carved for himself as a unique engraver using his own technique of relief etching, make him closer to us. He was his own person, and this is something we appreciate. As for his heavily religious writing, we tend to downplay it (and woefully misread it), preferring to enjoy on the whole the mystery of his muscular figures and his alluring, vibrant colours.

Here’s a pocket biography. Blake was the child of a middle-class Soho hosier, briefly attended school, as he was a difficult child, and was then home-schooled by his mother. Between the ages of 10 and 14 he attended drawing school, while he continued his domestic education by reading voraciously (the Bible was a central text for him, as was John Milton). At 15 he was apprenticed to the engraver James Basire, formally becoming a professional engraver at 21, even though he was always employed by others, mainly as an illustrator. He married Catherine Boucher in 1782 and the pair enjoyed a happy union for 45 years, flawed only by the birth of a stillborn child and Kate’s subsequent inability to bear children. She was a most valuable collaborator, to the point that Blake trained her as a fellow engraver, besides caring for her husband on the domestic front with no complaint about their poverty. Both worked very hard to turn Blake’s visions and ideas into the illustrated books that transmitted them to posterity (thanks to Gilchrist!). Incidentally, Blake and Kate spent their lives mainly in London, and appear not to have travelled at all (or very little).

Blake had proto-anarchist ideas, which we celebrate today. He maintained that individuals should be free to enjoy life without being fettered by any tyrannical Government or Church. According to him, personal evolution should be encouraged, sexuality fully explored, the body respected as a source of perception indivisible from the soul. Because of these tenets we trick ourselves into believing that Blake is of our times, which he was not. From the age of four, the man constantly had visions of God, angels, spirits, the dead and even the Devil–that was the reason why he spent such a short time in school. Most contemporaries believed him mad, whereas now we tend to call him depressed or, less gently, schizophrenic. Actually, he had the kind of self-mythologizing imagination that others like J.R.R. Tolkien also possessed, with the difference that Blake drew no separation between rationality and his visions. He was not insane at all, just a man comfortable with a kind of mind we now call pathological but that used to be called mystical. Perhaps only Biblical New Agers can truly understand Blake. A New Age approach, however, is not encouraged in our ultra-rational Literary and Cultural Studies.

In many senses, therefore, we profoundly misunderstand Blake. He is, among the artists we insist on calling Romantic, possibly the most resistant to science, having made of Newton his main nemesis. In Newton’s mechanistic universe there is no room for spiritual visions, which science has denied since the Enlightenment. As a child of the 18th century, Blake seemingly sides with the writers of Gothic fiction, who claimed there must be something beyond stark reality. The difference is that whereas they imagined evil monsters–frequently explained away as illusions rather than actual supernatural occurrences–what Blake imagined is not scary but comforting. He claimed to speak with his dead brother Robert on a daily basis, just as widowed Kate later claimed to speak daily to him after his death. Blake is an in-your-face example of a pre-Enlightenment imagination fully aware of Enlightenment rational restrictions, in a way that his Medieval predecessors could not be. It was easy to call him a madman, but also convenient, because accepting that his visions were not a product of disease would be too scary–too Gothic!

Tolkien wrote that although he had been fantasizing about Middle-earth for as long as he could remember, he had no notion of having invented any element in it: when he wrote, he felt as if he were being told what to write. Though a strict pro-establishment Catholic, and not an anti-establishment Dissenter like Blake, Tolkien also turned belief into mythology. I’ll argue, then, that individuals with a strong sense of belief are more prone to accepting the existence of other universes, which the rational Enlightenment denied. This may sound like something borrowed from Carl Jung, but I truly think that adamantly denying other possible universes is… irrational. I’m not myself a believer in God the patriarch, but I do suspect that we live in just one universe of a multiverse, a view many scientists support today in view of what quantum physics is teaching us. We make enormous efforts to convince ourselves of the coherence of our world-view, but perhaps individuals like Blake–and the many others after him who tap directly into their imaginations to create the parallel universes we enjoy in fiction–are simply quite at ease with the idea of this numinous elsewhere. We fear monsters as children and are taught to suppress that fear as adults, but I always say that seeing an angel would be far scarier than seeing a monster, particularly if you’re not a believer. This is why we need to convince ourselves that Blake was a lunatic, though one whose art is wonderful.

Teaching the basics of any artist’s work is, then, reducing a person to trite, manageable slogans. Once a madman, later Pictor Ignotus, then a Victorian favourite and currently both canon and legend, William Blake reminds us that we cannot condense any living person, and much less an artist, into a matter for two lectures, a Wikipedia entry, a documentary, or a biography–no matter how enthusiastic. Yet, this is how we learn and teach: hurriedly, in little pills, and trusting that one day students will have more time to take pleasure in names like Blake rather than just take credits for a course.

I publish a post every Tuesday (follow @SaraMartinUAB). Comments are very welcome! Download the yearly volumes from: My web:


February 18th, 2019

The volume that interests me today is a novel: No Mean City (1935), ‘the classic novel of the Glasgow slum underworld’, as the cover of the Corgi edition announces. Apparently, this novel has its origins in the short stories written by the unemployed Gorbals baker Alexander McArthur. They were polished for publication by the journalist H. Kingsley Long, a choice made by the original publisher, Longmans, Green & Co. The middle-class target readership might explain the unique narrative style of No Mean City, which mixes melodramatic, violent action with pseudo-ethnographic comments on working-class life in Glasgow’s most notorious neighbourhood, the Gorbals, between 1921 and 1930. Despite the abundant Lowland Scots dialogue, this does not feel like a novel primarily addressed to Scottish people, though it might well be that I’m mistaken and that the patronizing tone adopted is intended to inform anyone outside the Gorbals of its degraded social situation, whether they live in Glasgow, in London or elsewhere.

In essence, the plot concerns the efforts of Razor King Johnny Stark to maintain his reputation among the local gangs by getting involved in a variety of brawls, though the novel also narrates the failure of his brother Peter and of his friend Bobby to climb socially upwards beyond the Gorbals. McArthur and Long portray a lifestyle which is absolutely depressing for there is no way out of the violence, the squalor, the economic insecurity, and the general injustice that keeps these characters tied to their sordid background. The vicious circle depicted is easy to understand: poverty results in working lives that begin too early, with no chance of an education; boys and girls marry young and soon have too many children, which results in poverty like their parents’. Finding decent housing is simply impossible because slum landlords charge outrageous amounts for appalling accommodation–if that’s a word to be used in this context–with unhygienic bathrooms shared by dozens. Ill-health is general. Not even youth offers a respite. With no prospects at all, girls try to catch a husband as soon as possible to leave their exploitative, low-paying jobs and boys try to find in gang violence and heavy drinking the enjoyment which work does not offer. All this is well-known but it is still shocking to find it described in so much detail.

Critics and readers since 1935 have complained, precisely, that the detail is lurid and that the plot veers in the last third of the book towards the sensationalist–and I agree. The novel loses interest and quality the moment the marriage of Stark and Lizzie begins disintegrating and the authors show more interest in how other persons take part in their sex life than in why they live in that miserable way. The sub-plot dealing with Peter narrates how his budding political awareness–stimulated by his voracious but haphazard reading–results in his leading, though unwillingly, a more than justified workers’ protest. This ends up costing him his job and, hence, his chances of accessing the lower middle-class. Yet, No Mean City is not at all a political novel, nor a text that seeks to denounce the situation of the characters in any way: it is just a vivid representation of a condition that seems impossible to solve; the authors demand no reaction from middle-class readers except curiosity about the human zoo that the Gorbals appears to be. They offer no pity for any of the characters, which is understandable in Johnny’s case but not so much in Peter’s and Bobby’s. Much as in its main successor, Irvine Welsh’s Trainspotting (1993), the main aim seems to be to épater les bourgeois.

No Mean City came to attention again in 2010, on its 75th anniversary, with some controversy about whether it should be kept alive at all. An interesting article by Dave Graham explains that Johnny Stark’s “fondness for slashing his adversaries’ faces with razors” is still a problem today. As the local police officer Carnochan warns: “If you bring a child up in a war zone, you’ll create a warrior. That’s what we’re doing. I’ve been a cop for 35 years and I can tell you, you can’t arrest your way out of this”. Actually, Glasgow Police and the Town Council authorities have started a quite successful programme to, if not eradicate, at least curb the stabbings that have replaced the slashings (in my time in that city I learned that a ‘Glasgow smile’ is a knife cut that opens both corners of the mouth…). The authorities are doing something quite simple but effective: having the gang members talk to each other. Most boys simply do not know why they are perpetuating a type of patriarchal masculinity that only finds satisfaction in hurting other equally disempowered young men, and women, and talking seems a good way to start deconstructing it.

Two issues caught my attention in particular when reading No Mean City. One is the ambition for ‘reflected glory’ that leads women like Lizzie to encourage men like Johnny onto their violent path, regardless of the dangerous consequences. The women were (and are) mostly the victims of Johnny and his ilk and, indeed, in the novel they are beaten and raped as he pleases. Yet, they are loyal, though it is also true that only for as long as it is convenient. To my surprise, Johnny’s mistresses, even his wife, take other lovers without concealing this from him; Stark is so certain that his reputation will attract other girls that he does not care for any in particular (except briefly for Lizzie). The women may be disempowered in this patriarchal regime, but the authors remind us that they have some domestic power derived from the unstable economy: the men are often unemployed and depend on their women; they feel, however, no qualms about letting the girls pay for drink, entertainment, or household expenses–even for their upkeep. There is a kind of equality combined with inequality, though it is also evident that each couple’s social standing depends on the husband, which is why the wives are constantly judging their husbands (to their faces) on whether they are ‘manly’ enough to get better jobs.

The other issue is ‘the impossibility of imagining something better’. There is an obsession in No Mean City with specifying how much money each character earns at each job they take, accompanied by frequent comments on how being on the dole often pays better than taking the worst jobs. The working classes are presented as tremendous snobs who classify neighbours according to their unkempt looks and clothes with more precision than any middle or upper-class person might use. At the same time, the jobs the characters aspire to are a limited selection–the best-paid men are Lizzie’s lover, a foreman at the bakery where they work, and Peter’s father-in-law, an usher at a ‘kinema’. Bobby manages to earn quite a nice amount of money as a professional dancer, partnering with his girlfriend and later wife Lily, but their private lessons also include sexual services for the richer patrons. Not only are upper middle-class professions, such as medicine or the law, totally absent from the horizon of the Gorbals’ people, but so are the professions by which many working-class individuals have improved their lot: primary school teaching or nursing for women, office work or specialized positions as mechanics for men, among others. Blue-collar life is not even guaranteed in the Gorbals, though it is obvious that those with higher aspirations (mainly Peter) are trying to copy their more affluent neighbours. Of course, those with no chance of upgrading their lives hate the better-off workers.

School is never mentioned in No Mean City, either, and this seems a glaring absence. Social upward mobility was (and is) encouraged in working-class schools by teachers: to begin with, theirs might be the first and only example of employment based primarily on mental work that working-class children ever see. Things have changed very much since the 1920s of this novel, but I can tell first-hand that contact with teachers, particularly those in possession of university degrees, is essential in awakening the imagination of the less privileged children to social mobility. In middle-class families this is very different: children are surrounded by relatives with socially respectable jobs and the family’s income allows them to take a higher education for granted. This does not mean that middle-class children do not face any battles, but it means that they needn’t face some battles. There is an abyss between a child who wishes to be, say, a lawyer in a middle-class family, who perhaps has lawyers in the family, and a child with the same vocation whose parents are constantly in and out of employment (or even permanently out) and who, besides, knows no one with a university degree.

Obviously, primary and secondary school teachers are also often the ones to help children see beyond their family’s horizon of expectations, suggesting specific professional training, further education and even careers. Apart from them, working-class kids with a wish to pull themselves up by their bootstraps started dreaming of a chance to leave the Gorbals–or their local equivalent–thanks to each new 20th-century medium. Movies, the popular press, radio, television, the internet, etc. have made the representation of desirable middle-class lives constant in the cultural panorama of the working classes. Some may bemoan that the glorification of the middle classes has destroyed working-class culture but, as the authors of No Mean City claim, the truth is that, given the choice, workers prefer being middle-class. This is what Marx and Communism, generally, woefully misunderstood.

What I’m saying is that, though this may sound trite, No Mean City has taught me again a lesson I had forgotten: you need imagination to leave a working-class background behind, and this must be awakened somehow. We take it for granted that upward social mobility is there for the taking, but it is not, and though consumerism seems to fulfil the function of stimulating an urge for what the Victorians adored, namely self-improvement, this is still very limited. I’m well aware that in 2019 we are at the end of an attempt to allow the working classes to change their prospects thanks to the welfare state. Even the children who got university degrees are unemployed or find only bad-quality employment, while the children of the upper classes continue enjoying privileges based on their families’ networking as the middle classes are destroyed. I hear no one, however, truly discuss how social mobility works, if at all, and there is total silence about children born to affluent parents who end up being working-class by income, if not by background. There is much talk about how the current generation will have a worse life than their parents, but the issue is not addressed from the point of view of how much actual upward mobility there has been, in Scotland or anywhere else.

If you ask me, I’d say that very little–the upper classes have noticed that too many working-class individuals have dared imagine a better future through education provided by the welfare state. The way to limit those dreams is by a) cutting funding for public education; b) putting as many obstacles as possible in the way of publicly-educated persons; c) forcing families to spend so much on housing that nothing is left for improving the chances of their children. And d) pretending that anyone can become an overnight success on YouTube, Instagram, etc., while allowing 1% of the population to enjoy 99% of all wealth on Earth.

Imagining a better future might not be enough but it’s a beginning.

I publish a post every Tuesday (follow @SaraMartinUAB). Comments are very welcome! Download the yearly volumes from: My web:


February 11th, 2019

I’m in the middle of reading Jon Savage’s Teenager (2007), a study of how youth was socially constructed between 1875 and 1945 in the USA, the UK, and some other European countries. We usually assume that the ‘teenager’ appeared in Western culture in the 1950s but the first thing Savage’s volume teaches is that this word actually started being used in 1944, in the USA, as a sort of harbinger of what youth would be like after an Allied victory in WWII: a time to enjoy yourself, and all the new pleasures of total consumerism, no matter what class you belonged to. I remain, in any case, puzzled and amused by how the English -teen suffix was used to create the age category 13-19 quite artificially. Today there is talk of ‘teens’ and ‘tweens’ (tweenager!) for 10-14 kids, though I’m not sure to which category the 20-29 young adults belong anymore. I don’t hear the word ‘adultescent’ so often these days, perhaps because now everyone seems to be a ‘millennial’ up to 35 (and young up to 40!).

I would say that in Spain we use predominantly ‘pre-adolescente’ (10-14) and ‘adolescente’ up to 21, which agrees more or less with the old legal majority (lowered to 18 in 1978). As it happens, the word ‘adolescence’ is also an American creation: the contribution to our essential vocabulary of psychologist G. Stanley Hall (1844-1924), a staunch admirer of Sigmund Freud. His book of 1904 (vol. 2, 1907), Adolescence: Its Psychology, and its Relation to Physiology, Anthropology, Sociology, Sex, Crime and Religion is the first instance of the use of this key word.

Before the invention of adolescence, Savage explains, childhood simply ended with adult age, around 18, when youth began (the Victorians did use the label ‘young adult’, now used for different purposes in YA fiction). For Hall, childhood ended, rather, at 13-14, with puberty, and adolescence ended between 21 and 25 (presumably when you were ready to marry). One thing that bothers me is that although Latin adolēscēns means ‘growing up, maturing’–hence its use by Hall to define the transitional period from childhood to adulthood–it also meant originally ‘lacking’, which is where the Spanish verb adolecer comes from. The RAE dictionary warns that this verb means “having some sort of defect or suffering from some malady” and not “lacking” but the point I’m making is still valid: an ‘adolescent’ is, whether Hall intended it or not, an individual missing an indefinite something–arguably maturity. I’ve never really liked the word for that reason: it seems awfully patronizing to me. Even ageist, in current parlance.

It must be recalled that childhood is actually a late 18th century invention, fully established in the Romantic period (or Regency period, if you prefer it) and that, of course, the cult of youth is a product of the same era. Before that time, basically the ages between 0 and 17 were seen as a long preparation for adulthood, which could start as early as 10 (or earlier) for working-class children employed full time, apprenticed in some cases already at 7. In the early 19th century adulthood, then, was assumed to begin as soon as an individual entered the marriage market: around 16 for the girls and 20 for the boys. Naturally, the possibility of enjoying childhood and youth would depend on each family’s income–in upper-class families, the girls would also be considered marriageable adults by 16 but the men enjoyed a far more prolonged youth, including a university education, travelling and perhaps professional training (in business, the law, the military, or politics) up to the age of 30.

From my constant repetition of the word ‘marriage’ and similar, you might get the impression that weddings used to be the main rite of passage into adulthood–or a specific age barrier in their absence: if, as a woman, you were not married by 30, and, as a man, you were still single by 40, then you became officially a spinster or a bachelor, that is to say, a celibate adult. But I digress. Actually, the factor that introduced all the changes in the way age is socially constructed is education.

It seems quite clear from Savage’s comments that childhood was invented when the need for a prolonged primary education was understood (first by upper-class families, eventually by the British state in the 1870s). Likewise, the invention of adolescence is a by-product of the American high school system. Obviously, the biological changes leading to puberty have been a constant in the life of Homo Sapiens for many thousands of years but how each culture reads them varies enormously. In American culture, puberty started to overlap with secondary education at the turn of the 19th century into the 20th and, so, Hall could come up with the idea that, in essence, an adolescent is someone being educated beyond primary school, and up to college graduation (even MA level).

Besides education, pleasure took centre stage. The four decades between 1904 and 1944 gradually established a new understanding of youth, based on a sense of entitlement to pleasure (for boys and girls), beginning with the upper classes. Young people were socially powerless, which is why they (mostly the men) had to go through the generational massacre that was WWI; they reacted against this appalling patriarchal abuse by getting rid of their late Victorian and Edwardian shackles. I still marvel that couples courted up to the early 1920s in the presence of a chaperon or that parents could choose dates for their daughters when the concept was invented in America. We are not fully aware of what the 1920s meant in terms of a youth revolution, possibly deeper in many senses than that of the 1960s by comparison with what came before, though, of course, limited to a social elite. The post-1929 Depression decade of the 1930s seems sedate and conventional by comparison. I need not explain what WWII did to the young all over the world, especially the men.

The novelty of the late 1940s to mid 1950s is that the new ‘teenager’ could be found in any social class, whereas it seems to me that the adolescent is, in contrast, a middle- and upper-class figure. To be an adolescent you need a certain educated sensitivity and leisure to ponder in true post-Romantic fashion the unfairness of life and of the adults around you. If you’re young but busy working eight to ten hours a day, you may still possess that sensitivity but far less time to engage in self-centred adolescent thinking. What you do is reinvent the concept of leisure and transform it into the time when you enjoy your hard-earned wages, either in imitation of what richer kids do or generating your own working-class version of fun, quickly catered to by the entertainment industry. Hence, the teenager.

Savage hardly ever takes into account how different youth and adolescence have always been for boys and girls–this is my main complaint against his book. Yet, apart from the constant difficulties in fixing age boundaries for each period of life since the late 19th century, Savage highlights a recurrent problem: society’s inability to control unruly young men, particularly of working-class background, whether they’re called teenagers or adolescents. Many complaints against the gangs of uncontrolled, second-generation, Irish or Italian youths in early 20th century America are a dead ringer for similar fears of non-white gangs in 21st century Britain. This connects with my previous post about Dick Hobbs’ Lush Life: Constructing Organized Crime in the UK, a book in which he presented working-class male youth as a phase of unruliness before the acceptance of adulthood sets in. Or boys will be boys, and the rest of us must put up with them, beginning with girls their own age.

What tends to be forgotten in most studies of youth is that the idea of youthful rebellion is specifically masculine: the late 18th century and early 19th century was a time of intense masculine revolt against patriarchy, in the most traditional sense of the rule of the father. The French Revolution of 1789 and the Napoleonic Civil Code of 1804 resulted in new legislation that, while still binding women closely to their male legal tutors (father, husband or even son), allowed young men much more leeway than in the past. Fathers used to have total authority over sons, including matters of career choice or even marriage. Young men of the Romantic period and later steadily eroded that authority at the cost of eventually having to accept a loss of their own patriarchal authority when they became fathers. This seems on the whole positive but has an underside.

In short: the unruliness of young men is the collective price we are paying for diminishing the total authority of the patriarchal father. Western society has failed to find a better replacement for that authority–or found it, then lost it. Gentlemanliness worked for a while as a desirable way of having young men stick to a positive masculine ideal that did not undermine their personal autonomy; yet, it was lost in WWI, and we don’t know how to appeal to unruly young men on the basis of principles that instil respect for others. Hence, the cycle of recurrent youth violence which Savage (and Hobbs) describes: the adult men who have become fathers after going themselves through a violent youth lack the authority to restrain their unruly sons–in the worst cases, they have never matured, do not participate in their sons’ education, or even celebrate the boys’ misbehaviour. This is why, I insist, we need to see adolescence and the teenager as heavily gendered social constructions, paying specific attention to how and why youth rebellion becomes anti-social criminality.

Youth, then, changed around the beginning of the 20th century to be re-invented by Hall, on the basis of the Romantic cult of youth, as adolescence–a time for personal introspection and the construction of the self in opposition to parents. It became next, Savage explains, beginning in the 1920s and culminating in the 1950s, a time for hedonism and the rise of the teenager. This was followed eventually in the 1960s and 1970s by sexual liberation. It seemed, then, with third-wave feminism demanding total equality, that the 1990s would be the beginning of the best of worlds for youth. Yet, the stories we tell in the 21st century are either the sugary nonsense of John Green and company, or grim tales connected with social network horrors (do see Aneesh Chaganty and Sev Ohanian’s visually amazing film Searching).

Perhaps adolescence and the teenager are no longer useful to understand how the young live and it is urgent to hear what they have to say about themselves. We just can’t wait to read about them in the History books written in the second half of the 21st century.

I publish a post every Tuesday (follow @SaraMartinUAB). Comments are very welcome! Download the yearly volumes from: My web:


February 4th, 2019

I will soon start teaching Mary Shelley’s Frankenstein and although the best time to revisit this classic was last year–the bicentennial anniversary of its original publication–2019 is also a good moment to re-read it, for it is the year in which Ridley Scott set his masterpiece, Blade Runner (1982). Both novel and film are closely connected, since Blade Runner, though based on Philip K. Dick’s bizarre SF novel Do Androids Dream of Electric Sheep? (1968), is one of the myriad texts descended from Frankenstein. Mary Shelley was the first to ask, in earnest, ‘what if science could generate powerful monsters that could escape human control?’ and this is a question that frames Dick’s and Scott’s work. And our year 2019.

I have recently reviewed an article by a young researcher in which I found some confusion regarding the use of the concepts ‘post-human’ and ‘cyborg’, and I’ll use Frankenstein to clarify them, and then to proceed with some comments. Before I forget: I’m using the Oxford World’s Classic edition (the 2008 reprint) with my students but I was aghast to see that the prologue and the bibliography are the work of one Prof. M.K. Joseph who died in 1981. I immediately e-mailed the Literature editor at Oxford UP to suggest that they commission a new introduction by someone who truly understands how Mary Shelley’s mistresspiece connects with current, urgent issues, and, generally, with our science-fictional present. We’ll see if they answer.

Brian Aldiss famously celebrated in Billion Year Spree (1973) Mary Shelley as the mother of science fiction, stressing in passing that the Gothic narrative mode is one of the foundations of sf, at least of its more technophobic branch. Re-reading the novel now, at the beginning of 2019, and possibly for the fifth or sixth time (I lose track), a few things strike me as singular. One is that Mary’s tale is a frontal attack on male ambition but not necessarily a feminist text; the other is that she understood, long before we had a name for it, what the post-human is.

The feminist question is obvious enough: Victor’s horrific ordeal is framed by the letters that explorer Robert Walton sends to his sister Margaret so that we see how useless men’s pursuit of glory, honour and fame is. The alternative lifestyle which Mary recommends is, nevertheless, one of sedate domesticity, in which women occupy a traditional position as dutiful, pre-Victorian angels in the house.

Margaret, the addressee of the letters by Captain Walton that frame Victor’s and the monster’s testimonials, stands for married bliss in safety and domesticity. So does Elizabeth Lavenza, Victor’s adoptive sister, and doomed wife as the monster’s victim; as such, she is the embodiment of the dangers that men bring into the peace of the hearth but also of total submission. Mary, the daughter of Mary Wollstonecraft, the woman who wrote A Vindication of the Rights of Woman (1792), a book in which she gave education a central position among women’s rights, never mentions Elizabeth’s right to attend university, as Victor and his friend Henry do. She is raised to be Victor’s wife and no event in the awful tragedy that unfolds diverts her from this path, even though she could have been much better company for Victor if only she had some inkling of his overambitious scientific pursuits. Mary Shelley simply offers no critique of the patriarchal script written for Elizabeth by her adoptive parents and by Victor himself, even though the author is adamant that there is something very wrong in men’s extra-domestic pursuit of glory and, using Barbara Ehrenreich’s phrase, their ‘flight from commitment’.

I partly agree with Mary’s critique of the male sacrifice of domesticity–possibly what she endured as Percy Shelley’s wife–because it is often based on total selfishness. At the same time, I fail to see in which ways the world would be a better place if the many self-driven individuals (mostly men but also many women) had limited themselves to raising families. There must be a middle ground.

Reading David Grann’s excellent non-fiction account of British explorer Percy Fawcett’s suicidal search for the lost City of Z (the title of the book), I often thought that male wanderlust must be evidence of ingrained insanity. Yet, so many women also feel the drive to fulfil their ambitions even against all reason that it cannot simply be a matter of gender but something else that makes domesticity secondary. Why someone with small, dependent children would volunteer to travel to Mars, and possibly never return, baffles me, not so much because of the need to fulfil the dream but because of the aspiration to combine ambition and family. This is not, of course, Walton’s and Frankenstein’s situation, and perhaps what Mary Shelley was saying is that excessive ambition is incompatible with family life, and even with life. But, is this right? If she was imagining some low-key, pastoral idyll as an alternative, she does not say. At the same time, most often the likes of Victor manage to create man-made horrors while keeping jobs and family well balanced, a possibility Mary does not contemplate, believing as she does that scientific discovery is a kind of youthful brain fever that overtakes everything else in the single individual’s life. Again: there must be a middle ground.

How about the cyborg and the post-human? The monster that Victor creates is NOT a cyborg, for a cyborg is a creature, or person, whose body combines organic and inorganic materials. Donna Haraway had read sufficient science fiction when she wrote her famous 1985 tract ‘A Manifesto for Cyborgs’ to understand this, but it seems to me that very often students and scholars who use the word cyborg do not really know what they’re talking about, and simply assume that the word refers to any artificial creation.

Victor’s monster is artificial because he is not woman-born but he is 100% organic. Frankenstein discovers first the principle of life, ‘the capacity of bestowing animation’, and decides next to build a superhuman body–if that body is functional, then he will apply himself to re-animating ordinary human corpses. Since preparing ‘a frame’ is difficult because of ‘its intricacies of fibres, muscles, and veins’ he decides to work at a larger scale: ‘As the minuteness of the parts formed a great hindrance to my speed, I resolved, contrary to my first intention, to make the being of a gigantic stature, that is to say, about eight feet [2.44 m] in height, and proportionably large’. Mary wrote before DNA was known, and before the first transplant of a human organ was ever attempted, and we need to read this part of Victor’s research as a necessarily preposterous tale; yet, the main point is that he is not using magic but science.

Once the creature is made–and in its manufacture 20-year-old Victor is amazingly successful–Frankenstein is appalled to see that he is an ugly thing: ‘His limbs were in proportion, and I had selected his features as beautiful. Beautiful! Great God! His yellow skin scarcely covered the work of muscles and arteries beneath; his hair was of a lustrous black, and flowing; his teeth of a pearly whiteness; but these luxuriances only formed a more horrid contrast with his watery eyes, that seemed almost of the same colour as the dun-white sockets in which they were set, his shrivelled complexion and straight black lips’. Nobody has really managed to give an accurate pictorial representation of the monster, who does not look at all like the bolts-and-nuts version of Boris Karloff. Yet, I always say that Victor’s problem is that while he is a great anatomist and a wonderful surgeon, he is a disaster as an artist. A failure, if you wish, as a plastic surgeon. Had he been able to combine the selected features harmoniously, we would have a very different tale of celebrity, as everyone admires a beautiful being. As for his being a giant, well, being 7 feet tall is the foundation of Pau Gasol’s celebrity… The monster would be a highly valuable basketball player today!

Something that I missed in previous readings is how often the monster refers to ordinary human beings as another species, and to himself as one, too. I am always correcting my students when they refer to the human race, for we are a species (Homo Sapiens) and not a race, and I was surprised to see that the monster is well aware of this crucial difference. The name Homo Sapiens was coined by Carl Linnaeus in 1758 but this was long before any thought of evolution was contemplated by Charles Darwin (1809-1882); many have commented on Mary’s allusion to Darwin’s grandfather, Erasmus (1731-1802), as the scientist whose discoveries in connection with electricity may have inspired Frankenstein’s use of an engine to ignite the spark of life. Yet, to me, the monster’s awareness of species difference is far more exciting.

When he demands an Eve from his maker, the creature argues: ‘I am alone and miserable; man will not associate with me; but one as deformed and horrible as myself would not deny herself to me. My companion must be of the same species and have the same defects. This being you must create’ (my italics). Of course, I’m cheating a little bit, for Mary mixes ‘species’ and ‘race’ indiscriminately and, thus, Victor decides to destroy the female creature he is working on, afraid that ‘a race of devils would be propagated upon the earth who might make the very existence of the species of man a condition precarious and full of terror’. He is horrified to see himself as the ‘pest, whose selfishness had not hesitated to buy its own peace at the price, perhaps, of the existence of the whole human race’. My point, though, is equally valid: Frankenstein is the earliest text to posit the possible replacement of Homo Sapiens with a man-made superior human species, that is to say, with a post-human species.

The difference between the cyborg and the post-human is, then, easy enough to understand: the cyborg has inorganic material in their body and cannot pass on any modification of this kind to their offspring; in contrast, the post-human is a different human species that will breed other individuals of the same species, and might wipe out Homo Sapiens if competing for the same environmental resources. As the Neanderthal disappeared, so might we, with the difference that this might happen out of our own mad shattering of the frontiers of science, if we go just one step too far and modify the human genome. Of course, neither Mary nor Victor knew about all this, but their ignorance is irrelevant (also an anachronism): the monster is a monster because we are terrified of the possibility that other humans might push us out. Victor, it must be recalled, manufactures not just someone who is big but also someone who is strong, extremely resistant to heat and cold, with an enhanced muscular capacity and, in short, far better equipped than Homo Sapiens to live on a radically post-human Earth.

The other novel I am teaching this semester is Jane Austen’s Pride and Prejudice (1813), published five years before Frankenstein. Indeed, Austen died in 1817, while Mary Shelley, then a young mother to her boy William, was busy writing her novel. I never cease to be amazed that English Literature could accommodate in the same period styles in fabulation so thoroughly different. And I wonder what would have happened if Elizabeth Bennet, rather than Elizabeth Lavenza, had fallen in love with Victor Frankenstein instead of Fitzwilliam Darcy. Or if Darcy had kept a secret lab at Pemberley. Possibly, some kind of literary short-circuit!

How lucky we are, then, that we can enjoy both Mary Shelley and Jane Austen.

I publish a post every Tuesday (follow @SaraMartinUAB). Comments are very welcome! Download the yearly volumes from: My web: