March 18th, 2019

In a hilarious moment of the two-part documentary The Scandalous Adventures of Lord Byron (2009), presenter Rupert Everett discusses with Donatella Versace–as they wait for her butler to announce dinner at her own luxurious Milan home–whether Byron (1788-1824) was really as handsome as so many contemporaneous testimonials claim. At this point, Everett has already seen diverse portraits and has even donned the same Albanian dress that Byron wears in the famous painting by Thomas Phillips, now at the National Portrait Gallery. Seeing handsome Everett look rather ridiculous in it, the spectator might conclude that Byron was indeed a man of good looks and even better poise. He was also a man who controlled each portrait made of him as we control our image in our Instagram accounts. He wanted specifically to look manly, a man of action and not a poet, as Everett notes, and also to disguise a limp caused in childhood by polio.

Everett and Versace note that notions of beauty were very different in the early 19th century, suggesting that Byron’s physical appearance would not seem so extraordinary today. I find this quite tantalizing! Everett quips that, on the other hand, Byron must have looked stunning at a time when keeping all your teeth while still young was not common. At a later point in this second episode the tone changes and becomes a bit less flippant. Rather subtly, Everett’s comments start defending the view that by the time Byron died, aged only 36, he was past his prime. The infection that killed him was an accident of life, perhaps a preventable one, but the documentary hints that Byron’s choice of malaria-infested Missolonghi as his home in Greece was somehow suicidal. It is implied, in short, that had Byron lived on, his life would have been a sad, gradual fall into physical decadence. This is, at the same time, part of the Byron myth: live fast, die soon, and conquer eternal fame. I’m not sure about leaving a beautiful body to bury.

In life, Byron enjoyed fame but he was mostly beset by celebrity and notoriety–and, of course, scandal. It is fitting that Everett, an openly gay man with a pansexual past, presents Byron’s biography, for George Gordon (his actual name) was a product of the sexual prejudice of his time or, rather, of its hypocrisy. Just as it seems impossible to discuss Coleridge without mentioning his drug addiction, it seems impossible to discuss Byron without alluding to his sexual adventurousness. Likewise, whereas no biographical sketch of Wordsworth is complete without his sister Dorothy, no portrait of Byron can be offered without associating him with his half-sister Augusta Leigh (his father’s daughter by a first wife).

Byron might scream to high heaven that they did not commit incest and that Augusta’s third child Medora was her husband’s and not his, but we would still doubt his word, for that is what celebrity and scandal are about: constructing people as we want them to be, not as they are. With lights and shadows: incest may be too much even for us, but the pansexual man Everett describes is more to our taste. Funnily, as we dismantle the sexual prejudices of Byron’s time (serious enough to land you in jail for sodomy), we have started criticizing the man for not being handsome enough, and even for being rather overweight at times in his life. Duncan Wu, in particular, offers an image of an effeminate, flabby, shortish, stout Byron totally at odds with the connotations that the word ‘handsome’ awakens in our minds.

Byron was an aristocrat and, though not an extremely rich man (he lived mostly on borrowed money, like most of his class), he led a life of ease and luxury that seems to belong in the 18th century rather than the early 19th. He may be celebrated as a great national hero in Albania and Greece, but his mildly Whig politics in defence of nationalism (and even, at one point, of the anti-Industrial Revolution Luddites) are not based on very strong beliefs. It seems, rather, that in a world in which nobody cared for anyone beyond the national borders, Byron’s curiosity and personal presence in remote lands was in itself welcomed as a heroic act. His contribution to the independence of Greece was, at best, very marginal, and he seems to have been seen during his time at Missolonghi in the early 1820s as just a rich English lord who could be easily milked for his money, if you’ll excuse the expression. He did not die a hero’s death in battle, as one might expect from all the exaltation, but simply wrote verse that vaguely endorsed the right of Greece to be a free nation again, on the strength of what it used to be in the classical past. He died, as I have noted, of a fever variously attributed to an imprudent ride in the rain or a bug caught from his pet dog.

If abroad he was a hero, at home Byron was a celebrity of the kind that the Daily Mirror enjoys praising and demolishing in equal parts today. And this is what happened to him: he found himself suddenly famous, as he wrote, after the immense success of the first two cantos of Childe Harold (1812), only to be completely ostracized just four years later. In 1816 Byron had to leave England for ever following the scandal of his separation from his wife Annabella because of the rumours about incest with Augusta. Byron was probably one of the worst husbands on record and the separation makes complete sense: his wife, whom he had married for her money as his father had married his two wives, just could not endure the constant humiliation of Byron’s active extramarital life. What is hypocritical is the scandal. Byron often claimed that he had never seduced any woman because he didn’t have to: basically, the Regency women who chased him were the first groupies in literary history, and no wonder, since Byron has often been compared to a rock star. One of the harassers, Lady Caroline Lamb, defined Byron as ‘mad, bad, and dangerous to know’, but probably this is who she, not he, was.

The good looks, the hectic search for sexual pleasure, the journeys to distant lands, the scandalous married life, the more than likely homosexuality and the incest with Augusta… all these are sufficient not for one but for several celebrities. What makes Byron a radically different celebrity from those plaguing our time is that his fame was based on his poetry, for which he worked much harder than he pretended. The sales of his work from Childe Harold onward were in the first years high enough to push best-selling poet Walter Scott out of the market, to the point that Scott became a novelist (though he published his early novels anonymously, as if ashamed that they were a second-rank, mercenary product). Byron was particularly well-known for his narrative verse and he continued enjoying that success even after he had been socially ostracized, from his exile in Switzerland, Italy and finally Greece. To understand how relatively lucky he was, we need to think of the far more tragic fate of Oscar Wilde, a man as flamboyant and sexually curious as Byron but who could not escape, as Byron did, the harsh action of British homophobic legislation. Wilde’s exile in the late 1890s was a much sadder story indeed but, then, he was no aristocrat.

Byron’s main cultural legacy, beyond his poetry and even beyond Literature, is the Byronic hero, a construction that was appended to his own person by his readers whether he wanted it or not. We cannot know what Byron was really like but, like his looks, his personality also elicits doubts. He insisted for years that he was not Harold, the character that first expresses the Byronic temper which other male characters inherited–restless, moody, pessimistic, curious about people yet a loner, interested in pleasure but little capable of sustained love. Yet Byron eventually gave in and granted that in many ways the Childe’s pilgrimage was his own, and Harold a thin mask for himself. Indeed, Byron is all over his poetry, also as Manfred and Don Juan and most of his main male characters, but this is not at all singular. Look at how Wordsworth mined his own youth for The Prelude. I see the appeal of the Romantic construct and why the Byronic hero soon surfaced in many other narratives (mainly novels and plays), giving us Heathcliff, but also Dracula, and even Christian Grey. What puzzles me is what kind of audience Byron had and how they could follow him at all.

I have just finished reading Childe Harold, all four cantos, and I’m not sure how to describe the experience. Last week I told my students that Romantic poetry was published in its time with no footnotes and that the original readers did not expect a critic to decode the meaning, or any obscure passages, for them. We had read the passages in Lyrical Ballads in which Wordsworth introduces some of his poems, but these were aimed at describing the circumstances that inspired each poem, not at explaining the poem itself. Likewise with Coleridge and “The Rime of the Ancient Mariner”. We listened in class to Ian McKellen’s beautiful reading of this long narrative poem (about 30 minutes) and, though I stopped now and then to make sure students could follow the plot, in general the text was well understood. I’m not in favour of the kind of teaching that turns reading poetry into a forensic exercise, of which you can find plenty on YouTube (a lot from India, for whatever reason!), and I’d much rather my students enjoyed the poems they should know about. With Byron, however, I simply don’t know what to do. The booklet we are using includes all of Manfred and the first canto of Don Juan, not Childe Harold, but even so the point is the same: Byron’s poetry is just too obscure for us today, here in my second-language, second-year classroom.

I did try to read Childe Harold without checking Byron’s own lengthy notes (mostly on points of History, always showing an amazing erudition) or the notes of his editor, which even included notes to Byron’s notes!! It was just impossible: it was like reading through glasses that would suddenly cloud and blind me, but also suddenly disappear altogether, a veritable rollercoaster. Thankfully, Rupert Everett’s documentary follows the journeys by Byron reflected in this long poem and I could more or less make sense of where Harold was at given points, but without that aid (and the notes) I would have been quite lost. To my surprise, even though I expected a very intimate portrait of the Byronic hero to connect the diverse observations of the pilgrim, I found the stanzas oddly detached except in the few passages (mainly in canto four) where Harold bemoans his fame and wonders what it will be like once he dies. I positively missed Wordsworth, whom Byron very much disliked, in the stanzas about the landscapes and even the cities. And I had a really tough time understanding allusions to personalities of the 1810s even with the editor’s excellent notes. There was also the problem of when to read the notes, for they constantly interrupted the flow of the lines. I eventually settled on reading them after each stanza. When I came across six stanzas without notes, it felt like being on a sailing ship running before a full gale.

Reading the negative comments on Walter Scott’s first novel, Waverley (1814), I came across a disgruntled reader who, hating this pompous piece of fiction as much as I do, proposes that we ‘decanonize’ Scott. I think that we are already in the process of decanonizing Scott, who has not been included in our second-year 19th century courses here at UAB since at least 1994. Preparing the lectures on Byron I realised that I wasn’t even sure when to tell my students about Scott: now, commenting on his poetry together with Byron’s, or later when we teach Jane Austen. It is very clear to me that an English graduate must know who Scott was but I would not include one of his novels in the syllabus, for that would probably alienate rather than interest students. What I fear is that we have reached the same point with Byron: students must know who he was and what he did, but can they read his poems at all? Perhaps the lyrical pieces like ‘She Walks in Beauty’ but this hardly gives a glimpse of the giant he was.

Arguably Byron (and Scott) are a case not so much of decanonization but of increasingly difficult readability. It’s not the same. Robert Southey may be canonical but we just do not include him in our syllabi, either his poetry or his person, whereas, I insist, knowing about Byron and Scott is essential. This is a typical conundrum for all teachers: how should we teach? On the basis of literary archaeology or on the basis of accessibility? It used to be the former in the ancient times when philology reigned but the more pragmatic current approach tells me that Byron is approaching if not total at least partial decanonization.

I’m not sure that I’m sorry… but that must be my class (and gender) prejudice against privileged male aristocrats, no matter how handsome.

I publish a post every Tuesday (follow @SaraMartinUAB). Comments are very welcome! Download the yearly volumes from: http://ddd.uab.cat/record/116328. My web: http://gent.uab.cat/saramartinalegre/


March 11th, 2019

It has become commonplace to see Samuel Taylor Coleridge (1772-1834) through the lens of his drug addiction, which is why it is perhaps quite wrong to begin this post in this way. His case, however, must be contextualized and his addiction treated as an ailment similar to that currently killing 130 Americans every day and plaguing hundreds of thousands more (see https://www.drugabuse.gov/drugs-abuse/opioids/opioid-overdose-crisis). With an important difference: whereas in Coleridge’s time addiction to opium, and mainly to its derivative laudanum, was poorly understood, in the 21st century our experience of drug abuse is already very extensive. This did not prevent greedy pharmaceutical companies in the 1990s from flooding the market with potent analgesics said to have no side effects while they fooled the corresponding Government agencies.

Coleridge, like most current victims of the American opioid overdose crisis, suffered from chronic pain (connected with rheumatism) and simply needed relief. He most emphatically did not take drugs for recreation and, if he had any visions attached to their use, this was not the outcome of any experiment–it was a side effect. Trying to make his body more comfortable, Coleridge fell into a downward spiral of drug abuse that even his closest friends misread as vice. Wordsworth broke his long friendship with Coleridge for that reason (though they later reconciled), and if we have such a vast textual production from him this is only because one Dr. Gillman took pity on his unfairly maligned patient. This man and his family provided Coleridge with a home at their Highgate residence in London between 1816 and 1834, helping their illustrious guest to control his addiction as far as possible and allowing his mind to shine free from that burden (at least temporarily) to write, among other works, his Biographia Literaria.

A constant in Coleridge’s life is an insatiable craving for knowledge. His father was an Anglican reverend but also the headmaster of the local King’s School at Ottery St Mary, Samuel’s birthplace in Devon. From Coleridge’s remembrance of his early childhood as a constant stream of reading, we may deduce that the father encouraged this activity. Reverend Coleridge died when Samuel was 8 and the boy, the youngest of ten siblings from two marriages, was sent to boarding school at Christ’s Hospital (in London), an experience he did not relish in general. With one important exception, recalled in Biographia Literaria: in that school he “enjoyed the inestimable advantage of a very sensible, though at the same time, a very severe master, the Reverend James Bowyer”.

This man not only gave his young students a formidable education in the classics–combining them with Milton and Shakespeare–but was also an exacting editor of his pupils’ written work, teaching them to aim at precision. As Coleridge recalls, “he showed no mercy to phrase, metaphor, or image, unsupported by a sound sense, or where the same sense might have been conveyed with equal force and dignity in plainer words”. Bowyer did not take half measures: if two faults were found, “the exercise was torn up, and another on the same subject to be produced, in addition to the tasks of the day”. Coleridge still had nightmares in adulthood about this man’s severity, but he acknowledged his “moral and intellectual obligations” to him. He and his classmates, Coleridge adds, reached University as “excellent Latin and Greek scholars, and tolerable Hebraists”, though this was “the least of the good gifts, which we derived from his zealous and conscientious tutorage”. Reverend Bowyer, though not the kind of teacher we celebrate today, gave his brilliant student Samuel the foundations he needed for his extremely rich intellectual life.

Not all went well at Cambridge for Coleridge, for he never got a degree. Besides, he wasted one year of his youth in the King’s Light Dragoons, a regiment where he secretly enlisted as ‘Silas Tomkyn Comberbache’. He was discharged by reason of insanity (as the regiment papers attest), though other sources note that he was just the most inept soldier ever. Others claim that his brothers rescued Samuel from a personal crisis possibly provoked by an amorous disappointment when one Mary Evans rejected him.

Biographer Richard Holmes explains that Coleridge had many talents but he was above all a fascinating talker. Also, a rambling one, which means that his listeners were often amazed but also confused by the fast flow of his ideas. Coleridge was unable to write them down as they left his mouth and, besides, his manuscripts are known to contain many borrowed ideas he did not acknowledge or, in plain words, many plagiarisms. In any case, whereas Wordsworth’s main talent was as a poet, Coleridge was a much vaster intellect.

To my surprise, he was for a while an itinerant Unitarian preacher and seems to have regarded himself mainly as a theologian, though this is by no means how we think of him today. He was a philosopher deeply influenced by German idealism (which he imported into Britain), a psychologist avant la lettre specialised in the works of the Imagination (or creativity) and of literary creation, and a great literary critic (who, among other achievements, rescued Hamlet from the trash-can of literary history). Wordsworth gave us in The Prelude a whole treatise on the making of the poet, and Coleridge gave in his prose work Biographia Literaria an even more extensive exploration of the same topic. Some of his passing remarks have become key concepts in current culture: the notion that when we read Literature we ‘willingly suspend our disbelief’ comes from a remark in Biographia about Wordsworth’s ‘Preface’ to the Lyrical Ballads.

The question of Coleridge’s source of income must also be considered for, as I have been arguing here, although Romanticism creates a literary market that enables authors like Walter Scott or Lord Byron to invent the very idea of the best-seller, it also depends on leisure afforded thanks to rents or, in this case, patronage. Coleridge abandoned his duties as a Unitarian minister (in 1798, when he published Lyrical Ballads, aged 26) because his friend Thomas Wedgwood provided him with an annuity. Wedgwood, credited today with possibly being the first British photographer (see http://scihi.org/thomas-wedgwood-first-photographer/), was the son of Josiah Wedgwood, founder of the world-famous pottery firm that carries his name. Josiah was a most gifted businessman but also a patron of causes such as abolitionism, and his son, also named Josiah (Tom’s brother), continued the family tradition of offering patronage to some artists. Apparently, the annuity was withdrawn in 1812, following the outing of Coleridge as a drug addict (this is attributed to Thomas de Quincey’s Confessions of an English Opium Eater, but that book came out in 1821). There is an article (available from JSTOR) about the Wedgwood annuity but this is more detail than I can supply here. I simply don’t know, then, how Coleridge survived after 1812, but my guess is that the Wedgwoods (Tom himself had died in 1805) and other friends still helped. I don’t know either what the arrangement was with the ultra-friendly Dr. Gillman. Interestingly, patronage used to be regarded as a potentially humiliating relationship of dependence–hence the word ‘patronizing’–but is now back with crowdfunding and platforms like Patreon. Today, I’m speculating, Coleridge could have made a living in this way, though he could also have been offered an academic position as resident poet, or creative writing teacher. Remember he had no degree and could never have become an Oxbridge don.

Coleridge’s private life was not very happy–or, rather, it was rich in friendship but not so rich in women’s love. He married in 1795, aged 22, a girl called Sara Fricker simply because his good friend Robert Southey (the poet) had married her sister Edith. Both couples intended to found a utopian project in Pennsylvania called Pantisocracy, but the mad scheme simply collapsed. Sara and Samuel had four children and separated in 1808, when he was 36. She lived with her sister’s family and later with her son Derwent (check https://wordsworth.org.uk/blog/2017/07/29/romantic-but-hardly-romantic-sarah-frickers-life-as-coleridges-wife/). They never divorced.

It is odd to think of Sara struggling to make ends meet while her husband enjoyed the beautiful English landscape or stayed away for a year in Germany, all with the Wordsworths. Their baby Berkeley died while the father was away, and he did not return home for the funeral. The eldest, Hartley, was a constant problem for his parents. I should have thought that Dorothy Wordsworth was Samuel’s secret love, and the most evident way to bond with William beyond friendship but, apparently, Samuel fell in love instead with William’s sister-in-law, Sara Hutchinson (his wife Mary’s sister). Actually, this happened in 1799, before William married Mary, and the unrequited love story continued for many years. Sara also lived with the couple (and with Dorothy) until her death in 1835, so there was much occasion to meet. She was a good friend to Samuel but, for whatever reason, she never returned his love (see https://wordsworth.org.uk/blog/2017/11/01/sara-hutchinson-coleridges-asra/). She never married. Samuel died, in 1834, having engaged in no other significant relationship with a woman.

Samuel Coleridge did not have a very high opinion of himself. He refers in Biographia Literaria to his “constitutional indolence, aggravated into languor by ill-health; the accumulating embarrassments of procrastination; the mental cowardice, which is the inseparable companion of procrastination, and which makes us anxious to think and converse on any thing rather than on what concerns ourselves”. His bouts of depression and the constant effect of the drugs (and of the many attempts at withdrawal) certainly could not have helped him develop steady work habits, but he was a far more industrious individual than he gives himself credit for. Under the Wedgwoods’ patronage he spent that frantic year in Germany, furnishing his head “with the wisdom of others. I made the best use of my time and means; and there is therefore no period of my life on which I can look back with such unmingled satisfaction”. He attended lectures at diverse universities on an astonishing variety of subjects as he improved his German. And he never stopped learning, which is why Coleridge had opinions on all subjects. He comes across, in short, as a man in intense conversation with himself, of which the rest of his contemporaries were witnesses rather than participants (except Wordsworth, for a time). We possibly have in his writings only a mere fragment of what his mind could do.

I haven’t yet mentioned any of Coleridge’s poetry. I’m still processing Iron Maiden’s fifteen-minute song based on ‘The Rime of the Ancient Mariner’, and the heavy-metal crowds singing the lines at a concert (check the video on YouTube). Amazing, really. Also, the wonder of listening to Benedict Cumberbatch read “Kubla Khan”. That’s the beauty of today’s digital world: it offers much more than kitten videos and ranting if you only care to seek it.

Coleridge would have loved the internet since he was, in a way, his whole life a student–an academic outside academia, so to speak, and not only a poet. He led a precarious life on the financial front and his body kept his mind chained to drug abuse for long years. Even so, he managed to produce extremely relevant literary and intellectual work out of insatiable curiosity. This is why it is so painful to read the many comments that accompany the videos on the Romantics on YouTube.

Not the Iron Maiden video, which everyone watches for pleasure, but videos such as Peter Ackroyd’s BBC mini-series ‘The Romantics’, which many students watch as compulsory homework. A man, as disappointed as I am by the rejection of education, bemoans the ‘lack of intelligence’ of the students who complain that Ackroyd’s series is boring. An irritated college student replies that not enjoying something does not mean that you’re not intelligent. I agree: it means you’re not curious–and this is the most common curse today. The albatross around the necks of most students. Coleridge, as his year in Germany shows, was immensely curious. Luckily for him, he had patrons that allowed him to take his curiosity as far as he could and, so, he connected ideas in new ways that have shaped our own world. I wonder what he would make of those who, given the chance to learn by their parents and all of society, reject it–though I think I know.

Romanticism was, let’s recall this, in rebellion against many traditional ideas but, as Coleridge’s case shows, it was a very well-read rebellion, passionate both in feelings and in thoughts. This is something to remember: education empowers individuals and, ultimately, changes the world. Boredom should play no part in this equation. I very much doubt that Coleridge was ever bored. Or boring.



March 4th, 2019

I shared with my ‘English Romantic Literature’ class the video showing Jon Cheryl perform his musical version of William Blake’s ‘The Tyger’ (https://www.youtube.com/watch?v=cFexFkJwrAo) and also Michael Griffin’s song ‘London’ (https://www.youtube.com/watch?v=bAkEyFbGjTc), based on Blake’s eponymous poem. We agreed that both songs are cool and that, by definition, an author whose work can be enjoyed in this updated way is cool. Blake is, no doubt, cool, just as Shakespeare and the Brontë sisters are cool. Other authors are uncool, and I believe that William Wordsworth belongs to that class.

Julien Temple, who was once a cool Brit director (he shot many music videos for stars like David Bowie), made in 2000 a film called Pandaemonium about Wordsworth and Coleridge’s friendship during the time of the French Revolution. Wordsworth was played by John Hannah (how uncool is that?) and Coleridge by Linus Roache (cooler!). The scriptwriter was Frank Cottrell Boyce, who later wrote the, definitely, very cool account of Manchester in New Order’s heyday, 24 Hour Party People (2002). I haven’t seen Temple’s Pandaemonium, but an instance of how hard it is to make its subject matter cool is that, apparently, the end credits roll to the sound (or noise) of Olivia Newton-John’s song “Xanadu” (1980), which vaguely alludes to Coleridge’s “Kubla Khan”. Viewers’ reviews on IMDb are mostly positive (despite the middling 6.6 average rating) and the film might be worth two hours of your life. Yet, one of the most enthusiastic commendations reads: “A splendid effort which will likely be most appreciated by those into classical literature–particularly 19th century poetry”. This is like recommending, just to name a random first-rate movie, The Right Stuff (1983) mainly to people who are interested in the history of NASA. A movie either works or it doesn’t, and if it appeals only to a highly specialised academic audience it doesn’t. A more candid viewer writes: “With its utter disregard for the historic record, Pandaemonium attempts to do for England’s greatest Romantic poets what Monty Python and the Holy Grail did for the Arthurian legends–but (sadly) without the wit or the humour”.

In Pandaemonium, in any case, and also in their friendship, coolness fell on the side of Coleridge, with Wordsworth playing second fiddle; he always seems to have been the kind of guy you know is not really into it even when you’re having the greatest fun together. The wonder is not that their friendship started, for opposites attract, but that it lasted for so long and that it was even resumed after a serious falling out. I very much suspect that without cool Coleridge–and most likely without Dorothy Wordsworth, the adoring sister–Wordsworth would not be Wordsworth as we know him today. He would be, perhaps, Robert Southey (who?).

Much of Wordsworth’s uncoolness has to do with his living to old age and in good health. I am aware that this sounds callous and that the Rolling Stones are living proof that one can be a youthful rebel well beyond youth: Mick Jagger and Keith Richards are both 75. If Byron and Shelley had lived to old age instead of dying in absurd, preventable circumstances at, respectively, 36 (an infection possibly caught from his dog) and 29 (drowned after sailing in bad weather), they would probably have behaved like Jagger and Richards. The problem with Wordsworth is that he only had that rock-star profile by association with Coleridge and, once he married his childhood sweetheart Mary Hutchinson in 1802, aged 32, he became the anti-Romantic myth: a steady family man. Even his fathering an illegitimate daughter ten years before, during his stay in post-Revolution France, announced that this is who Wordsworth was at heart. He was rash enough to embark on a passionate affair with a Frenchwoman called Annette Vallon, the pretty daughter of a barber-surgeon, but also prudent enough not to marry her when she got pregnant. He was, it seems, a responsible but detached father to the girl, Caroline, but she was kept apart from her English siblings.

Keeping a family of five children, a wife and a sister (Dorothy never married) on the money made by selling poems is not easy. To be precise, Wordsworth never really lived on his modest earnings as a poet. To be even more precise, Wordsworth mainly lived off rents generated by family legacies. His father, a lawyer, was the legal representative of an aristocrat, and it was the money this man paid to settle a long-standing debt that generated the rents allowing Wordsworth to marry. Wordsworth, incidentally, had a BA from Cambridge and his family, especially the uncles who paid for his education after he was orphaned at age 13, expected him to become a parson. He, however, would take no profession. Only in 1813, at the tender age of 43, did Wordsworth accept an appointment as Distributor of Stamps for Westmorland, rewarded with a stipend of £400 per year, which finally ensured the financial stability of his family. They moved then to a beautiful house, Rydal Mount, near Ambleside in the Lake District, where the Wordsworths lived between 1813 and 1850 (it’s now open to visitors). However, the celebrity Wordsworth who received an endless stream of visitors there was not the same man who had written the poetry he was known for but someone else, his mature counterpart.

By the time Wordsworth published Ecclesiastical Sonnets (1822) the transformation was complete. His daughter Catherine and his son Thomas had both died in 1812–she in June, he in December–and this must have been a terrible blow, no matter how often we tell ourselves that in past times parents assumed that some of their children would die in childhood. In fact, Wordsworth took the position as a civil servant to make sure that his remaining three children could enjoy the best of lives. Yet something went amiss in his poetical career at the time, as most critics agree, because of his job. It took me a while to understand what exactly Wordsworth did. Anne Frey explains in British State Romanticism: Authorship, Agency, and Bureaucratic Nationalism (Stanford University Press, 2010, p. 55) that Wordsworth did have an office in town and performed numerous professional duties, though not those of a full-time job. “While certainly compatible with Wordsworth’s idea of himself as a professional poet, however”, Frey writes, “the job necessarily took some time away from Wordsworth’s vocation”. Frey’s sly wording suggests that Wordsworth was not really a professional poet, but she struggles not to reveal a basic fact: his poetry emerged from youthful leisure (no matter how hard he worked at his verse) and was far less compatible with an adult working life. In contrast, Blake managed to produce his poems after his daily work routine as an engraver was over, which does sound professional.

I came across a very illuminating article by Andrew Klavan, “Romanticon” (originally published in 2009 in City Journal and available here: https://www.city-journal.org/html/romanticon-13214.html), subtitled “Wordsworth’s Corpus Reflects the Growth of a Conservative’s Mind”. Klavan grants that “Wordsworth’s conservatism hardened as he grew into middle age, sometimes becoming small-minded”. In 1829 (he was then 59) he protested against the Catholic Relief Act, which allowed Daniel O’Connell to become the first Irish Catholic to serve as an MP. Wordsworth was a strict Anglican all his life and Anglicans like him greatly feared the impact of Catholicism on politics and social life. Nor did he support the 1832 Reform Act, the first to extend the franchise among English men (though only within narrow limits). This is typical: the youthful supporter of revolution becomes an adult conservative when changes in family, personal and professional life make political, economic and social stability desirable. In even simpler terms: one becomes more conservative the more one has to lose. Klavan contends, nonetheless, that Wordsworth regained part of his revolutionary fire later on. In 1846, aged 76, he gave his support to the democratic Chartist movement, though warning that rioting would not help the cause. By then, of course, he was a gentleman pensioner of leisure, finally free to indulge his youthful ideals. And the times were no longer Romantic but Victorian.

In 1842 Wordsworth was given a Civil List pension of £300 a year; he then resigned his position as Stamp Distributor. The next year, 1843, he was appointed Poet Laureate, aged 73, replacing Robert Southey and after having received honorary doctorates (from the Universities of Durham and Oxford) in the late 1830s. In the last years of his career as a poet, at the height of his celebrity, Wordsworth worked on his massive autobiographical poem ‘The Prelude’, which was only published posthumously in 1850 by his widow Mary. Actually, Wordsworth had started writing this autobiographical poem back in 1798, the year when, aged 28, he published Lyrical Ballads with Coleridge, and he kept adding blank verse lines to it until it grew to 14 books, a total of 7863 lines. This does not mean that the poem covers Wordsworth’s whole life–as the title suggests, it deals mainly with its first decades and it is, in essence, a poem on the ‘Growth of a Poet’s Mind’, as the subtitle announces. There is complete critical consensus that ‘The Prelude’ is Wordsworth’s greatest poem but you should read the comments by readers at GoodReads before deciding whether you want to read it. I must confess that I have failed to find a valid reason to go through so much verse and no, I’m not ashamed to make this confession even though I teach English Literature. Some other time, perhaps.

No Romantic poet is complete without an oddity in his biography and in Wordsworth’s case this is supplied by Dorothy’s constant presence. There were three other siblings (John drowned at sea in 1805) but she and William, born only one year earlier, seem to have been constant childhood companions until their father died in 1783. The girl, aged 12, and the boy, 13, were then sent to the homes of different relatives and were only reunited in 1795, when she was 24 and he 25. They never separated again, sharing their diverse homes even after William married Mary. Many have read their relationship as incestuous and a few sexist scholars have even blamed hysterical Dorothy for it, presenting her as a needy woman who hindered William’s path with her demands. This sexualized view of their siblinghood is, I think, plain silly and only reveals that sex occupies too much space in our minds. William and Dorothy were comfortable with each other, they shared many ideas and observations also present in his poems (as her journals have proved), and were perfect companions at a time and in a society in which a man and a woman could enjoy friendship in total freedom only as siblings. Mary welcomed her sister-in-law to the family home and the couple took good care of Dorothy when, in the 1830s, she became an invalid. She died in 1855, outliving William by five years. It’s a bitter-sweet tale.

A surprised GoodReads reader declares, “Turns out I like ‘The Prelude’ a lot. But I still wouldn’t invite Wordsworth to a party at my place”–yet another sign of his uncoolness. Wordsworth might then be a category unto himself: the kind of author you profoundly respect but do not enthuse about; the type you admire because you can see the man is making an effort. He is not Milton–I still haven’t met a person who would like to meet Milton for coffee, much less at a party–but nor is he, definitely, Blake. He is Wordsworth.

Coolness moves in mysterious ways.

I publish a post every Tuesday (follow @SaraMartinUAB). Comments are very welcome! Download the yearly volumes from: http://ddd.uab.cat/record/116328. My web: http://gent.uab.cat/saramartinalegre/


February 25th, 2019

Tomorrow I’ll be introducing my class in ‘English Romantic Literature’ to the pleasure of discovering William Blake (1757-1827). I haven’t taught this course in fifteen years and so I needed to re-discover Blake myself, to re-learn the basics I must transmit. Within limits, careful as usual not to let myself be carried away and spend on preparation five times the three hours of lecturing, or more. We lead hectic lives and even the most interesting tasks need to be restricted, or else we risk producing no new research at all.

I’ll mention first a 1995 episode of The South Bank Show devoted to Blake, available on YouTube (for instance https://www.youtube.com/watch?v=Qvx0on0Hj2I). The documentary is presented by novelist and biographer Peter Ackroyd, not by chance: he had just published his well-received biography Blake, part of a long list that began in 1863 with Alexander and Anne Gilchrist’s pioneering work (of which more later). The biggest surprise in this documentary is, no doubt, the presence of notorious American Beat poet Allen Ginsberg singing Blake’s poems as he plays a vintage harmonium. This, he explains, is how Blake would have presented his poems to an audience, since for him the figure of the bard of ancient times was essential. Funnily enough, even though Blake’s best-known works are Songs of Innocence and Songs of Experience, I had missed that the word ‘songs’ has a literal meaning. Leaving this aside, the documentary, about 50 minutes long, made me wonder what the point of classroom lecturing is in the times of YouTube and, generally, the internet. My lectures will borrow, after all, from online sources, including Wikipedia and Google Books. And, of course, the simply splendid Blake Archive.

In my time as an undergrad there was no internet, strange as this may sound to current undergrads. I was very lucky, nonetheless, because having heard about Blake in some introduction to English Literature, I could see some of his original drawings in a stunning room of London’s Tate Gallery. This was in the mid-1980s, before Erasmus, when every girl student who wanted to learn English spent a year as an au-pair. A decade later, in 1996, ‘La Caixa’ staged a major exhibition of Blake’s works in Barcelona, which was a marvel to see. Nothing compares to seeing the originals but the Blake Archive (www.blakearchive.org)–founded also in 1996 as a joint international project by the Institute for Advanced Technology in the Humanities and now run by the Carolina Digital Library and Archives–has digitised practically everything by Blake that has survived the ravages of time. This is a great little miracle, considering that he made and sold very few prints of his major works and that his best-selling work sold about 30 copies.

Browsing through the Archive, I wished I could be free from the onerous task of assessing my students–I would gladly give all of them an A+ if they promised to read the Romantic poems I have selected for study and spend a few hours enjoying online wonders like the Archive. Honestly: how can an exam or any alternative exercise replace the joy of admiring Blake’s work? What can I possibly say that makes a lecture more exciting? I could, naturally, use my classroom time to show a selection of what is in the Archive (or The South Bank Show episode) but public sharing doesn’t work. Somehow, one must be alone to enjoy the feeling of personal discovery; ideally, the teacher’s task should only be pointing out where to find the best resources. On Blake or anyone else.

Some places where Blake is present are obvious (Wikipedia!), others unexpected. Three comments on the YouTube channel offering the documentary named the videogame Devil May Cry 5 as the reason why these viewers were interested in Blake. As it turns out, in Capcom’s new release of their popular videogame, just launched this week, there is a new character called V, who is fond of quoting Blake. This is great but nothing new: William Blake often crops up in popular culture. For instance, he is a central element in the first Hannibal Lecter novel by Thomas Harris, Red Dragon (1981), made into a film as Manhunter in 1986 and filmed again in 2002. Harris’s serial killer (not Lecter but another man) is so obsessed with Blake’s series of watercolour paintings (1805-1810) for the Book of Revelation that he has a tattoo of the red dragon covering his whole back (he even tries to eat Blake’s original). Check Google for images of English actor Ralph Fiennes made up in this way. I wonder what Blake would think!

The South Bank Show episode does not explain why William Blake, an obscure artist few knew in his own time, has become such a ubiquitous presence. In fact, Blake is remembered because of the biography by Alexander Gilchrist, which I have mentioned before. A reference on the Wikipedia page led me to an excellent article by top-rank biographer Richard Holmes, “Saving Blake” (The Guardian, https://www.theguardian.com/books/2004/may/29/classics.williamblake), actually a segment of the introduction to the 2004 re-issue of Gilchrist’s work, The Life of William Blake: Pictor Ignotus. ‘Pictor Ignotus’ means ‘unknown painter’ and we must wonder why the publisher Macmillan decided to issue a volume about someone who had been largely forgotten by the mid-19th century, with the exception of some keen admirers. Yet, this is how Blake survived into our times.

The story is worth telling, if only briefly. Gilchrist, born one year after Blake’s death, was a trained lawyer but also a budding art critic. He published a biography of the minor artist William Etty before embarking on the two projects that occupied the rest of his brief life: his marriage to Anne Burrows and his work on William Blake–whom he discovered accidentally thanks to a second-hand copy of The Book of Job. Gilchrist’s subsequent research involved interviewing people who had met Blake, and others interested in him, among them the leader of the Pre-Raphaelite movement, Dante Gabriel Rossetti–a collector of Blake’s work. No wonder, since Blake necessarily appealed to the neo-Medieval spirit of the Pre-Raphaelite Brotherhood. Gilchrist succeeded in completing his investigation and signing the contract with Macmillan but he died of scarlet fever passed on by his daughter. His distraught wife Anne, a major collaborator in her husband’s work, completed the manuscript, attributing to herself only editorial tasks rather than co-authorship. William Rossetti, Dante’s brother and a major art critic, endorsed the biography, which found a receptive audience. This success started the process of canonization by which Blake eventually came to be studied both as an artist and a poet, and also his seeping into popular culture, with the endless lists of allusions.

Gilchrist’s many sacrifices to rescue Blake from oblivion raise an important issue: would we remember Blake without him? Or would someone else, inevitably, have fallen in love with his artwork and rescued it? How many other obscure artists are waiting to be rescued in similar ways? And how can the Pictor Ignotus of one time become the star of a later one? It is usually claimed that this happens because some artists are ahead of their times but in Blake’s case this is a peculiar claim. Blake is perhaps best explained as a belated Old Testament prophet rather than as a modern artist, though it is true that his Romantic pledge to follow his own course rather than the art of his time, and the niche he carved for himself as a unique engraver using his own technique of relief etching, make him closer to us. He was his own person, and this is something we appreciate. As for his heavily religious writing, we tend to downplay it (and woefully misread it), preferring to enjoy, on the whole, the mystery of his muscular figures and his alluring, vibrant colours.

Here’s a pocket biography. Blake was the child of a middle-class Soho hosier, briefly attended school, being a difficult child, and was then home-schooled by his mother. Between the ages of 10 and 14 he attended drawing school, while continuing his domestic education by reading voraciously (the Bible was a central text for him, as was John Milton). At 15 he was apprenticed to the engraver James Basire, formally becoming a professional engraver at 21, even though he was always employed by others, mainly as an illustrator. He married Catherine Boucher in 1782 and the pair enjoyed a happy union for 45 years, marred only by the birth of a stillborn child and Kate’s subsequent inability to bear children. She was a most valuable collaborator, to the point that Blake trained her as a fellow engraver, besides caring for her husband on the domestic front with no complaint about their poverty. Both worked very hard to turn Blake’s visions and ideas into the illustrated books that transmitted them to posterity (thanks to Gilchrist!). Incidentally, Blake and Kate spent their lives mainly in London, and appear not to have travelled at all (or very little).

Blake had proto-anarchist ideas, which we celebrate today. He argued that individuals should be free to enjoy life without being fettered by any tyrannical Government or Church. According to him, personal evolution should be encouraged, sexuality fully explored, the body respected as a source of perception indivisible from the soul. Because of these tenets we trick ourselves into believing that Blake is of our times, which he was not. From the age of four, the man constantly had visions of God, angels, spirits, the dead and even the Devil–that was the reason why he spent such a short time in school. Most contemporaries believed him mad, whereas now we tend to call him depressed or, less gently, schizophrenic. Actually, he had the kind of self-mythologizing imagination that others like J.R.R. Tolkien also possessed, with the difference that Blake drew no separation between rationality and his visions. He was not insane at all, just a man comfortable with a kind of mind we now call pathological but that used to be called mystical. Perhaps only Biblical New Agers can truly understand Blake. A New Age approach, however, is not encouraged in our ultra-rational Literary and Cultural Studies.

In many senses, therefore, we profoundly misunderstand Blake. He is, among the artists we insist on calling Romantic, possibly the most resistant to science, having made Newton his main nemesis. In Newton’s mechanistic universe there is no room for spiritual visions, which have been denied by science since the Enlightenment. As a child of the 18th century, Blake seemingly sides with the writers of Gothic fiction, who claimed there must be something beyond stark reality. The difference is that whereas they imagine evil monsters–frequently explained as illusions rather than actual supernatural occurrences–what Blake imagines is not scary but comforting. He claimed to speak with his dead brother Robert on a daily basis, just as widowed Kate later claimed to speak daily with him after his death. Blake is an in-your-face example of a pre-Enlightenment imagination fully aware of Enlightenment rational restrictions, in a way that his Medieval predecessors could not be. It was easy to call him a madman, but also convenient, because accepting that his visions were not a product of disease would be too scary–too Gothic!

Tolkien wrote that although he had been fantasizing about Middle-earth for as long as he could remember, he had no notion of having invented any element in it: when he wrote he felt as if he were being told what to write. Though a strict pro-establishment Catholic, and not an anti-establishment Dissenter like Blake, Tolkien also turned belief into mythology. I’ll argue, then, that individuals with a strong sense of belief are more prone to accepting the existence of other universes, which the rational Enlightenment denied. This may sound like something borrowed from Carl Jung but I truly think that adamantly denying other possible universes is… irrational. I’m not myself a believer in God the patriarch but I do suspect that we live in just one of many possible universes, a view many scientists support today given what quantum physics is teaching us. We make enormous efforts to convince ourselves of the coherence of our world-view but perhaps individuals like Blake–and the many others after him who tap directly into their imaginations to create the parallel universes we enjoy in fiction–are simply quite at ease with the idea of this numinous elsewhere. We fear monsters as children and are taught to suppress that fear as adults but I always say that seeing an angel would be far scarier than seeing a monster, particularly if you’re not a believer. This is why we need to convince ourselves that Blake was a lunatic, though one whose art is wonderful.

Teaching the basics of any artist’s work is, then, reducing a person to trite, manageable slogans. Once a madman, later Pictor Ignotus, then a Victorian favourite and currently both canon and legend, William Blake reminds us that we cannot condense any living person, and much less an artist, into a matter for two lectures, a Wikipedia entry, a documentary, or a biography–no matter how enthusiastic. Yet, this is how we learn and teach: hurriedly, in little pills, and trusting that one day students will have more time to take pleasure in names like Blake rather than just take credits for a course.



February 18th, 2019

The volume that interests me today is a novel: No Mean City (1935), ‘the classic novel of the Glasgow slum underworld’ as the cover of the Corgi edition announces. Apparently, this novel has its origins in the short stories written by the unemployed Gorbals baker Alexander McArthur. They were polished for publication by journalist H. Kingsley Long, a choice made by the original publisher, Longmans, Green & Co. The middle-class target readership might explain the unique narrative style of No Mean City, which mixes melodramatic, violent action with pseudo-ethnographic comments on working-class life in Glasgow’s most notorious neighbourhood, the Gorbals, between 1921 and 1930. Despite the abundant Lowland Scots dialogue, this does not feel like a novel primarily addressed to Scottish readers, though it might well be that I’m mistaken and that the patronizing tone adopted is intended to inform anyone outside the Gorbals of its degraded social situation, whether they live in Glasgow, in London or elsewhere.

In essence, the plot concerns the efforts of Razor King Johnny Stark to maintain his reputation among the local gangs by getting involved in a variety of brawls, though the novel also narrates the failure of his brother Peter and of his friend Bobby to climb socially upwards beyond the Gorbals. McArthur and Long portray a lifestyle which is absolutely depressing for there is no way out of the violence, the squalor, the economic insecurity, and the general injustice that keeps these characters tied to their sordid background. The vicious circle depicted is easy to understand: poverty results in working lives that begin too early, with no chance of an education; boys and girls marry young and soon have too many children, which results in poverty like their parents’. Finding decent housing is simply impossible because slum landlords charge outrageous amounts for appalling accommodation–if that’s a word to be used in this context–with unhygienic bathrooms shared by dozens. Ill-health is general. Not even youth offers a respite. With no prospects at all, girls try to catch a husband as soon as possible to leave their exploitative, low-paying jobs and boys try to find in gang violence and heavy drinking the enjoyment which work does not offer. All this is well-known but it is still shocking to find it described in so much detail.

Critics and readers since 1935 have complained, precisely, that the detail is lurid and that the plot veers in the last third of the book towards the sensationalist–and I agree. The novel loses interest and quality the moment the marriage of Stark and Lizzie begins disintegrating and the authors show more interest in how other people take part in their sex life than in why they live in that miserable way. The sub-plot dealing with Peter narrates how his budding political awareness–stimulated by his voracious but haphazard reading–results in his leading, though unwillingly, a more than justified workers’ protest. This ends up costing him his job and, hence, his chances of accessing the lower middle class. Yet, No Mean City is not at all a political novel, nor a text that seeks to denounce the situation of the characters in any way: it is just a vivid representation of a condition that seems impossible to solve; the authors demand no reaction from middle-class readers except curiosity about the human zoo that the Gorbals appears to be. They offer no pity for any of the characters, which is understandable in Johnny’s case but not so much in Peter’s and Bobby’s. Much as in its main successor, Irvine Welsh’s Trainspotting (1993), the main aim seems to be to épater les bourgeois.

No Mean City came to attention again in 2010, on its 75th anniversary, amid some controversy about whether it should be kept alive at all. An interesting article by Dave Graham (https://uk.reuters.com/article/uk-britain-glasgow/glasgow-fights-no-mean-city-tag-75-years-on-idUKTRE6042N520100105) explains that Johnny Stark’s “fondness for slashing his adversaries’ faces with razors” is still a problem today. As local police officer Carnochan warns, “If you bring a child up in a war zone, you’ll create a warrior. That’s what we’re doing. I’ve been a cop for 35 years and I can tell you, you can’t arrest your way out of this”. Actually, Glasgow Police and the Town Council authorities have started a quite successful programme to curb, if not eradicate, the stabbings that have replaced the slashings (in my time in that city I learned that a ‘Glasgow smile’ is a knife cut that opens both corners of the mouth…). The authorities are doing something quite simple but effective: having the gang members talk to each other. Most boys simply do not know why they are perpetuating a type of patriarchal masculinity that only finds satisfaction in hurting other equally disempowered young men, and women, and talking seems a good way to start deconstructing it.

Two issues caught my attention in particular when reading No Mean City. One is the ambition for ‘reflected glory’ that leads women like Lizzie to encourage men like Johnny onto their violent path, regardless of the dangerous consequences. The women were (and are) mostly the victims of Johnny and his ilk and, indeed, in the novel they are beaten and raped as he pleases. Yet they are loyal, though only for as long as it is convenient. To my surprise, Johnny’s mistresses, even his wife, take other lovers without concealing this from him; Stark is so certain that his reputation will attract other girls that he does not care for any in particular (except briefly for Lizzie). The women may be disempowered in this patriarchal regime but the authors remind us that they have some domestic power derived from the unstable economy: the men are often unemployed and depend on their women; however, they feel no qualms about letting the girls pay for drink, entertainment, or household expenses–even for their upkeep. There is a kind of equality combined with inequality, though it is also evident that the couples’ social standing depends on the husband, which is why the wives constantly judge their men (to their faces) on whether they are ‘manly’ enough to get better jobs.

The other issue is ‘the impossibility of imagining something better’. There is an obsession in No Mean City with specifying how much money each character earns at each job they take, accompanied by frequent comments on how being on the dole often pays better than taking the worst jobs. The working classes are presented as tremendous snobs who classify neighbours by their unkempt looks and clothes with more precision than any middle or upper-class person might use. At the same time, the jobs the characters aspire to are a limited selection–the best-paid men are Lizzie’s lover, a foreman at the bakery where they work, and Peter’s father-in-law, an usher at a ‘kinema’. Bobby manages to earn quite a nice amount of money as a professional dancer, partnering with his girlfriend and later wife Lily, but their private lessons also include sexual services for the richer patrons. Not only are upper middle-class professions, such as medicine or the law, totally absent from the horizon of the Gorbals’ people, but so are the professions by which many working-class individuals have improved their lot: primary school teaching or nursing for women, office work or specialized positions as mechanics for men, among others. Blue-collar life is not even guaranteed in the Gorbals, though it is obvious that those with higher aspirations (mainly Peter) are trying to copy more affluent neighbours. Of course, those with no chance of upgrading their lives hate the better-off workers.

School is never mentioned, either, in No Mean City, and this seems a glaring absence. Upward social mobility was (and is) encouraged in working-class schools by teachers: to begin with, theirs might be the first and only example of employment based primarily on mental work that working-class children ever see. Things have changed very much since the 1920s of this novel but I can tell first-hand that contact with teachers, particularly those in possession of university degrees, is essential in awakening the imagination of the less privileged children to social mobility. In middle-class families this is very different: children are surrounded by relatives with socially respectable jobs and the family’s income allows them to take higher education for granted. This does not mean that middle-class children face no battles but it does mean that there are some battles they needn’t face. There is an abyss between a child who wishes to be, say, a lawyer in a middle-class family, and who perhaps has lawyers in the family, and a child with the same vocation whose parents are constantly in and out of employment (or even always out) and who, besides, knows no one with a university degree.

Obviously, primary and secondary school teachers are also often the ones to help children see beyond their family’s horizon of expectations, suggesting specific professional training, further education and even careers. Apart from them, working-class kids with a wish to pull themselves up by their bootstraps started dreaming of a chance to leave the Gorbals–or their local equivalent–thanks to each new 20th-century medium. Movies, the popular press, radio, television and the internet have made the representation of desirable middle-class lives a constant in the cultural panorama of the working classes. Some may bemoan that the glorification of the middle classes has destroyed working-class culture but, as the authors of No Mean City claim, the truth is that, given the choice, workers prefer being middle-class. This is what Marx and Communism, generally, woefully misunderstood.

What I’m saying is that, though this may sound trite, No Mean City has taught me again a lesson I had forgotten: you need imagination to leave a working-class background behind, and it must be awakened somehow. We take it for granted that upward social mobility is there for the taking but it is not; and though consumerism seems to fulfil the function of stimulating an urge for what the Victorians adored, namely self-improvement, this is still very limited. I’m well aware that in 2019 we are at the end of an attempt to allow the working classes to change their prospects thanks to the welfare state. Even the children who got university degrees are unemployed or find only bad-quality employment, while the children of the upper classes continue enjoying privileges based on their families’ networking as the middle classes are destroyed. I hear no one, however, truly discuss how social mobility works, if at all, and there is total silence about children born to affluent parents who end up being working-class by income, if not by background. There is much talk about how the current generation will have a worse life than their parents but the issue is not addressed from the point of view of how much actual upward mobility there has been, in Scotland or anywhere else.

If you ask me, I’d say very little–the upper classes have noticed that too many working-class individuals have dared imagine a better future through the education provided by the welfare state. The way to limit those dreams is by a) cutting funding for public education; b) putting as many obstacles as possible in the way of publicly-educated persons; c) forcing families to spend so much on housing that nothing is left to improve their children’s chances; and d) pretending anyone can become an overnight success on YouTube, Instagram, etc., while allowing 1% of the population to enjoy 99% of all the wealth on Earth.

Imagining a better future might not be enough but it’s a beginning.



February 11th, 2019

I’m in the middle of reading Jon Savage’s Teenage (2007), a study of how youth was socially constructed between 1875 and 1945 in the USA, the UK, and some other European countries. We usually assume that the ‘teenager’ appeared in Western culture in the 1950s but the first thing Savage’s volume teaches is that this word actually started being used in 1944, in the USA, as a sort of harbinger of what youth would be like after an Allied victory in WWII: a time to enjoy yourself, and all the new pleasures of total consumerism, no matter what class you belong to. I remain, in any case, puzzled and amused by how the English -teen suffix was used to create the age category 13-19 quite artificially. Today there is talk of ‘teens’ and ‘tweens’ (tweenager!) for 10-14 kids, though I’m not sure which category the 20-29 young adults belong to anymore. I don’t hear the word ‘adultescent’ so often these days, perhaps because now everyone seems to be a ‘millennial’ up to 35 (and young up to 40!).

I would say that in Spain we use predominantly ‘pre-adolescente’ (10-14) and ‘adolescente’ up to 21, which agrees more or less with the old legal majority (lowered to 18 in 1978). As it happens, the word ‘adolescence’ is also an American creation: the contribution to our essential vocabulary of psychologist G. Stanley Hall (1844-1924), a staunch admirer of Sigmund Freud. His 1904 book (vol. 2, 1907), Adolescence: Its Psychology, and Its Relation to Physiology, Anthropology, Sociology, Sex, Crime and Religion, is the first instance of the use of this key word.

Before the invention of adolescence, Savage explains, childhood simply ended at adult age, around 18, when youth began (the Victorians did use the label ‘young adult’, now used for different purposes in YA fiction). For Hall, childhood ended, rather, at 13-14, with puberty, and adolescence ended between 21 and 25 (presumably when you were ready to marry). One thing that bothers me is that although Latin adolēscēns means ‘growing up, maturing’–hence its use by Hall to define the transitional period from childhood to adulthood–it also originally meant ‘lacking’, which is where the Spanish verb adolecer comes from. The RAE dictionary warns that this verb means “having some sort of defect or suffering from some malady” and not “lacking”, but the point I’m making is still valid: an ‘adolescent’ is, whether Hall intended it or not, an individual missing an indefinite something–arguably maturity. I’ve never really liked the word for that reason: it seems awfully patronizing to me. Even ageist, in current parlance.

It must be recalled that childhood is actually a late 18th century invention, fully established in the Romantic period (or Regency period, if you prefer) and that, of course, the cult of youth is a product of the same era. Before that time, basically the ages between 0 and 17 were seen as a long preparation for adulthood, which could start as early as 10 (or earlier) for working-class children employed full time, apprenticed in some cases already at 7. In the early 19th century adulthood, then, was assumed to begin as soon as an individual entered the marriage market: around 16 for the girls and 20 for the boys. Naturally, the possibility of enjoying childhood and youth depended on each family’s income–in upper-class families, the girls would also be considered marriageable adults by 16 but the men enjoyed a far more prolonged youth, including a university education, travelling and perhaps professional training (in business, the law, the military, or politics) up to the age of 30.

From my constant repetition of the word ‘marriage’ and similar, you might get the impression that weddings used to be the main rite of passage into adulthood–or a specific age barrier in their absence: if, as a woman, you were not married by 30, and, as a man, you were still single by 40, then you became officially a spinster or a bachelor, that is to say, a celibate adult. But I digress. Actually, the factor that introduced all the changes in the way age is socially constructed is education.

It seems quite clear from Savage’s comments that childhood was invented when the need for a prolonged primary education was understood (first by upper-class families, eventually by the British state in the 1870s). Likewise, the invention of adolescence is a by-product of the American high school system. Obviously, the biological changes leading to puberty have been a constant in the life of Homo Sapiens for many thousands of years but how each culture reads them varies enormously. In American culture, puberty started to overlap with secondary education at the turn of the 19th century into the 20th and, so, Hall could come up with the idea that, in essence, an adolescent is someone being educated beyond primary school, and up to college graduation (even MA level).

Besides education, pleasure took centre stage. The four decades between 1904 and 1944 gradually established a new understanding of youth, based on a sense of entitlement to pleasure (for boys and girls), beginning with the upper classes. Young people were socially powerless, which is why they (mostly the men) had to go through the generational massacre that was WWI; they reacted against this appalling patriarchal abuse by getting rid of their late Victorian and Edwardian shackles. I still marvel that couples courted up to the early 1920s in the presence of a chaperon or that parents could choose dates for their daughters when the concept was invented in America. We are not fully aware of what the 1920s meant in terms of a youth revolution–one possibly deeper in many senses than that of the 1960s, considering what came before, though, of course, limited to a social elite. The post-1929 Depression decade of the 1930s seems sedate and conventional by comparison. I need not explain what WWII did to the young all over the world, especially the men.

The novelty of the late 1940s to mid 1950s is that the new ‘teenager’ could be found in any social class, whereas it seems to me that the adolescent is, in contrast, a middle- and upper-class figure. To be an adolescent you need a certain educated sensitivity and leisure to ponder in true post-Romantic fashion the unfairness of life and of the adults around you. If you’re young but busy working eight to ten hours a day, you may still possess that sensitivity but far less time to engage in self-centred adolescent thinking. What you do is reinvent the concept of leisure and transform it into the time when you enjoy your hard-earned wages, either in imitation of what richer kids do or generating your own working-class version of fun, quickly catered to by the entertainment industry. Hence, the teenager.

Savage hardly ever takes into account how different youth and adolescence have always been for boys and girls–this is my main complaint against his book. Yet, apart from the constant difficulty of fixing age boundaries for each period of life since the late 19th century, Savage highlights a recurrent problem: society’s inability to control unruly young men, particularly of working-class background, whether they’re called teenagers or adolescents. Many complaints against the gangs of uncontrolled, second-generation, Irish or Italian youths in early 20th century America are dead ringers for similar fears of non-white gangs in Britain now in the 21st century. This connects with my previous post about Dick Hobbs’ Lush Life: Constructing Organized Crime in the UK, a book in which he presented working-class male youth as a phase of unruliness before the acceptance of adulthood set in. Or boys will be boys, and the rest of us must put up with them, beginning with girls their own age.

What tends to be forgotten in most studies of youth is that the idea of youthful rebellion is specifically masculine: the late 18th century and early 19th century was a time of intense masculine revolt against patriarchy, in the most traditional sense of the rule of the father. The French Revolution of 1789 and the Napoleonic Civil Code of 1804 resulted in new legislation that, while still binding women closely to their male legal tutors (father, husband or even son), allowed young men much more leeway than in the past. Fathers used to have total authority over sons, including matters of career choice or even marriage. Young men of the Romantic period and later steadily eroded that authority at the cost of eventually having to accept a loss of their own patriarchal authority when they became fathers. This seems on the whole positive but has an underside.

In short: the unruliness of young men is the collective price we are paying for diminishing the total authority of the patriarchal father. Western society has failed to find a better replacement for that authority–or, found it but lost it. Gentlemanliness worked for a while as a desirable way of having young men stick to a positive masculine ideal that did not undermine their personal autonomy; yet, it was lost in WWI, and we don’t know how to appeal to unruly young men on the basis of principles that instil respect for others. Hence, the cycle of recurrent youth violence which Savage (and Hobbs) describes: the adult men who have become fathers after going themselves through a violent youth lack the authority to restrain their unruly sons–in the worst cases, they have never matured, do not participate in their sons’ education, or even celebrate the boys’ misbehaviour. This is why, I insist, we need to see adolescence and the teenager as heavily gendered social constructions, paying specific attention to how and why youth rebellion becomes anti-social criminality.

Youth, then, changed around the beginning of the 20th century to be re-invented by Hall, on the basis of the Romantic cult of youth, as adolescence–a time for personal introspection and the construction of the self in opposition to parents. It became next, Savage explains, beginning in the 1920s and culminating in the 1950s, a time for hedonism and the rise of the teenager. This was followed eventually in the 1960s and 1970s by sexual liberation. It seemed, then, with third-wave feminism demanding total equality, that the 1990s would be the beginning of the best of worlds for youth. Yet, the stories we tell in the 21st century are either the sugary nonsense of John Green and company, or grim tales connected with social network horrors (do see Aneesh Chaganty and Sev Ohanian’s visually amazing film Searching).

Perhaps adolescence and the teenager are no longer useful concepts to understand how the young live, and it is urgent to hear what they have to say about themselves. We simply cannot wait to read about them in the History books written in the second half of the 21st century.



February 4th, 2019

I will soon start teaching Mary Shelley’s Frankenstein and although the best time to revisit this classic was last year–the bicentennial of its original publication–2019 is also a good moment to re-read it, for this is the year in which Ridley Scott set his masterpiece, Blade Runner (1982). Novel and film are closely connected since Blade Runner, though based on Philip K. Dick’s bizarre SF novel Do Androids Dream of Electric Sheep? (1968), is one of the myriad texts descended from Frankenstein. Mary Shelley was the first to ask, in earnest, ‘what if science could generate powerful monsters that could escape human control?’ and this is the question that frames Dick’s and Scott’s work. And our year 2019.

I have recently reviewed an article by a young researcher in which I found some confusion regarding the use of the concepts ‘post-human’ and ‘cyborg’, and I’ll use Frankenstein to clarify them before proceeding with some further comments. Before I forget: I’m using the Oxford World’s Classics edition (the 2008 reprint) with my students but I was aghast to see that the prologue and the bibliography are the work of one Prof. M.K. Joseph, who died in 1981. I immediately e-mailed the Literature editor at Oxford UP to suggest that they commission a new introduction by someone who truly understands how Mary Shelley’s mistresspiece connects with current, urgent issues and, generally, with our science-fictional present. We’ll see if they answer.

Brian Aldiss famously celebrated Mary Shelley in Billion Year Spree (1973) as the mother of science fiction, stressing in passing that the Gothic narrative mode is one of the foundations of sf, at least of its more technophobic branch. Re-reading the novel now, at the beginning of 2019, and possibly for the fifth or sixth time (I lose track), a few things strike me as singular. One is that Mary’s tale is a frontal attack against male ambition but not necessarily a feminist text; the other is that she understood, long before we had a name for it, what the post-human is.

The feminist question is obvious enough: Victor’s horrific ordeal is framed by the letters that explorer Robert Walton sends to his sister Margaret so that we see how useless men’s pursuit of glory, honour and fame is. The alternative lifestyle which Mary recommends is, nevertheless, one of sedate domesticity, in which women occupy a traditional position as dutiful, pre-Victorian angels in the house.

Margaret, the addressee of the letters by Captain Walton that frame Victor’s and the monster’s testimonials, stands for married bliss in safety and domesticity. So does Elizabeth Lavenza, Victor’s adoptive sister and doomed wife, the monster’s victim; as such, she is the embodiment of the dangers that men bring into the peace of the hearth but also of total submission. Mary, the daughter of Mary Wollstonecraft–the woman who wrote A Vindication of the Rights of Woman (1792), a text which gives education a central position among those rights–never mentions Elizabeth’s right to attend university, as Victor and his friend Henry do. She is raised to be Victor’s wife and no event in the awful tragedy that unfolds diverts her from this path, even though she could have been much better company for Victor if only she had had some inkling of his overambitious scientific pursuits. Mary Shelley simply offers no critique of the patriarchal script written for Elizabeth by her adoptive parents and by Victor himself, even though the author is adamant that there is something very wrong in men’s extra-domestic pursuit of glory and, using Barbara Ehrenreich’s phrase, their ‘flight from commitment’.

I partly agree with Mary’s critique of the male sacrifice of domesticity–possibly what she endured as Percy Shelley’s wife–because it is often based on total selfishness. At the same time, I fail to see in which ways the world would be a better place if the many self-driven individuals (mostly men but also many women) had limited themselves to raising families. There must be a middle ground.

Reading David Grann’s excellent non-fiction account of British explorer Percy Fawcett’s suicidal search for the lost City of Z (the title of the book), I often thought that male wanderlust must be evidence of ingrained insanity. Yet, so many women also feel the drive to fulfil their ambitions even against all reason that it cannot simply be a matter of gender but of something else that makes domesticity secondary. Why someone with small, dependent children would volunteer to travel to Mars, and possibly never return, baffles me–not so much the need to fulfil the dream but the aspiration to combine ambition and family. This is not, of course, Walton’s and Frankenstein’s situation, and perhaps what Mary Shelley was saying is that excessive ambition is incompatible with family life, and even with life itself. But is this right? If she was imagining some low-key, pastoral idyll as an alternative, she does not explain it. At the same time, most often the likes of Victor manage to create man-made horrors while keeping job and family well balanced, a possibility Mary does not contemplate, believing as she does that scientific discovery is a kind of youthful brain fever that overtakes everything else in the individual’s life. Again: there must be a middle ground.

How about the cyborg and the post-human? The monster that Victor creates is NOT a cyborg, for a cyborg is a creature, or person, whose body combines organic and inorganic materials. Donna Haraway had read sufficient science fiction when she wrote her famous 1985 tract ‘A Manifesto for Cyborgs’ to understand this, but it seems to me that very often students and scholars who use the word cyborg do not really know what they’re talking about, and simply assume that the word refers to any artificial creation.

Victor’s monster is artificial because he is not woman-born, but he is 100% organic. Frankenstein discovers first the principle of life, ‘the capacity of bestowing animation’, and decides next to build a superhuman body–if that body is functional, then he will apply himself to re-animating ordinary human corpses. Since preparing ‘a frame’ is difficult because of ‘its intricacies of fibres, muscles, and veins’, he decides to work at a larger scale: ‘As the minuteness of the parts formed a great hindrance to my speed, I resolved, contrary to my first intention, to make the being of a gigantic stature, that is to say, about eight feet [2.44 m] in height, and proportionably large’. Mary wrote before DNA was known, and before the first transplant of a human organ was ever attempted, and we need to read this part of Victor’s research as a necessarily preposterous tale; yet, the main point is that he is not using magic but science.

Once the creature is made–and in its manufacture 20-year-old Victor is amazingly successful–Frankenstein is appalled to see that he is an ugly thing: ‘His limbs were in proportion, and I had selected his features as beautiful. Beautiful! Great God! His yellow skin scarcely covered the work of muscles and arteries beneath; his hair was of a lustrous black, and flowing; his teeth of a pearly whiteness; but these luxuriances only formed a more horrid contrast with his watery eyes, that seemed almost of the same colour as the dun-white sockets in which they were set, his shrivelled complexion and straight black lips’. Nobody has really managed to give an accurate pictorial representation of the monster, who does not look at all like the bolts-and-nuts version played by Boris Karloff. Yet, I always say that Victor’s problem is that while he is a great anatomist and a wonderful surgeon, he is a disaster as an artist. A failure, if you wish, as a plastic surgeon. Had he been able to combine the selected features harmoniously, we would have a very different tale of celebrity, for everyone admires a beautiful being. As for his being a giant, well, being 7 feet tall is the foundation of Pau Gasol’s celebrity… The monster would be a highly valuable basketball player today!

Something that I missed in previous readings is how often the monster refers to ordinary human beings, and to himself, as members of different species. I am always correcting my students when they refer to the human race, for we are a species (Homo Sapiens) and not a race, and I was surprised to see that the monster is well aware of this crucial difference. The name Homo Sapiens was coined by Carl Linnaeus in 1758, long before Charles Darwin (1809-1882) contemplated evolution; many have commented on Mary’s allusion to Darwin’s grandfather, Erasmus (1731-1802), as the scientist whose discoveries in connection with electricity may have inspired Frankenstein’s use of an engine to ignite the spark of life. Yet, to me, the monster’s awareness of species difference is far more exciting.

When he demands an Eve from his maker, the creature argues: ‘I am alone and miserable; man will not associate with me; but one as deformed and horrible as myself would not deny herself to me. My companion must be of the same species and have the same defects. This being you must create’ (my italics). Of course, I’m cheating a little bit, for Mary mixes ‘species’ and ‘race’ indiscriminately and, thus, Victor decides to destroy the female creature he is working on, afraid that ‘a race of devils would be propagated upon the earth who might make the very existence of the species of man a condition precarious and full of terror’. He is horrified to see himself as the ‘pest, whose selfishness had not hesitated to buy its own peace at the price, perhaps, of the existence of the whole human race’. My point, though, is equally valid: Frankenstein is the earliest text to posit the possible replacement of Homo Sapiens with a man-made superior human species, that is to say, with a post-human species.

The difference between the cyborg and the post-human is, then, easy enough to understand: the cyborg has inorganic material in their body and cannot pass on any modification of this kind to their offspring; in contrast, the post-human is a different human species that will breed other individuals of the same species, and might wipe out Homo Sapiens if competing for the same environmental resources. As the Neanderthal disappeared, so might we, with the difference that this might happen out of our own mad shattering of the frontiers of science, if we go just one step too far and modify the human genome. Of course, neither Mary nor Victor knew about all this, but their ignorance is irrelevant (also an anachronism): the monster is a monster because we are terrified of the possibility that other humans might push us out. Victor, it must be recalled, manufactures not just someone who is big but also someone who is strong, extremely resistant to heat and cold, with an enhanced muscular capacity and, in short, far better equipped than Homo Sapiens to live on a radically post-human Earth.

The other novel I am teaching this semester is Jane Austen’s Pride and Prejudice (1813), published five years before Frankenstein. Indeed, Austen died in 1817, while Mary Shelley, then the young mother of her boy William, was busy writing her novel. I never cease to be amazed that English Literature could accommodate, in the same period, such thoroughly different styles of fabulation. And I wonder what would have happened if Elizabeth Bennet, instead of Elizabeth Lavenza, had fallen in love with Victor Frankenstein rather than with Fitzwilliam Darcy. Or if Darcy had kept a secret lab at Pemberley. Possibly, some kind of literary short-circuit!

How lucky we are, then, that we can enjoy both Mary Shelley and Jane Austen.



January 29th, 2019

I have just read two excellent volumes on organized crime in the UK, one by Alan Wright (Organised Crime: Concepts, Cases, Controls, 2006) and the other by Dick Hobbs (Lush Life: Constructing Organized Crime in the UK, 2013). Reading the latest novel in Ian Rankin’s long John Rebus series, In a House of Lies (2018), I was struck by the idea that the main villain, Edinburgh’s top gangster ‘Big Ger’ Cafferty, is quite an independent operator. He is very different, despite his remarkable power, from the classic godfather figure first named in Mario Puzo’s novels on the Corleone family (famously adapted by Francis Ford Coppola). Wright and Hobbs confirm this suspicion: there is no such thing as a shadowy crime syndicate with a rigid criminal hierarchy but, rather, a fluid, chaotic, predatory business environment that appeals to violent patriarchal men, and that thrives on the complicity of consumers willing to purchase illegal goods.

Wright is a former officer with the Metropolitan Police in London who later became a lecturer at the Institute of Criminal Justice Studies of the University of Portsmouth. Dick Hobbs is an urban ethnographer, and a native London East-Ender who, his web presentation claims, ‘came to academic work late having worked as an office boy, labourer, dustman and schoolteacher’. Although they come, then, from radically different positions–even possibly from enemy lines–Wright and Hobbs agree that we need to be cautious about the widespread idea of organized crime. Wright, who participated in the investigation of the crimes committed by the infamous Kray twins–whom Hobbs mocks as ‘marquee gangsters’–warns that ‘Because of the contested nature of the concept, the sense of organisation of each group needs to be understood within its specific social, political and economic contexts. Simplistic or one-dimensional theories simply do not work in this field’ (interview with Woodiwiss & Telford, 115). Hobbs goes much further, arguing that, basically, the only organized crime is that committed by liberal capitalism since the 1980s and 1990s, when it started destroying the industrial labour that allowed the working classes in Britain, at least for a very short time between the 1950s and 1960s, to maintain the illusion that a prosperity disconnected from illegality was possible.

Wright’s book is written from the side of the law and, though he is very much aware of how distorted notions of organized crime may even obstruct effective policing, he offers what Hobbs might criticize as a conventional, typically bourgeois view of criminality, in which the working classes are demonized. I must stress that Wright has also written extensively on what is usually called ‘white collar’ crime and what he more generally calls ‘business crime’, meaning the offences committed from inside the legal system. The Bankia and Gürtel scandals in Spain show, as we all know, that the higher your social position, the bigger your chances of committing massive fraud and theft. Since this kind of crime leaves much personal suffering in its wake, including suicide, but is not connected with bloody violence, the general public tends to downplay it, showing greater alarm at much smaller-scale cases which do affect specific individuals as victims of physically harmful crime. At any rate, the downfall of Rodrigo Rato is useful to explain why organized crime as such does not exist; rather, criminal organizations are formed ad hoc, opportunistically, to take advantage of certain new chances of making an illegal profit. In Rato’s case this came through the exploitation of uncontrolled areas of banking; in others far more often stereotyped as organized crime, even a positive change may bring in new criminal opportunities. For instance, the fall of the Berlin Wall in 1989 created an immense new market for drugs in Eastern Europe, closely connected with new patterns of conspicuous consumption. This meant an enormous expansion of the global drugs trade, which progressed as demand grew and not on the basis of a prior business strategy.

There are so many challenging ideas in Hobbs’ volume that it is hard to know where to begin. Perhaps with my surprise at the class-related seething rage that inspires the author who, while using an implied Marxist discourse, is also writing from the heart of his personal experience as someone born and bred in a working-class family–as I was myself. Hobbs is deeply annoyed that the in-your-face criminality of capitalism is not acknowledged as the breeding ground for the predatory (his word) behaviours so heavily surveilled by the police, the Government and the media. And he has a point, though I feel that Hobbs tends to downplay the harm done to specific victims, who should not have to see their lives ruined by that kind of constant criminal predation.

Hobbs is very specific about when and why the phantom of organized crime was raised. 1991-2001, between the fall of the U.S.S.R. and 9/11, ‘was the decade when the internationalization of American law enforcement found favour after the cessation of the cold war had opened up political and security space in Europe, and organized crime began its rapid assent [ascent?] in importance within political discourse’ (24). American paranoia, the nation’s penchant for alien conspiracy theories, its racist xenophobia and ‘the conflicting moral orders of urban America’ (35) gave rise to the consolidation of the concept as a way for the USA to ‘attempt to establish a global hegemony in which law enforcement became little more than a front for a government-backed central casting agency, stereotyping both heroes and villains’ (37).

If you catch Hobbs’ drift, it is also hinted that the establishment of terrorism as a common global enemy after 9/11 follows a similar pattern: the US Government casts whole nations and associations of rogue individuals in villainous roles that merit universal contempt, without looking at its own role in building the global policies from which the resentment expressed through the bombings emerges. And I’m NOT justifying terrorism any more than I would justify the violence of the intensely patriarchal world on which urban crime thrives. Here lies, actually, the main source of my disagreement with Hobbs: while I accept his point that classing ‘the transgressive tendencies and hedonistic drives that lie at the heart of so much urban life (…) into the emotive and politically charged category of organized crime implies a coordinated threat far more powerful, ominous, and extensive than is justified’ (38), I totally fail to see why we must put up with the brutal violence of illegality. Whether the threat comes from a world-wide mafia, supposing it exists, or from the individual mugger or rapist, the problem is the same: we live under the constant shadow of violence caused by the patriarchal endorsement of a sense of entitlement over bodies and minds which is not being addressed at all. The same applies to high-level corporate business.

Hobbs speaks of the ‘banal entitlements’ to the ‘lush life’ as the hypocritical source of much common criminality: the more affluent individuals promote ‘the ambiguity that is central to the urban milieu’ with their ‘demand for cheap goods and contraband, whether it be drugs, cigarettes, sex, or somebody to pick up the kids from school and do a little light dusting, that drives illegal markets, rather than the administratively convenient demonic catch-all of organized crime’ (235). Having been an au pair in Britain I know a little about the sense of entitlement to a good life that drives the not-so-affluent middle classes to exploit others below them. Yet, I should say that there is an enormous difference between exploiting the craving for escape of drug users and forcing women and children to sell their bodies and be enslaved for that purpose. A problem, then, which Hobbs does not address is the specific traits of the consumer market–in which exploitative men (not ALL men, but a certain kind) play a major role.

Hobbs claims, rather disingenuously, that although criminal culture is based on a clear machismo, women are also active in drug-dealing because it does not require the strength of other, more violent types of crime (though they need to be protected by henchmen) and because the flexibility of the market makes dealers ‘unlikely to be lodged into a permanent niche of some rigid patriarchal hierarchy’ (155). Yet, he traces a very clear line of descent from working-class normative masculinity to post-industrial predatory patriarchy. In the past, when Britain was mostly white, East End ‘youth’ (not men) could enter normative working-class life ‘as a reward for ending their brief dalliance with deviant subcultures’ (126), and be absorbed into a ‘parent culture which offered, via local and familial networks born of long-standing settlement, unionized manual work in dock-related employment, in local wholesale markets, in the building trade, or any of the multitude of proletarian options over which the local white population had acquired some measure of control’ (126-7). Now that this safety net is gone, the macho cultures ‘integral to traditional work cultures’ do not fit the ‘low-paid, non-unionized, post-industrial service sector’ and are, thus, ‘ideally suited to predatory illegal trading cultures’ (127).

In this sense, Hobbs’ strategy to defuse racism also falls into this paradigm of presenting working-class masculinity negatively. Before post-industrial Britain was born with Thatcher’s mandates (1979-1990), the preferred concept was not organized crime but the ‘underworld’, ‘a construct commonly used to describe violent parochial networks of working class men active across a range of illegal markets’ (59) between the 1930s and the 1970s. Incidentally, this is a concept very much alive in Ian Rankin’s novels, one which even ‘Big Ger’ uses. The ‘underworld’ was a mainly white, native milieu whose ethnicity was not commented on because it was assumed to be a non-issue. In contrast, Hobbs explains, each new wave of foreign migration has been connected–beginning in the 19th century with the Irish–to ‘a contamination of indigenous purity whose degraded origins lie far away in a dangerous and unfathomable “Otherstan”’ (48), a racist strategy which disregards how native demand shapes illegal trade. When the different white ethnic groups (Irish, East European Jews, Italians, the Maltese and even the French) were replaced by non-white migrants (Asian, Caribbean, African), the same pattern was repeated, but with the addition of increased racism. This was, of course, made far worse when the second and third generations, born in Britain, found themselves made doubly redundant socially by post-industrial job scarcity. Their opportunistic criminality was then read (from the early 2000s onwards) as a sign of the re-organization of crime along imported, threatening youth gang lines.

Hobbs, then, strenuously argues that since the early 1990s the Machiavellian corporate market forces constraining British society have ‘enabled the normalization of individualistic and predatory relationships, and a whole range of illegal trades has thrived as a means of acquiring and transacting capital in the void left by legal employment, organized labour, and the enabling institutions of industrial culture’ (234). The criminal networks are not master-minded from above but, rather, formed on the basis of a common ‘general entrepreneurial habitus’ (166) needed to operate in the realm of ‘unlicensed capitalism’ (Hobbs 232); in fact, the ‘fluid parameters’ of illegal trade follow ‘the chaos and fragmentation of deindustrialization’ (Hobbs 232). To sum up, it is a patriarchal Darwinian world at all levels, from corporate CEOs to teen street drug dealers.

I am then more than willing to accept that ‘organized crime’ is, like ‘global terrorism’, a convenient fiction articulated by authorities around the world, following the US example, to increase citizen surveillance. What I wonder is why both Wright and Hobbs, who show such great acumen, fail to see the elephant in the room: what lies behind the efforts to legally dominate and to illegally exploit persons all over the world is the same patriarchal sense of entitlement. This is not, I insist, common to ALL men, but to the minority that rules by means of legal and illegal violence, with the complicity of many women.

If, as Hobbs argues, the admiration of physical strength in working-class male culture directly connects the past respectability of the industrial worker with the present glamour of the young street gangster, then this link needs to be examined in depth. I would admit that the brutal violence perpetrated by the CEOs who have sent whole generations into the ‘no future’ announced by 1970s punk is the cause of the survival strategies implemented by those who live by illegal trade. But since I am not aware of any all-female criminal networks that traffic in male sex slaves, perhaps it is time to consider the issue of gender, most particularly the predominance of patriarchal male individuals in criminal circles.

Unless, that is, men are content to see working-class masculinity so generally equated with rampant violence, whether organized or chaotic. I hope not, for this is an insult to the many non-patriarchal good men in working-class families all over the world.

I publish a post every Tuesday (follow @SaraMartinUAB). Comments are very welcome! Download the yearly volumes from: http://ddd.uab.cat/record/116328. My web: http://gent.uab.cat/saramartinalegre/


January 22nd, 2019

My post today refers mainly to the article in El País, “La Universidad afronta la salida del 50% de sus catedráticos en siete años” (https://elpais.com/sociedad/2019/01/09/actualidad/1547044018_002135.html). As is habitual in the Spanish media, El País mistakes ‘catedráticos’ (i.e. full professors) for tenured teachers (i.e. those with positions as civil servants until they retire, but not necessarily ‘catedráticos’). The point raised is the same, though. By 2026, 16,200 of the current full-time university teachers (almost 17%) will have retired but, here’s the nub: the current hiring system will not allow the vacant positions to be filled. The Spanish university will shrink dramatically even though, in view of the constant demand, it might have to offer a high number of tenured positions in a rush. Most likely, as we fear, 2026 will be the date when many Departments disappear.

Allow me to comment on some points raised by the article, and then on some comments by the always angry readers of El País.

Point 1: the average age of teachers in the Spanish public university is 54. This refers only to full-time tenured teachers for, as we know, the average age of part-time associates is much lower (though also rising towards 40, since no tenured positions are being offered). I am myself 52 and was hired full-time aged 25 (yes, 27 years ago), so I am one of the privileged, best-paid, best-positioned teachers who aspire to retiring before 2026 (I certainly don’t want to be teaching 20-year-olds when I am past 65). Whenever I read this kind of news, I feel guilty that I am so lucky and profoundly annoyed that my professional group is presented as unusually, or even unfairly, privileged. This is the trick that the Spanish Government (and many others around the world) has been using to set the different generations against each other: the problem is not that the young are being grossly abused (they are!!) but that we, the ageing parasites, cling to our privilege.

Point 2: Pedro Sánchez’s current Socialist Government does not want to offer “an avalanche of tenured positions” that might bar access to the following generations, as happened in the Orwellian 1984. What happened then? Well, 5,000 teachers with five years of experience and a doctoral degree were offered tenure in quite accessible state examinations. This, it is said, was a serious error, as it created a blockage that prevented the next generation from accessing tenure. This information, however, is not correct. In 1984, the year when I myself became an undergrad, there was a massive influx of students with working-class backgrounds (me again) thanks to Felipe González’s Socialist policies. This influx made it necessary to improvise the hiring of new teachers; at the time, nobody thought of an alternative to the tenure system because this is how the university had traditionally worked.

By 1991, when I was first hired as a teacher, the system still ran quite smoothly: you were employed full-time, with the expectation that you would write your doctoral dissertation in three years and then face the corresponding state examination one or two years later. I should have been tenured, then, by 1997 or 1998 at the latest. What interrupted the quite acceptable ratio of generational replacement was not the bottleneck allegedly formed in 1984 but the new restrictive policies of the conservative Government headed by José María Aznar, which started to brutally attack the public university by destroying its hiring system. Thus, to use my own example, between 1996 and 2002, when I finally got tenure, I did the same amount of work as a tenured teacher but on the basis of temporary, poorly-paid contracts, while I waited. In 2008 the full-time contracts used to hire junior researchers, as I was hired in 1991, were withdrawn. Then started the agony of the system and of the individuals who, like me, only aspire to doing their best for the Spanish university. Incidentally: replacing 17% of all employed teachers in seven years is a very acceptable ratio, below 3% each year. This should free up enough money to pay for new tenured positions, which would in any case be cheaper, as junior teachers would not yet be paid for the extra merits accumulated over a long career. As things are now, though, this is considered too much, and here lies the main problem.

Point 3: the function of ANECA and the accreditation system. Since the university system could no longer absorb the junior researchers, for lack of tenured positions, the Government raised, about ten years ago, the number of qualifications needed to apply for one. The agency founded to grant national accreditations, ANECA (together with its regional equivalents), certifies the possession of those qualifications but has also created a fantastic amount of frustration. El País reports that ANECA has certified that 15,000 Spanish doctors qualify for tenured positions (both ‘titular’ and ‘catedrático’), far more than are on offer. Once you’re ANECA-approved, the wait can take many years, during which, if you’re an associate, you might easily be dismissed by your university. I see that many of my colleagues have started signing as ‘catedrático acreditado’ or ‘titular acreditado’, which, in my modest view, is very sad.

By the way: I totally disagree with the opinion that, when we retire, there will be no sufficiently qualified personnel. It might well be that the Spanish university goes up a few notches in the international rankings, since the patient ‘anecandos’ know very well how to be competitive. What I do see is that the 70-year-olds will be replaced, at the rate we’re going, by 50-year-olds with waning energies, past their prime in some specialities which require the stamina of those aged 25-35. After a time of restrictions, in which only 10% of the positions vacated by tenured teachers could be offered again, the Government has finally allowed universities to replace all their teachers. Yet, without better funding, this cannot be done. What I am saying is that many brilliant researchers now in their 40s will still have to wait long years for tenure. Only 2.3% of all current ‘titulares’ like myself (i.e. senior lecturers) are younger than 40. Of course, many researchers in the 40-50 bracket are hired rather than tenured but, even so, the case is that students aged 18-22 are being taught by their grandparents’ generation!

Now, three comments from readers (there are 270).

Comment 1: some countries, a reader says, would take the chance as a “golden opportunity” to replace the “endogamic, stagnant” Spanish teaching body. Thank you very much on behalf of the generation currently doing its best to educate students who are amazingly reluctant to be educated and, besides, to do research at levels never known in Spain before the 21st century. It is extremely satisfying to receive so much support from the society that we serve and to be told, besides, that anyone younger would be better prepared. By the way, dear reader: the article does not refer to the massive dismissal of currently employed teachers but to our retirement. We do expect to be replaced by much better personnel, of course, but the point the article is making is not that we should retire but that the younger generation should be employed in adequate conditions. Not the same.

Comment 2: who cares, a reader writes, if the public Spanish university disappears? There are not sufficient students, anyway, to maintain a “bunch of lazy, overpaid guys, while the mass of workers lives in miserable conditions”. Thank you again, on behalf of my colleagues and myself. The whole point of the 1984 university revolution was to guarantee the higher education of the working classes so that they could be critical of their life conditions, including employment, and upwardly mobile. The 2008 crisis was used to destroy the university hiring system following the same abusive economic policies that have reduced the life of those born after 1985 to a constant struggle to survive. I am well aware that I am a luxury, but what we should be demanding is not an end to the Spanish public university but an end to all the ultra-capitalist policies that are making the rich richer and the poor poorer. You might say that a university education does not guarantee any upward social mobility (the upper classes have done all they can to hinder it) but imagine for one moment a Spain with only ultra-expensive private universities and a paltry scholarship system, possibly much worse than what we have now. How’s that an improvement on the lot of the working classes? The upper and the middle classes can choose between the public and the private university, either in Spain or abroad. But how do you allow the talent of working-class individuals to flourish? Aren’t you interested?

Comment 3: (with this one I must agree). “Spanish society does not value research”, nor any merits attached to it. This is possibly the key to the whole matter: the comments elicited by this article show a colossal miscommunication between those of us who take university research and teaching seriously and those who, unaware of what we actually do (or in some cases rejected by the system), show enormous hostility towards what they assume to be our privileged positions. Reading the comments you can see how the colleagues who try to explain our job face an adamant dislike, even hatred, based on immovable premises: we get tenure aided by a close circle of accomplices though we lack sufficient merits, and the little we do by no means justifies the enormous salaries we are paid. Of course, to someone paid 800 or 1,000 euros a month, a salary of between 2,500 and 4,500 (these figures are public) might seem stratospheric. So might the very idea of tenure. It is funny to see, though, that nobody disputes what football players, top models, influencers of all kinds and the CEOs who kill thousands of jobs at the drop of a hat are paid. Supposing, then, that in the next seven years a new generation is given tenure, this is what they’ll find: generalised resentment. Just what one needs to offer good teaching and progressive research.

We’re trapped, then, in a vicious circle: any defence of the Spanish university as a necessary public service, and of its under-50 workers as unfairly exploited, sounds to lay ears like a defence of privilege. I do acknowledge that some of my colleagues shamelessly abuse their positions but a) they are a minority and will be out by 2026, and b) the same can be said about many other workers–we’re not saints, but nor is anyone else. The resentment poured on us is a product of envy, the ‘national sin’ as many call it, but also of the low educational levels in Spain. Germans, Britons or Americans do not seem to hate their university teachers, though they’re possibly only socially respected in places like Japan (my guess). Long gone are the times when being a ‘catedrático’ or a simple senior lecturer elicited respect, and I keep no illusions about that. But why we are so misunderstood baffles me. So does why, instead of urging the Government to solve a situation that can indeed be solved with a minimum of good will, the solution offered is getting rid of the only institution that can bring some social change to our chronically backward nation. Unamuno’s ugly “¡Que inventen ellos!” still has us in thrall.

I publish a post every Tuesday (follow @SaraMartinUAB). Comments are very welcome! Download the yearly volumes from: http://ddd.uab.cat/record/116328. My web: http://gent.uab.cat/saramartinalegre/


January 15th, 2019

[Warning: Spoilers ahead!]

I first heard about The Miseducation of Cameron Post (2012), a novel by emily m. danforth (without capitalized initials), and Simon vs. the Homo Sapiens Agenda (2015) by Becky Albertalli reading reviews of their film adaptations. The former, directed by Desiree Akhavan from a screenplay co-scripted with Cecilia Frugiele, has the same title as the novel. The latter, directed by Greg Berlanti and adapted by Elizabeth Berger and Isaac Aptaker, has a different title: Love, Simon. Both films were released last year, 2018. Miseducation won the Grand Jury Prize at Sundance, which is why it has attracted more critical attention; its IMDB rating is, however, only 6.7, in comparison to Love, Simon’s 7.7. Since I haven’t seen the films (yet), here I focus on the novels.

Both books are debut novels (winners of the William C. Morris Debut Award) originally published by Balzer+Bray, a HarperCollins label which specializes in young adult fiction. And both deal with the coming out of an American teenager. They seem to me, however, very different texts in style, content and approach. Miseducation is a literary novel, which is not surprising given the author’s training: an MFA in fiction (University of Montana) and a PhD in creative writing (University of Nebraska–Lincoln); she teaches creative writing and Literature (at Rhode Island College, Providence). In contrast, Becky Albertalli used to be a clinical psychologist specialized in children and teens before becoming a full-time writer. Her Simon is far less ambitious as a literary novel though, surprisingly, it made it to the National Book Award Long List (for Young People’s Literature). A major difference, and the source of much controversy, is that whereas danforth is a lesbian narrating the coming out of a lesbian teen, Albertalli is a heterosexual woman telling the story of how gay Simon comes out. Cameron’s story is rather bitter, Simon’s bubbly and happy.

Danforth’s novel has some autobiographical aspects, as she has acknowledged, though she denies that Cameron’s experience mirrors her own. Author and character are natives of Miles City, Montana (population a modest 8,410), where the novel is mainly set. I usually read this as a negative sign: intense descriptions of one’s own small town in a debut novel tend to mean that the author has no other story to tell. We’ll see.

Danforth uses 470 very long pages to tell a rather simple story: Cameron Post is 12, in the early 1990s, when her parents die in a car crash–while she kisses a girl for the first time. Her unacknowledged, untreated sense of guilt prevents her from properly mourning them, and also from defending herself when she becomes the ward of her conservative maternal aunt Ruth. A heterosexual girl Cameron gets entangled with, when both are about 16, reports their first and only sexual encounter to her mother and, appalled, Ruth sends Cameron to a religious institution which offers conversion therapy (the novel’s implicit addressee is a progressive person, of course, and we know this cannot work). The last third of the novel concerns Cameron’s stay in this place, subjected to the increasingly absurd sessions with her bigoted therapist, Lydia, as she plots her escape with fellow sufferers Jane and Adam. Cameron eventually visits the site of her parents’ accident, finding closure for her mourning, though it is unclear whether the escapade with her new friends will come to a happy end.

This is a rather flimsy plot that could have been told far more efficiently in 350 pages, as many other readers have noticed. The prose is beautifully crafted but it often hinders the advancement of the scant plot. Every page screams ‘look at me, I’m a sensitive, nuanced writer who learned her lessons well’. Two caveats, then: I wonder why no editor cut this over-long text and, more importantly, I wonder how much damage creative writing courses are inflicting. Reading Cameron, this much seems obvious: the subject matter asked for an acerbic style, less prettiness, and more insightful storytelling. Plot, tone and message end up muddled. I expected rampant villainy to colour the characterization of the obnoxious Ruth and Lydia but I was left instead with the confusing impression that they meant well but were misguided by their Christian values.

I have not yet read Boy Erased: A Memoir, by Garrard Conley, the object of yet another recent film adaptation (directed by the truly interesting Joel Edgerton), and so cannot say how the memoir and the novel compare. Conley tells the story of his own religious conversion therapy, forced upon him by his father (at the time about to be ordained as a Baptist minister). One thing I can say is that I learned practically nothing about this totally discredited way of ‘curing’ individuals of their natural sexual inclinations from reading danforth’s novel. She reduces this bizarre but important issue to the personal quirks of Ruth and, above all, Lydia, without in any way providing her young readers with information, much less guidance, to resist being ill-treated in this way. This fuzziness was even more horrific to me than what the therapists actually do, also because Cameron Post is very far from being a rebel in a way a real teenager might recognize. If the novel had focused more narrowly on the ugly issue of conversion therapy, it might have worked, but as it is everything gets diluted by danforth’s artistic ambition. My personal impression, then, is that this is a failed novel containing two possibly great novels: one about conversion therapy and the other about Cameron’s process of mourning–which in the end seems to be the main issue.

I also found in The Miseducation of Cameron Post much coyness in the treatment of lesbian sex. Once you read Sarah Waters, anything else seems coy but Cameron’s sexual awakening is so limited that you wonder whether the word ‘miseducation’ also extends to this. 1993 is pre-internet prehistory but, even so, Cameron seems very little informed about lesbian sex. Her Seattle girlfriend, who boasts of being a progressive, well-connected lesbian, is not really much better informed. Whether you are a lesbian or another kind of reader, you are left pretty much in the dark about the many pleasures of this kind of sexuality. When interesting things finally happen, the encounter is terrible for Cameron, both in its development and its consequences. I wonder how many teen lesbian girls must have felt saddened and even scared, rather than encouraged, in view of this tepid approach and also because conversion therapy is not sufficiently described, or opposed.

Albertalli is much more fun but even worse at describing sex. She reminded me of J.K. Rowling in Harry Potter, and her awkwardly limited way of narrating the sexual awakening of the Hogwarts teens. I’m very much aware that Rowling is far, far worse since she completely excluded gay sex from Harry’s universe, a pathetic oversight which countless readers have corrected with their abundant slash fiction. Albertalli’s novel is quite different in that sense but her openly focusing on a gay teen does not mean that she is comfortable describing gay sex. The worst moment happens when Simon finds himself alone for the first time with his love interest (I won’t disclose the name, for this secret is the core of the novel). Believe it or not, they kiss and caress each other’s naked chests as they lie on Simon’s bed. Yet, rather than masturbate each other, as one would expect of two 17-year-old gay boys (I think), Albertalli has each go to the bathroom separately. The words she uses are not very different from my own plain phrasing.

These are novels for young adults and the case is that adolescents–or teenagers, whatever you prefer–usually have their first full experience of sex (i.e. attempting to give each other an orgasm) around the age of 16 or 17. What Cameron and Simon do at that age corresponds to an earlier age, which is puzzling. Or one of the unstated rules of young adult fiction: discuss sex but describe it only coyly. Do I sound like an adult, heterosexual voyeur asking for some teen porn? I hope not! The point I’m making is that, in my view, the experience of coming out as narrated in fiction must be focused not only on acceptance by the corresponding social circle (or rejection, as happens to Cameron) but on the presentation of homosexuality as fun, pleasing and sexy. Sarah Waters does this–why can’t danforth and Albertalli do it? Are they bound by narrow YA codes? Or by the same irksome American puritanism that has Katniss and Peeta spend so many chaste nights together during the Hunger Games? Is Rowling a sign that this YA puritanism is not just American?

Simon vs. the Homo Sapiens Agenda is a very nice novel–not necessarily a term of praise. I do prefer stories that end well for their gay protagonists and I frankly enjoyed sharing time with adorable Simon (a word frequently used by Albertalli) more than with bland Cameron. The plot, however, completely lacks the tension one is supposed to find in romance. The story, again, is very simple: Simon replies to a post on Tumblr by a gay high-school fellow, calling himself Blue, and what follows is a sincere, friendly correspondence, only mildly complicated by this boy’s reluctance to give his real name. The game the author plays with her reader is straightforward: you need to guess Blue’s real identity, which is not so difficult. In romantic comedy, typically protagonist A meets protagonist B, they start a promising relationship, and then a mistake leads A to lose B. Subsequently, A and B are gradually brought together, the mistake is cleared and eternal happiness follows. Shakespeare fixed this productive model in Much Ado about Nothing and Jane Austen polished it in Pride and Prejudice. Simon’s and Blue’s romance, however, goes through no crisis: it’s nice to see it unfold but not thrilling. As for Simon’s coming out, it also lacks a significant turning point. His blackmailer cannot really hurt him and his loving circle of friends and family is welcoming and accommodating. This might be the reason why Albertalli’s novel is popular: it’s an uncomplicated tale, what teen readers need to come out and the rest to learn tolerance. It seems disingenuous, however, to take this simple road in view of the horrors that danforth narrates (or tries to).

At one point, Simon says that everyone should come out, including heterosexuals. I have done that a few times: whenever I start teaching a Gender Studies course, I declare explicitly what I am. This is not easy because coming out as a heterosexual should never be about dispelling any suspicion that I might be gay. If I do it, this is because I want my students to feel comfortable and speak frankly about who they are. I find that declaring yourself asexual is hardest since everyone assumes that all individuals are interested in sex. But I digress. Cameron and Simon teach us that there is a happy and an unhappy way of coming out as gay and that both need to be discussed, in fiction and in life. Hopefully, one day teens won’t have to come out at all, for there will be no closet and all persons will be free to be whatever they are.

I publish a post every Tuesday (follow @SaraMartinUAB). Comments are very welcome! Download the yearly volumes from: http://ddd.uab.cat/record/116328. My web: http://gent.uab.cat/saramartinalegre/


January 8th, 2019

This post comes a little late, as it is customary to close the passing year with a list of the best books and to begin the new one with a list of the most anticipated. This is not, at any rate, what I intend to offer here, as I long ago gave up any attempt at keeping up with the overwhelming mass of literary novelties. Every December I discover, horrified, that I have missed all that was (apparently) worth reading the previous eleven months and, so, it is only then that I select a few titles for the bottomless list of what I’d like to read. Add to this the classics, the accidental discoveries, and the odd, neglected books that surface from reading other books. I do wonder how the readers who appear to know what is relevant every year manage. Or is it all marketing?

I have kept track of everything I read since the tender age of 14 and this is the closest I have ever come to keeping a regular diary (excepting this blog). It is always exciting to close the list for the year and go through the books read each month to recall the best moments spent in the company of intelligent minds. And it is also exciting to open a new list and wonder how it will be filled as the months to come pass (or, rather, fly!). I don’t know whether this is a measure of any use beyond my personal experience but the 2018 list yields this result: I have much enjoyed about 40% of the books I have read but, basically, put up with the mediocrity of the remaining 60%. I mean here the books I have read in their entirety, for I don’t count the many books I have abandoned, a figure that grows every year as I get more and more impatient with writers who do not care for producing good prose (also with those who care about the prose but not the content).

I’m not sure how this works for my academic colleagues in Literary Studies but about 50% of all the books I read each year are novels; the rest may also include fiction (short stories) but are mostly non-fiction and academic essays. No poetry, shame on me. Most of the worst books I read are novels and most of the best books are non-fiction, which either means that my own personal preferences are changing as I age, or that generally speaking, novels are overvalued and non-fiction undervalued.

Thus, if you ask me to choose just one of the 90 books in my 2018 list, I cannot hesitate: every person on planet Earth should read Rachel Carson’s Silent Spring (1962), the non-fiction book that explained to the world how 1940s-1950s science had horribly polluted the whole environment with its pesticides and other poisons. I must seriously wonder what is wrong with our education, since it has taken me so many years to get to this book, which I have only read because it kept surfacing in many academic works on science fiction. Why we think that reading such and such a novel is more important than reading Silent Spring is a matter that we need to address urgently.

The justification used to be the artistic enjoyment supposedly found in reading novels, but I find that few current novelists have either the literary skills or the intellectual equipment required to produce masterpieces, whereas the best essays (why has this word been abandoned in favour of ‘non-fiction’?) contain both good, solid prose and admirable brainpower. Also, being myself a writer of academic work, I appreciate the hard work that often goes into writing non-fiction, compared to which fabulating novels seems a far less daunting task.

I have, then, much admired this past year books as diverse as David Grann’s The Lost City of Z: A Tale of Deadly Obsession in the Jungle (2010) and Judith Flanders’ The Victorian House: Domestic Life from Childbirth to Deathbed (2004). And taken off my imaginary hat before gigantic achievements such as Michel Foucault’s Discipline and Punish: The Birth of the Prison (1975) or Zygmunt Bauman’s Modernity and the Holocaust (1989), which need to be revisited now and then. I have likewise revered Ian Kershaw’s work in The Hitler Myth: Image and Reality in the Third Reich (1987) and, on the literary front, absolutely loved John Garth’s Tolkien and the Great War: The Threshold of Middle-earth (2003) and Humphrey Carpenter’s The Angry Young Men: A Literary Comedy of the 1950s (2002). Sometimes, in the individual experience of readers, books talk to each other without their authors knowing it and, so, I find that Pavla Miller’s short but intense Patriarchy (2017) complements very well Madeleine Albright’s Fascism: A Warning (2018)–another book I would include in our basic education together with Carson’s.

Unfortunately, I don’t read illustrated books for children–I say unfortunately because we adults stupidly miss in this way the most beautiful books published each year. My personal award for prettiest book read in 2018 goes then to the British Library’s Harry Potter: A History of Magic (2017), the companion to the recent exhibition, and a book that manages to be highly informative and a true visual pleasure. Finally, I have already enthused here about Pablo Poó’s Espabila chaval (2017), worth one hundred novels because of its impeccable understanding of what is wrong with current secondary education or, rather, with under-18 students.

How about the fiction? Well, whereas I would award the books named above an A or A+ (or 4 to 5 stars in Amazon’s and GoodReads’ parlance), the best novels I have read are, with few exceptions, B+ to A-. I find, anyway, that recommending novels is harder than recommending non-fiction/essays for, whereas all readers should read Silent Spring to be informed, regardless of whether it bores them or not, with fiction boredom does play a bigger role. Thus, I can insist that you should read Albright’s Fascism but I have fewer grounds to argue that you should read Sinclair Lewis’ It Can’t Happen Here (1935), the novel that best narrates what she discusses. I find Lewis’ tale very exciting but, then, you might not. Take, then, the following list as a very personal record of the fiction that has kept me turning pages, sometimes for hours.

Margaret Oliphant’s Hester (1883) is a splendid Victorian novel about a woman’s failure to pass on to the next generation the power she has acquired by accident. John Masefield’s The Box of Delights (1935) is a novel for children that many connect with Harry Potter but that is worth reading on its own, perhaps while re-visiting the 1980s TV adaptation. I don’t particularly like the work of Doris Lessing but I have found much to enjoy in my second reading of The Memoirs of a Survivor (1974). I can say the same about Lucia Berlin’s short stories in A Manual for Cleaning Women (2016)–which everyone praised so highly a while ago–and young Abi Andrews’s The Word for Woman is Wilderness (2018), a mixture of fiction and non-fiction which is simply awesome. André Aciman’s Call Me by Your Name (2007) and Michael Chabon’s Moonglow (2016) are the novels I would award an A, a mark I will also award to Octavio Salazar Benítez’ Autorretrato de un macho disidente (2017), if only because it is a brave, singular book which too many readers will miss.

Forget Kevin Spacey and the American TV series, and do read Michael Dobbs’s original trilogy: House of Cards (1989), To Play the King (1992) and The Final Cut (1995). If possible, see the author speaking in any of the videos available on YouTube–he’s a most interesting gentleman! So is John le Carré, who cannot do female characters well but kept me up for hours one night reading his The Secret Pilgrim (1991), a fusion of the novel and the short story collection that works very nicely. I was also thrilled by Robert Harris’ Fatherland (1992), which has so many points in common with Katherine Burdekin’s Swastika Night (1937) but is also a great thriller–and I speak as a reader who is not really into crime fiction. My one favourite author, Ian Rankin, has published this year possibly his best John Rebus novel, In a House of Lies (2018), a subtle tale suggesting that Mr. Hyde has already overpowered Dr. Jekyll. Following Rankin’s suggestion, I read Lawrence Block’s Everybody Dies (Matthew Scudder #14) (1998). Again: see the author on YouTube–what a lesson in writing!

For those of you who like SF, as I do, I must mention Stanislaw Lem’s The Cyberiad: Fables for the Cybernetic Age (1965), Vandana Singh’s Ambiguity Machines and Other Stories (2018) and Richard K. Morgan’s Martian novel Thin Air (2018). I found the tales in the collective volume by women authors I Premio Ripley. Relatos de ciencia ficción y terror (2017) very good. And I was totally surprised by Iraqi author Ahmed Saadawi’s Frankenstein in Baghdad (2016), a novel translated from the Arabic by Jonathan Wright which narrates the efforts of a local man to give peaceful rest to the victims of terrorist bombings by assembling a corpse out of their bodily remains. A corpse that is suddenly animated…

Do read Silent Spring. On second thoughts, do read Fascism: A Warning. It is even more urgent. And share with other readers what you love, for the books truly worth reading are too often by-passed by best-of lists. Life is too short to waste on bad books…

I publish a new post every Tuesday (for updates follow @SaraMartinUAB). Comments are very welcome! Download the yearly volumes from: http://ddd.uab.cat/record/116328. My web: http://gent.uab.cat/saramartinalegre/


December 11th, 2018

After re-reading last week William Golding’s Lord of the Flies (1954), simply because some classics need to be revisited now and then, I got curious about whether there was a re-telling of the story with girls rather than the all-boy cast of characters. What I found out is that there have been two recent projects, with very different outcomes, which are very useful for commenting on patriarchy.

On the one hand, American film-makers Scott McGehee and David Siegel seem to have abandoned their project, presented in August 2017, to make a new film adaptation only with girls, following a deal signed with Warner Brothers. There are, by the way, two film versions of Golding’s novel, one directed in 1963 by Peter Brook, the other in 1990 by Harry Hook. A Twitter storm-in-a-teacup made it clear to McGehee and Siegel that this was a bad, unwelcome idea. A typical tweet (by @froynextdoor) read ‘uhm lord of the flies is about the replication of systemic masculine toxicity, every 9th grader knows this, u can read about it on sparknotes’. Front-line feminist Roxane Gay tweeted ‘An all women remake of Lord of the Flies makes no sense because… the plot of that book wouldn’t happen with all women’. The comments by readers following The Guardian article (https://www.theguardian.com/film/2017/aug/31/lord-of-the-flies-remake-to-star-all-girl-cast) make for very interesting reading. The discussion, as may be expected, focuses on whether Golding depicts specifically masculinity or generally humanity, and on whether girls would behave exactly like boys. Opinions lean towards the conclusion that the novel is indeed about masculinity but that girls are also capable of the same cruel behaviour. A crucial, bewildering paradox to which I’ll return in a couple of paragraphs.

The other project is a stage adaptation of Golding’s novel, presented last October by director Emma Jordan at Theatr Clwyd, Mold, later transferred to Sherman Theatre, Cardiff. A small affair (with apologies to Jordan), then, in comparison to a Hollywood production. The Guardian reviewer, Mark Fisher, generally praises Jordan’s ‘muscular and brutal production’ of Nigel Williams’ 1996 adaptation of Golding’s novel (https://www.theguardian.com/stage/2018/oct/01/lord-of-the-flies-review-theatr-clwyd). Jordan presents two novelties: the play is set in the present, not the 1950s, and the cast is all-female… but the names of the boys in the novel are kept–which is confusing. This production appears to be similar to recent Shakespearean productions with all-women casts rather than a retelling with girl characters. Another reviewer, Natasha Tripney, nonetheless reads the characters as girls: this version ‘makes sense–there are few things crueller than a schoolgirl–but the production doesn’t capitalise on this premise’ (https://www.thestage.co.uk/reviews/2018/lord-flies-review-theatr-clwyd/). She complains that the production ‘lacks tension’ but welcomes it anyway, for ‘Jordan’s female-led production makes it clear that violence, tribalism and a hunger for power are not–and have never been–the sole preserve of men’ (my italics).

First lesson: it is fine for women to experiment with texts written by men by altering the gender of the original characters BUT it is not acceptable for men to do the same, as, regardless of their intentions, it is automatically assumed that the result will be sexist. If I were McGehee, I would hire Jordan as scriptwriter, and in this way the problem of who has the right to retell Golding’s story would be solved. Now, let’s address the problem of whether the plot of Golding’s novel would or wouldn’t work with girls.

I haven’t read Golding’s most immediate referent, The Coral Island: A Tale of the Pacific Ocean (1858) by Scottish author R. M. Ballantyne. This is a Robinsonade (as the stories inspired by Defoe’s classic are called) about three stranded English boys who cope very well with the tasks of survival and with several encounters with evil Polynesian tribesmen and British pirates. Golding, it appears, decided that in his own tale his English boys would carry evil inside, and that this would emerge as they gradually detach themselves from civilization and from the hope of rescue. A sort of Heart of Darkness for boys, then, but without Kurtz’s excuse of having fallen under the allure of tribal adoration and of the dreamy jungle.

Is Golding’s novel a story about masculinity? Yes and no: it is a story about how patriarchal masculinity overwhelms the positive influence, or rather leadership, of non-patriarchal masculinity over the community. This is NOT a story about how all men react, but a story about how some men (Jack and his hunters), who are already patriarchal, make the most of the circumstances to impose their rule over other men with a far more rational worldview (Ralph and Piggy).

I agree with reviewers who downplay the public school background of Golding’s tale but, since this will help, let me rephrase his plot with other well-known names. Suppose that only the boy students of Rowling’s Hogwarts got stranded on a desert island (where magic does not work…). Initially, all would follow Harry Potter’s Gryffindor-inspired, sensible leadership but the moment Draco Malfoy declared that Slytherin should rule, the same split that takes place in Lord of the Flies would follow. Both Harry and Draco are men (well, boys) but this does not mean that they have a common understanding of what masculinity is, and this is what happens with Ralph and Jack in Golding’s novel. What the author is criticizing has usually been called evil but it is actually patriarchy, even though people now stubbornly call it ‘toxic masculinity’, a label which is confusing, distracts attention from patriarchy and is useless for discussing women’s own hunger for power.

As soon as cocky Jack appears leading his submissive choirboys, we can already see that he is trouble. When, two thirds into the novel, most of the boys have joined Jack’s tribe of hunters, Ralph asks Piggy–whose real name Golding, very cruelly, does not reveal–‘what makes things break up like they do?’. They do not have a clear answer but I do: it’s the sense of entitlement that patriarchal men act by. This is the key to everything we call evil, a befuddling pseudo-mystical concept I totally reject. The non-patriarchal, non-toxic men like Harry Potter or Ralph are not interested in power and lack that sense of entitlement but, since they are not as violent, they tend to fight a losing battle. If the providential officer had not appeared in the nick of time to rescue the boys, Ralph would have been hunted down and impaled, as Jack intends (remember the stick sharpened at both ends that his lieutenant Roger prepares?). Harry is almost destroyed by the mission Dumbledore gives him to cancel out Voldemort’s genocidal sense of patriarchal entitlement, but–and we must admire Rowling for that–he does so on his own terms, using intelligence rather than murderous violence.

So, can we have Lord of the Flies with an all-female cast? Of course we can! Girls would be split in exactly the same way as the boys in the novel, BUT not because girls are essentially cruel or because they behave like boys. It’s because everyone, of any gender or genderless description, feels the pull of patriarchy and its promise to reward a personal sense of entitlement to power. So far, patriarchy has pushed women out of the rat race to accrue power but, the more conquests feminism makes, the more women we see acting out their own lust for power, and not at all to help other women.

I have recently heard Michael Dobbs, the author of the original House of Cards novels and Margaret Thatcher’s Chief of Staff (1975-1987), praise her thus: ‘But it was that drive and that anger, that determination, that obsessiveness that drove her on to achieve things which most of her people could not’. She stood out among other women and among other individuals of her lower-middle-class background but only to claim power for herself, not to do any good to others like her. I can easily see a girl named Maggie play the part of Jack in a female retelling of Lord of the Flies, and a girl called Katniss resisting her.

The confusion springs, then, from this idiotic, harmful, essentialist supposition that all men behave in one way and all women in another, which does not take into account the OBVIOUS intra-gender divisions. If anti-patriarchal men like Ralph were not constantly opposing patriarchal men like Jack, we would still be living in prehistoric times and women would be much, much, much worse off than they are now. It is, then, both silly and extremely dangerous to go on speaking in essentialist terms of men and women when, actually, human beings are divided along power lines.

Patriarchal individuals, whether men or women (or genderfluid), endorse the idea that society is a hierarchy determined by the degree of power each person enjoys (or lacks). Non-patriarchal individuals, whether men or women (or genderfluid), are not motivated by a hunger for power, and so they (we!) prefer communal circles to hierarchical pyramids. This looks very much like the political division between right and left, but let’s not be naïve: many individuals on the left also seek power (remember Stalin?). I’m talking about something that transcends political divisions even though politics depends very much on it: the allure of power (for domination).

Golding published Lord of the Flies in 1954, at the end of the first decade of the Cold War. His boys are evacuees from some unnamed British colonial outpost, which they must leave following the explosion of a nuclear bomb in a war never mentioned, nor explained. The author had, then, a very good reason to abandon the optimistic Victorian view of Christian gentlemanliness in The Coral Island and replace it with a Conradian pessimism. His novel is supposed to link tribal primitivism with modern, barbarous so-called civilization, and it is clear to me that the target of his attack was the patriarchal men like Jack or like the makers of the bomb, not the good guys like Ralph. What is very, very sad in Golding’s work is that it came out just a year before Tolkien’s final instalment of The Lord of the Rings, The Return of the King (1955). Why is it sad? Because, though profoundly damaged, Frodo manages to defeat Sauron with the help of his loyal Samwise and other friends in the Fellowship of the Ring. Instead, Ralph loses Piggy and has no chance at all of becoming the hero that will stop the villain Jack. He is radically alone, as Frodo never is–this is what is sad.

The lesson to learn, then, from Golding’s Lord of the Flies is how to protect ourselves from patriarchal fascists like Jack (or his imaginary female counterpart Maggie) by listening to the voice of reason. This voice, which Piggy embodies in the novel, is constantly bullied and denied–even by the supposedly sensible persons. Piggy begs Ralph not to tell the others that he is known by that body-shaming, awful nickname but Ralph nonchalantly lets it be known, thus paving the way for Piggy’s final murder. I do not mean that Ralph wants Piggy dead but that failing to protect reason leads to appalling consequences for all.

A last word: dystopias like Lord of the Flies are born of despair but make us cynical, which is why their current proliferation is so dangerous. If you want to redraw Golding’s tale along different gender lines, make the community of children varied (including boys and girls, hetero and LGTBI+). Tell how Jack and Maggie try but fail to establish a heteronormative, racist, tribal patriarchy, and then have Ralph, Katniss and Hermione (in Piggy’s role)–choose their colours as you please–organize the whole community to resist their rule. If this works, Jack and Maggie end up isolated in a corner of the island where, with some luck, they kill each other in a fight to determine who is more powerful; the rest build a democratic community based on mutual respect and tolerance. This works so well that when the adult rescuers appear, they join it too.

See how easy it is to think of a utopia that works? What, you find it sentimental? Well, some feeling would be welcome in our age of narcissistic unfeeling and hypocritical dystopian pessimism. And fight patriarchy, not masculinity!



December 4th, 2018

As part of preparing for my Winter-Spring course on Romanticism, I have been reading Duncan Wu’s incisive 30 Great Myths about the Romantics (Wiley Blackwell, 2015). I’m inwardly smiling at how little the world may care for a crisis involving a middle-aged woman teacher suddenly discovering that she has to unlearn everything she thought she knew about Romanticism. But, well, this is the crisis I’m going through. I feel blessed and fortunate to be sharing it with my co-teachers, David Owen and Carme Font, who have been in charge of the course for several years. This crisis is already resulting in very fruitful discussion with them, and I am certainly benefitting from their experience and insights: David specializes in Austen, Carme is an expert on women writers of the 18th century, so you see what great company I keep!

I do not intend to comment here on all the thirty myths–a kind word for lies–that Wu destroys with his razor-sharp scholarship. Some are ideas which every self-respecting feminist has been battling for years (myth 25: ‘Percy Bysshe Shelley wrote Frankenstein’); others are a matter of common sense, for it is obvious that myth 5, ‘the Romantic poets were misunderstood, solitary geniuses’, is nonsense. Almost as barefaced as myth 6, ‘Romantic poems were produced by spontaneous inspiration’. Funnily, the myths about Byron are the ones I cannot stop thinking of, mostly because Wu is quite brutal with poor George Gordon. I accept with no problem–except for Wu’s barely concealed homophobia–that Byron was a fat queen who preferred 15-year-old boys to women. Yet the demolition job applied to myth 19, ‘Byron was a “noble warrior” who died fighting for Greek freedom’, ends with a truly pathetic image: that of the poet dying in Greece not on the battlefield but at home, bled to death by incompetent physicians treating him for a fever caught from a tick on his dirty pet Newfoundland, Lyon. This is indeed the complete antithesis of Romanticism!

I must say that myth 14, ‘Jane Austen had an incestuous relationship with her sister’–Cassandra and the author shared a bed for 25 years, it seems–though improbably lurid, made me reconsider a nagging suspicion: Austen may have been a lesbian mocking the heterosexual women of her class, desperately seeking enslavement by the gentlemen of the 1810s. An idea to consider when I teach Pride and Prejudice… with much care, for this is what Wu is attacking: using speculation and misinformation as the basis of scholarship. It is one thing to invite students to consider ‘what if…?’ Jane Austen had been a lesbian, and quite a different matter to accept with no proof that this was her sexual identity and, hence, that this is how we should read her books. If you find this second option preposterous (which it is!) then you’ll be as surprised as I have been to discover that most assumptions about Romanticism are of that kind: empty bubbles very easy to puncture if only the right bibliography is read. For that is Wu’s main message–if scholars bothered to check their sources, the myths would not be perpetuated. An extremely important point to make in the age of fake news.

I’ll quote two passages from Wu’s ‘Introduction’ that call for profound reflection. ‘What we call Romantic’, Wu observes, ‘might more accurately be called Regency Wartime Literature were we to backdate the Regency, as some historians do, to 1788’ (xiv). Anyone who has studied the early 19th century knows that, properly speaking, it begins in 1789 with the French Revolution and includes the Napoleonic Wars (1803-1815). I read a while back the twenty-two volumes by Patrick O’Brian narrating the adventures of Captain Aubrey and Doctor Maturin at sea during those wars, but even so I still find it problematic to connect Romanticism with war.

The problem also affects our understanding of Modernism (roughly 1910-1939) for similar reasons: the name attached to a particular movement is used for a historical period, thus breaking the neat monarch-based chronology of English Literature. ‘Victorian Literature’ (1837-1901) should indeed be preceded by ‘Regency (Wartime) Literature’ but, then, it is also followed by a mess of labels in the early 20th century which include Edwardian and Georgian as periods but then get lost in Modernism and Post-Modernism (rather than the Second Elizabethan Age!). The point not to forget, however, is that Romanticism belongs in the Regency Period and that this was beset by revolution and war, as was Modernism (WWI, 1914-18; Irish uprising, 1916; Russian Revolution, 1917).

The second passage: ‘The point is that the contemporary perspective was different from our own. Today Jane Austen is one of the most popular novelists of all time but in 1814 no one thought she would occupy that status, nor did they suspect an obscure engraver named Blake would 150 years later be hailed as a literary and artistic genius’ (xv-xvi). The writers that Wu cites as popular, best-selling names in Regency Wartime Literature (let’s start using the label) are not at all part of the canon that has survived, in which, with some exceptions (Byron, Scott), names then mostly unknown now shine. I suspect that Wu cheats a little when he claims that ‘The current popularity of Mary Shelley’s Frankenstein and Hogg’s Confessions of a Justified Sinner would have been unimaginable to the scattered few who heard of them when they first appeared’ (xvi, my italics), for I believe that their fame soon grew (or am I perpetuating a myth?). Yet the point he makes is equally relevant. What survives from the past is a haphazard selection no person then living could foresee. If we could bring back a handful of common readers from the early 19th century, they would be as amused (or dismayed) by our preferences as we’re certain to be should we return from the dead in the 23rd century. What great fun it is to guess who will survive! I wonder why gambling houses are not already offering the chance to bet, for the benefit of our descendants…

Why do the myths persist? Wu replies that ‘The limpet-like persistence of some myths may be related to the illusion they draw the Romantics closer to us’ (xviii) but I’m not quite convinced. It might even be the other way round: Wu’s presentation of Byron as a flamboyant homosexual feels somehow more relatable than his reputation as a heterosexual Don Juan; likewise, his middle-class Keats, the well-educated Medicine student, makes more sense than the working-class apprentice apothecary killed off by a review. Wu, then, is the one bringing the Romantics closer to our time while debunking old and new myths (lesbian Austen!). Rather, what seems to be happening is that since the instability of the label ‘Romantic’ makes it impossible to understand what Romanticism truly was, we clutch at the myths, even knowing they’re lies. At least they form a coherent body of knowledge, fossilized into respectability first by the Victorian critics and scholars, and later by all the rest until our days. The myths, in short, are convenient and, as we know both as students and teachers, a handy way to keep undergrads interested as they swallow, with immense difficulty, the poetry and the novels (we don’t even touch the Romantic plays).

Wu is at his most sarcastic when he highlights the ‘nuttiness of the thesis’ defended, among others, by John Lauritsen, according to which Percy Shelley wrote Frankenstein. Why? Because any scholar who bothered to check the two volumes of Mary Wollstonecraft Shelley’s The Frankenstein Notebooks: A Facsimile Edition of Mary Shelley’s Manuscript Novel, 1816–17, edited by Charles E. Robinson (1996), could see that a) Percy contributed little and b) what he contributed is of no interest. Wu is especially annoyed because most of the textual evidence required not to blunder and perpetuate myths is easily accessible online. The point that he is making is transparent: all our knowledge of English Literature, beyond Romanticism, relies on bad scholarship; even worse, despite the efforts made in recent decades to correct the most glaring mistakes/lies/myths, they are still being perpetuated because nobody really cares about the truth. You may be thinking, ‘well, I prefer my Byron thin, handsome, and a woman-eater’, but apply lazy scholarship to other fields and we might get ‘Stalin was never as big a genocidal tyrant as Hitler’, a myth we should question. For, you see, if the History of Literature is based on almost indestructible myths, surely this also applies to History, only too easy to sum up as a pack of lies. Not what you want to do in Trump’s era.

How should we, then, teach Romanticism? There is no introduction yet that faithfully follows Wu’s volume, which means that we’re still bound to teach a myth-based version of Romanticism (a mythical version?!). I see little sense in teaching the myth and the truth together to students who know nothing about Romanticism, yet I don’t feel ready to incorporate fat queen Byron into my teaching–I might be starting another myth, for all I know. Also, as Google tells me, with two exceptions in minor colleges, everyone still uses the label ‘Romantic Literature’ rather than ‘Regency (Wartime) Literature’, though I’d be happy to re-name our course at UAB. What Wu has produced, then, is a sort of intaglio effect in cameo carving, by which you see the figure as concave or convex depending on the light. I have reached the point when the effect is visible but, to be honest, I don’t know how to proceed.

Well, I do know: hard study. I doubt, however, that before February I will have the time it takes to undo 30 years of knowing the Romantics in the standard, clichéd way. And this is how myths survive: by our acquiring partial, biased knowledge that we are later too pressed for time–or too plain lazy!–to undo.

(PS: Now go and check myth 26, ‘Women writers were an exploited underclass–unknown, unloved, and unpaid’)



November 27th, 2018

Last week I gave a lecture in Bilbao within a cycle devoted to publicising women’s work as scientists. My lecture was called “Women Scientists that Tell Stories: New Humanist SF Written by Women” which sounds worse in English than it does in Spanish (“Científicas que narran historias: Nueva ciencia ficción humanista escrita por mujeres”). You can see it on YouTube (https://youtu.be/fZTZqG0lI-k), and I hope you enjoy it!

Then, two days later, I gave another talk, this time for the SF Catalan convention, or CatCon 2, on robosexuality as an emerging identity in real life and also on its representation in fiction (with a focus on ‘male’ robots). In the Bilbao lecture I spoke about Vandana Singh, Nieves Delgado and Carme Torras, whereas in the CatCon lecture I spoke again about Delgado and another woman author, Montserrat Segura, but also about a man: Isaac Asimov.

The strategies are, as you can see, quite different: a) publicising women’s work, b) discussing a topic in relation to both women’s and men’s writing. This has set me thinking hard about which of these two strategies is better and I must declare that I cannot solve this riddle: I prefer mixing authors in the discussion of a specific topic but I realise that we still need to make women much more visible. I wonder, however, why it is taking so long and whether we have collectively taken, as feminists, the right path. I’m afraid we have not.

I have been pondering this matter for a long time (you may check, for instance, “Hacia una nueva utopía en los Estudios de Género: El ‘problema’ del feminismo (en la ciencia ficción)”, http://ddd.uab.cat/record/176095) but still feel stuck in the same dilemma. As a feminist woman, I feel that I do women writers a disservice by asking for an end to the separate study of their work. And so, for the same reason, because I’m a feminist woman, I take up all the chances that come my way to explain why women should be better valued and discussed separately to increase their visibility. I do not particularly enjoy discussing feminism and femininity so often but if you’re a woman this is what you’re invited to do. I recently heard SF author Becky Chambers say that she’s happy discussing gender but she’d rather discuss spaceships and I sympathise, particularly because men are hardly ever invited to discuss gender and often monopolize the public discourse on spaceships (if you know what I mean).

Although the feminist approach to studying women’s writing had been used long before, among others by Virginia Woolf, for convenience’s sake I’ll date the academic project of tracing back the presence of female authors in the History of Literature to Elaine Showalter’s A Literature of Their Own: British Women Novelists from Brontë to Lessing (1977). That project is already forty years old, then, with all the controversies it has generated but also with all the colossal tasks so far carried out.

We now have a variety of resources cataloguing practically all the writing women have produced from the dawn of time, perhaps only missing sixth- or seventh-tier authors. The effort to make their works available continues and will continue for decades. Let me suppose, again for convenience’s sake, that it might take forty more years to fulfil the feminist utopia of bringing all neglected women back from the sexist past and into the limelight of a post-patriarchal future. Then what? Do we still continue writing monographs with the words ‘women’ or ‘female’ in the title? Or do we stop and start full integration?

I complained, perhaps too loudly, in the question time following a lecture on 18th century women’s writing that the problem with the separatist strategy is that a) we don’t have titles that refer specifically to men’s writing, b) feminism has failed in its attempt to make the study of women’s writing compulsory for male researchers. We may continue publishing volumes called Women’s Poetry of the 18th Century, for instance, but this is self-defeating, for we don’t have the equivalent Men’s Poetry of the 18th Century. Instead, a book called Poetry of the 18th Century written by a man is likely to be mostly about the male poets, though academic fashion, political correctness and perhaps the work of a female editor might result in the still token presence of a handful of women.

We should rewrite all the textbooks, then, for this is where the foundation for real change lies, and not only in the separatist line of feminist research. I must acknowledge, though, that when integration is fully achieved in an introduction, this produces a funny feeling. Perhaps a specialist in neuroscience should explain this to me, but it seems that once you pass the early stage as an undergrad, when you learn the basics of the literary canon, it is really hard to change your own vision.

Of course, this vision is enlarged as you learn more names and read more women writers. Yet, if you’re asked about the main authors of a given period, your reply is likely to result in a string of male names. You need to stop and think, ‘oh, yes, and then there were all those women’. The names of Austen, the Brontës, (George) Eliot or Virginia Woolf do come to mind because they have been canonical for a long time. But it is still easier to recall Anthony Trollope than Fanny Trollope, or Wilkie Collins rather than Margaret Oliphant. See what I mean? This is why, I insist, integration must happen at textbook level. The way I see it, that should be the focus of the feminist project.

Or perhaps I have all along misunderstood what academic feminism is about and integration is not at all its end. Yesterday I was interviewed by a fourth-year student of journalism and she told me that many young women involved in the current feminist movement in Spain do use feminism in the radical sense of asserting women’s superiority over men. As I explained to her, that is precisely the reason why I tend to call myself these days anti-patriarchal rather than feminist (my feminism aims at achieving equality, not at exchanging one type of inequality for another). But I digress. There is, then, the likelihood that some of the women involved in feminist academia are actively working in favour of gender separatism–and, yes, I’m sounding this silly and naïve on purpose, to make this choice sound all the more suspect.

And, then, we have the men in Literary Studies: some truly pro-feminist and anti-patriarchal, some rabidly misogynistic, and the rest carefully navigating the waters of, as mentioned, political correctness. One strategy is the one followed by Peter Boxall in a recent lecture I attended, in which he managed not to discuss identity at all, as if that were not necessary. He is one of the researchers constantly producing surveys and introductions, which is why I was so aghast at the neutral tone of his lecture (which dealt with both men and women writers–that’s a comfort at least).

I simply don’t see men like Boxall, or similar academic male luminaries, facing the issue of how to write specifically about men writers, for they needn’t do that. We women, still subordinated in patriarchal society, must consider how/why we write, but men can still afford the luxury of not looking into their own masculinity and how they’re positioned in relation to patriarchy–and I mean both writers and academics. I feel deeply annoyed right now thinking of this… Women like me, interested in dismantling patriarchy, are the ones, then, writing about male writers as men, which is, if you think about it, quite strange, since we lack the experience of being men.

I wish we lived in post-patriarchal, post-gender times and could get over the onerous task of having to take positions that are so hard to defend. Then we could talk about spaceships–though dear Becky Chambers forgets that this, too, is a heavily gendered issue. Every time I see a phallic rocket taking off, I wonder what dictates its shape: pure physics or gender? In contrast, in Octavia Butler’s trilogy Lilith’s Brood, or Xenogenesis Trilogy, the alien Oankali spaceship is an organic, fully sentient being which often feels like a gigantic womb. You see where I’m going with this…

I cannot write ‘in conclusion’ here, for I don’t know that I have reached any conclusion. I’ll continue accepting invitations to discuss feminism and women’s writing, as I work on gender integration as a teacher and a researcher. As a feminist, then, I’ll antagonize both my radical women feminist colleagues and the recalcitrant patriarchs who think, for whatever reasons, that being a feminist entitles you to constant support from the Government (does it??). I’m already working on a book about men’s writing within the context of patriarchy, so I cannot say that will be my next step. If anyone’s listening, please write inclusive introductions, for, I’m fully convinced, that’s the only way to change the way we learn the canon.

And if you’re a woman fully committed to working only on women’s writing and for a female audience, well, I’m happy if you’re happy–but do consider how and why male researchers can still afford to ignore your work, and simply not discuss identity in its most basic sense.

I publish a new post every Tuesday (for updates follow @SaraMartinUAB). Comments are very welcome! Download the yearly volumes from: http://ddd.uab.cat/record/116328. My web: http://gent.uab.cat/saramartinalegre/


November 22nd, 2018

I’ll begin today with a semantic quibble about the presence of the word ‘Bachelor’ in the name of the degree ‘Bachelor of Arts’ or BA.

Pop etymology indicates that the Medieval Latin word ‘baccalaureatus’ derives from Latin ‘baccalaureus’, a portmanteau of ‘bacca’ (berry) and ‘laurea’ (laurel), because of the laurel crown awarded to graduates as if they were Roman victors. In Spanish this eventually gave ‘bachiller’, the man with a secondary education; ‘bachillera’ was used mockingly, since women were not educated to this level until the turn of the 19th century into the 20th. The word ‘bachillerato’, still used for the two-year course after E.S.O. and before university, has, then, that peculiar origin. For higher education, Spanish preferred ‘licenciado’, that is to say, the person who has a license to teach others what he has mastered (note my sexist choice of pronoun), usually after a five-year course. Now we have ‘graduado’, in imitation of the English ‘graduate’. ‘Bachelor’ entered English as an import from French meaning a young man in training, whether in arms or in academic knowledge, hence the eventual use of the word for the degree–and also for the man who remained single for life, as, I assume, was the case for many minor knights and scholars too poor to marry (besides, bachelors eventually took orders, or already belonged to them). So, ladies, think how funny it is that you claim to have a Bachelor of Arts degree.

This prologue is just the opening salvo for what I want to discuss today: what is the point of a BA in the Humanities, and especially in English Studies? Please note that I mean the Spanish-style BA combining Language and Literature in a four-year course, not English in the Anglo-American sense of the study of the literary arts, though my argument also applies in many ways. My post today is, specifically, a very personal response to the assessment the degree I work for has gone through. We have passed it, though not with flying colours, because it seems we have shortcomings to solve in three areas or, rather, types of skills: employability, teamwork and digital skills.

To understand what we’re going through now, I need to mention that universities are Medieval institutions that have survived the vagaries of time because they are very slow to change. In recent years, meaning within the timespan of my own personal memory, this change has accelerated, with very questionable results. I am constantly narrating here how, as researchers, we are always on the verge of burnout but hardly given any psychological support, much less any reward. I won’t go again through the tragedy of the chronically exploited younger staff. Rather, my focus here is why we have degrees at all.

The old rationale was that degrees exist to enhance the territory of knowledge, and so ‘Filología Inglesa’ first saw the light in 1952 in the Universidad de Salamanca because it was such a shame that English Language and Literature were so woefully unknown in Spanish scholarly circles. The initial reason why ‘licenciaturas’ were established, then, was self-centred, in the sense that the presence of the student body justified the tenure of the staff, so that they could generate knowledge mainly for scholarly use. Students attended university to benefit from, so to speak, the fallout of academic life and perhaps enter it themselves. Students who did not pursue an academic career (95%) were supposed to get an education, not necessarily professional training. That education was supposed to give them general credentials to find a job, beyond the specific knowledge they had acquired. A ‘licenciatura’ in ‘Filosofía y Letras’ meant that you were competent, intelligent and capable of further learning.

The current model–established in 2009, after an intermediate period in which ‘licenciaturas’ were reduced to four years rather than five and before MA degrees were established in Spain–is radically different. Now universities need to justify their very existence by what they contribute to society via results, usually connected with the employability of students. Let me give you an example. Suppose you have, as we do, a German Language and Literature unit, which contributes to our BA degree and to others in the Facultat. As long as student demand for German reaches a minimum, this section survives. If, as happened at the Universitat Rovira i Virgili years ago, demand dwindles dramatically, then the section is closed, regardless of the research it contributes. There is usually a transition period during which the State waits for the tenured teachers to retire and hires no more staff (or only associates who can be dismissed). But, yes, whole segments of knowledge can be lost in this way, and I’m not talking about obsolete science.

In this new market-oriented model, then, teaching matters more than research when deciding which Departments to keep alive and, what is more, even though universities are formally research centres, the cost of keeping certain units open is calculated on teaching-related statistics. Now, here’s the problem: we know that we’re giving our students an education, but we do not know what it is for. Furthermore, if you think about it, BA degrees should not worry about employability, because they exist as a bridge between secondary education and the advanced education provided by MA degrees and doctoral programmes. Technically, then, the burden of employability should fall on the MAs, which is not an exaggeration considering that the old ‘licenciaturas’ were five years long, thus the sum total of UK-style BA and MA programmes (3+2 courses).

Employability is a very tricky question for a BA degree in English Studies: 75% of our students will end up being secondary-school teachers, whether they have a vocation for it or not, but 25% are open to other possibilities (jobs in management or in professions connected with publishing, translating, writing and so on). We cannot formally train our students to be teachers, for that task corresponds to the School of Education (though, paradoxically, it mainly trains primary school teachers). So we proceed on the basis that whatever our students learn will later be applied to their future profession through some intermediate stage, whether a formal MA or direct work experience.

As a Literature teacher, then, I train my students in skills that are 100% of direct scholarly application, should they decide to pursue an academic career, but that are also supposed to be of general use in any professional occupation requiring intellectual abilities (reading and interpreting texts, seeking sources, giving presentations, writing reports, and so on). I use a mixture of the traditional and the new model. I cannot, however, organize my teaching around the idea that I’m training students for professions they don’t even know they will have. As for teacher training, well, I wasn’t trained myself: I took good note of what my teachers did and then copied what I thought worked best. Other than presenting myself as a model to follow (or not), I don’t know how to train future teachers, considering besides that they might teach in secondary schools, where I have never taught, and against a mid-21st-century background with God knows what kind of classroom technology (and students!).

Teamwork is an obsession of the current regulators of education and, in practice, something all students hate. This is why they don’t like participating in class discussion, which is our basic, most uniformly used type of teamwork. I keep telling my students that classroom work is collaboration and that I’m not there to lecture (only sometimes) but to guide them in collective discussion–if only for the sake of practising English. They do know that a class is a team which must work together, but this is resisted every day in class. If I ask my students to work in pairs or in small groups of up to four, and then walk around and talk to each little group, that works well (though our classroom space is hardly designed for it). Ask them, however, to work in teams on a project and you get the typical situation: out of, say, five students, two do nothing, two do a little and one does everything, which ends up benefitting the lazy ones. Perhaps that is realistic training for actual job situations, but students tend to find teamwork frustrating (at least in this little corner of the university where I work). This is why I have tried other kinds of teamwork, such as producing collective volumes as e-books (available from the digital repository). The problem, I’m told, is that this is not visible in the official syllabus. Well, it is not because I’m still experimenting (this year, for instance, I’m thinking of applying project-oriented teaching in the second year, rather than the third and fourth).

Digital skills–here I feel like screaming…!!! Teachers born in the 1960s and before should be learning digital skills from the digital natives in their classrooms, not the other way round. We have trained ourselves at each step since the internet first reached Spain (in 1996) to use e-mail, online catalogues and databases, blogs, websites and the social networks. I don’t understand, then, why we should be made responsible for the digital training of our students–persons who often sit in class compulsively checking their cellphones rather than listening to us. Let me just explain that I do want my students to collaborate in a booktube channel and produce basic documentaries to accompany papers or dissertations. However, when I asked my university for help to learn the required skills, they basically told me that they lack the budget and the facilities. I next asked the student delegation to find me a student with advanced audiovisual know-how who could train me and other students, supposing that we must have some vloggers in our classrooms. So far, no luck. I then contacted a professional company, but they asked for 1,000 euros, which, given our ridiculous yearly budgets, is an impossible sum (we now get one fourth of the money I could use back in 2005-8 as Head of Department, and that was already very little).

I am, in short, plainly angry at being constantly judged, as a teacher and as a researcher, by standards that can never be met because they are fundamentally elusive. And the other way round: I have the suspicion that the standards chosen are elusive precisely so that we can never be up to the task. It’s this constant feeling that you’re training hard to run a 100-metre race and, when you get to the starting line, you’re told that actually you must also compete in other events for which you didn’t know you had to train. If you do manage to get some inkling, by the next time you’re assessed the rules have changed again anyway.

The market, in short, wants to invest as little as possible in educating citizens, preferring instead to train workers who must have universally employable skills so that they can be moved from one badly-paid job to the next. The market wants, in addition, to have us, university teachers, assume the burden of passing on skills for which we have not been trained, while at the same time it undermines the respectability of the academic skills we do possess. I often feel that the message I’m being sent is that, as a Literature teacher, I am a useless luxury and, as such, society would be better off without me. And I’m not speaking here of myself personally but of all Literature teachers in the world.

I must, then, justify how what I teach trains the university’s clients (are they still students?) in employability, teamwork and the use of digital technologies. Well, I have a double answer to that: a) obviously and b) not at all, depending on whether you are willing to value what we Literature teachers do, or not. We can always improve our teaching in relation to our own subject’s needs, but we cannot turn critical scholarly work on William Shakespeare into skills generally needed for current jobs. It is the employers’ responsibility to train employees, not ours, for we’re educators–and that’s a different set of skills. Don’t make us, then, shoulder a burden which belongs to the market, not to the university.
