Channel: Chi Luu – JSTOR Daily

Synesthetic Adjectives Will Make You Eat Your Words


Sweet, sharp, tangy, tart: language is delicious! So in a week that’ll surely end in a rash of Thanksgiving-related food comas for many of us (if not tears and recriminations), it’s probably about time to give thanks to the practically edible words that make food tasty (so sweet and so cold). Oddly enough, sometimes those very same words that give a bit of nuanced umami to our sense of taste can also be used for some of our other senses, creating a poignant, shared language. Warm, bright, crisp, soft… these are some of the words that could each describe aspects of smell, sound, sight or touch, as well as taste, all without batting an eyelid. We can speak of warm hands (touch), warm lights (sight), warm voices (sound) out in the crisp autumn air (touch, smell), like a crisp wine (taste), just to give a crisp answer (sound).

So we can use the same words to describe different senses—why would this be weird or noteworthy? After all, many words can acquire new meanings and nuances over time, especially through metaphorical use. Obviously we mostly have adjectives to thank for this, but it turns out not all adjectives are created equal. Some are more footloose and fancy-free than others and can be found lingering and malingering in more linguistic contexts than you might expect.

Chi Luu

Chi Luu is a peripatetic linguist who speaks Australian English and studies dead languages. Every two weeks, she’ll uncover curious stories about language from around the globe for Lingua Obscura.

According to Bodo Winter, some adjectives such as “large” or “beige” seem fairly neutral, cognitively speaking, while others, such as “pungent,” “fragrant,” or “delicious,” carry a much stronger emotional sense. Recent research, for example, has found that tempting and attractive food words actually trigger cognitive simulations of eating that food far more than more neutral words about food.

Winter points out that adjectives that originally pertain to taste and smell are “emotionally flexible” and occur in more contexts than more neutral adjectives (such as the aforementioned poor, unloved beige). Researchers have found that taste is closely and cognitively tied to the human reward system and shares brain space with how we process emotions. Likewise, there’s a very strong connection between our memories of smells, odors, fragrances, and our feeling for nostalgia. Turns out the musty past can be a pretty smelly place. So the sense words we choose to use in certain linguistic contexts can trigger certain neural responses. Meanwhile “beige smells” or “beige sounds” might be possible in poetry (you never can tell with poets), but out of context we don’t have a strong, consistent concept of what that can mean.

Joseph M. Williams describes this class of emotionally flexible adjectives as “synesthetic,” and this is where things get interesting. Synesthesia is a relatively rare cognitive phenomenon in which certain kinds of stimuli trigger (usually) involuntary sensations that are more commonly experienced through another sense or modality. For example, certain words or sounds might trigger certain food tastes in the mouth, or induce a feeling of being touched. Does that sound (or taste) a little mixed up? In fact, synesthetic abilities give some language savants an edge, while other synesthetes are able to develop their unique sense of creativity thanks to this richer neurological access to multiple senses in parallel.

If you’re the tiniest bit jealous of a synesthete’s heightened sensibilities and multi-layered understanding of the world, worry not! To a certain extent, perhaps without realizing it, by using synesthetic adjectives (or metaphors) we’re all engaging in a kind of synesthetic behavior. It’s unremarkable to us to describe taste as being sweet, because sweetness is ordinarily a property of taste, but how do we conceptualize touch as being sweet, or a voice or music as sounding sweet? What about a sweet smell or fragrance? Yet we’re all able to understand and have a feeling about what “sweet” could mean in those different linguistic contexts, all while linking back to the original primary meaning.

What’s more, Williams’s study shows that there’s a consistency to how languages change the semantics of synesthetic words for use with different senses. For example, “cloying,” a word we might use to describe the strange seasonal concoction of sweet potatoes mashed with melted marshmallow (which, as a non-American, I was always given to understand is a required traditional Thanksgiving dish going way back to Pilgrim times), actually started out as a word describing something to do with touch, meaning to choke or to clog. It semantically shifted to describe a kind of taste (overly sweet) before taking on a more general figurative meaning that could apply to sound (his cloying voice). Similarly “bitter” originally had a more physical tactile meaning, related to “bite,” meaning “sharp” or “cutting.” Now, of course, we speak more easily of “bitter foods” and “bitter arguments.”

While touch adjectives might transfer to taste or sound, taste adjectives don’t tend to transfer to touch, but might sometimes change to smell.

These semantic shifts are often initially prompted by metaphorical uses of words, but this is not a case of metaphor haphazardly wreaking havoc on meaning. It’s interesting to see not only how the semantics of these synesthetic adjectives change, but what doesn’t and seemingly can’t change. Can we ever meaningfully speak of “loud heights” or “salty hugs”? According to Williams, while touch adjectives might transfer to taste or sound, taste adjectives don’t tend to transfer to touch, but might sometimes shift to smell (sour smells) and sound (dulcet music). Williams offers evidence to suggest this kind of semantic shift might be generally consistent across other languages as well, such as Japanese. A word like “atatakai” (warm) can shift from touch to color to sound.

Interestingly, for olfactory adjectives, Williams finds that there are not many that start out primarily as smell words that transfer in meaning at all. Why would this be? In fact, despite the importance of our sense of smell, which is so closely linked to our emotional sense of nostalgia and memory, it appears we don’t have the words at all—it seems we’re really bad at naming smells. We tend to use words from other senses or more often words that remind us of other smells (e.g. it smells smoky/citrus-y/like a rose/like a wet dog) and don’t have many words that deal with properties of smells. Compared with the rich vocabulary we have for tactile, gustatory, auditory, and visual senses, the dearth of smelly adjectives seems rather odd, doesn’t it?

It’s not clear whether this is an interesting fact about English or a more universal trend. For some researchers, the difficulty in naming scents and smells, at least in English, has to do with the rudimentary olfactory information being passed on to lexical centers, which is relatively less nuanced than auditory or visual signals. Reusing the properties of other senses might help us reach an understanding of how to process smells in a less crude way than “good” or “bad” smells. Conversely, there’s evidence from other languages that some other cultures, such as the Maniq, may experience the world through the lens of more nuanced olfactory senses than we do in English, and have the robust lexical toolbox to describe it, in the same way we can consistently define gradations of other senses, such as touch and taste.

Clearly, there’s more work to be done to uncover the mysteries behind our spidey senses and how we talk about them. The language we use to describe our senses is not just a matter of finding the right words. It also reveals how we tend to understand and receive the world around us, primarily through some senses and perhaps not others. Synesthetic adjectives show how we connect and translate what we experience in the world, through delicious, fragrant, brilliant, vivid, poignant semantic change, into a richer shared language for all our senses.

The post Synesthetic Adjectives Will Make You Eat Your Words appeared first on JSTOR Daily.


The Cozy Linguistics of Hygge and Other “Untranslatable” Words


Wintry days are finally here, and those of us who enjoy this season are no doubt huddling in the cold, rugged up in our winter coats and warm woolen scarves, dreaming of crackling fireplaces, good company, coziness, and generally getting hygge with it.

Yes, hygge. You know, that well-known word that’s so in right now that it’s been named one of 2016’s words of the year by both Oxford Dictionaries and Collins Dictionary (or at least by their marketing departments). It may be unremarkable to Danes, but hygge (vaguely pronounced hoo-guh) and its adjective form hyggelig are exporting the Danish art of coziness to the world. Interestingly, English speakers have enthusiastically embraced it during a rather bleak and hollow year.


This one word has apparently been responsible for at least ten books actually published this year, patiently explaining the minutiae of Scandinavian cultural concepts of conviviality, cosiness, and comfort to eager adopters (and only one of those is a parody). It’s often breathlessly described by trendsetters as an “untranslatable” Danish word—one the Danes themselves actually borrowed from the Norwegians in the eighteenth century—which meant “wellbeing.” But what is the curious story that this loanword can tell about us, our language, and how we see the world? Is there something lacking about Anglo-centric concepts of coziness and conviviality, such that we crave other languages’ cultural concepts of social and emotional wellbeing?

Hygge has come to refer to cultural elements of Nordic life that aren’t exactly unheard of in other cultures: the simple, convivial joys of home comforts, family life, and friendship. Are these concepts universal to the human experience or are they not? The warm, flickering glow of candlelight, hand-knit socks and sweaters, the tight-knit social circles, that’s all very hyggelig. Safety and stability, and oh, a comforting, controlled conformity (that’s sometimes referenced symbolically by Danish right wingers as a way to exclude outsiders). Perhaps that’s also a part of hygge. It seems like a true understanding might be difficult to determine, as hygge and a rough English translation like coziness sometimes seem more like words apart.

Having offered such a lovely, handmade, hyggelig gift to the Danish language, these days the Norwegians prefer to use “koselig” to mean pretty much the same thing, where neighboring Northern European cultures might similarly use “gezellig” (Dutch), “mysa” (Swedish), or “gemütlich” and “heimlich” (German). Interestingly, like the gray side of hygge, a word like “heimlich” (homely) also has ambivalent nuances related to concealment, fear, and the uncanny, as Freud pointed out. It seems that outside the inviting glow of candlelight the night is dark and full of unhygge terrors. But is this the kind of sense that translates to English speakers who’ve adopted the hygge trend? Probably not. Now if “koselig” (another recent object of fascination for English speakers) sounds more familiar to English speakers, that’s because it’s related to “cozy,” and so we might be able to come full circle, within arm’s reach of a meaning (if we can just put our fingers on it), just like a good old-fashioned North American hug. (Though the jury is actually out on whether “hygge” has anything etymologically to do with “hugs,” even during the cold Scandinavian winters.)

Just what is it about “untranslatable” words that fascinate us so much?

Examples like hygge and koselig actually follow a long line of foreign words that fascinate us. In English, we tend to borrow quite a few “untranslatable” words and idioms, like the ever-popular German Schadenfreude (pleasure at another’s misfortune) and the Sanskrit karma (a Buddhist concept of destiny being influenced by a person’s actions). Perhaps they don’t always mean what they originally meant, but we’ve made them our own.

Just what is it about “untranslatable” words that fascinate us so much? There are endless lists and articles on these beautiful words, so apparently alien to English, that are simply “untranslatable” or even the hardest words in the world to translate… but then they’re subsequently translated anyway, in English sentences, just not in words that are directly equivalent. Untranslatable words aren’t really untranslatable at all. When we unpack this concept it raises a number of curious questions.

What’s so special about a single word capturing a concept, as opposed to a phrase or a sentence? If a language doesn’t have a word for something, does it mean its speakers have a harder time understanding that concept cognitively? For instance, if a language, such as Tarahumara, a Uto-Aztecan language of northern Mexico, has no name or lexical distinction for a particular color perception, such as between green and blue, are speakers of that language cognitively unable to differentiate between the two colors? Likewise if some Eskimo languages have many distinctive words for snow, are we as English speakers completely unable to tell the difference between all the kinds of snowy precipitation there can be?

When you deal with the untranslatable, you inevitably bump into linguistic relativity, popularly known as the Sapir-Whorf Theory (though Edward Sapir and his student Benjamin Lee Whorf were both developing their theories from earlier research). In 1929, Sapir stated:

Language is a guide to ‘social reality’. […] it powerfully conditions all our thinking about social problems and processes. Human beings do not live in the objective world alone […] but are very much at the mercy of the particular language which has become the medium of expression for their society. […] The fact of the matter is that the ‘real world’ is to a large extent unconsciously built up on the language habits of the group. No two languages are ever sufficiently similar to be considered as representing the same social reality. The worlds in which different societies live are distinct worlds, not merely the same world with different labels attached.

So are we really at the mercy of the language we speak? How much influence does it really have? Will words be forever lost in translation? Popular culture has often been high on the idea that language, the thinking person’s gateway drug, alters reality. This is the strong form of the Sapir-Whorf Theory (explored to heartbreaking effect in the recently released xenolinguistic film Arrival). Mostly attributed to Whorf’s work on the Hopi language, it claims that language is not just used to objectively report experiences, but fundamentally shapes and influences how we see the world. Before he was a linguistics student, Whorf was, oddly enough, a fire prevention engineer, and it was curious to him that “English speakers used the words ‘full’ and ‘empty’ in describing gasoline drums in relation to their liquid content alone; so, they smoked beside ‘empty’ gasoline drums, which weren’t actually ‘empty’ but ‘full’ of gas vapor.”

It can be easy to assume that a culture that simply can’t “understand” or directly “translate” a concept from another language is missing something, or is more alien, or somehow less intelligent than cultures that do have those lexical distinctions, but this isn’t so. Studies such as Kay and Kempton’s experiments on color show that a strong Sapir-Whorf theory doesn’t really hold water (as well as being full of gas). Speakers of languages that don’t have a lexical distinction for certain colors are still perfectly capable of identifying differences between those colors.

Though we may not have equivalents in English for some of these concepts, most of us can surely recognize the quiet sentiments expressed in these otherwise alien words.

Most linguistic researchers have abandoned the more radical view of linguistic relativity, but it still fascinates the general public. It’s not hard to understand why. The idea that language is magical enough to alter our realities and open us to new experiences is pretty fascinating, even if it may not be entirely true. As Whorf shows, in a weaker form of the theory, people still do have habitual ways of thinking and behaving, found in the words they use, that do influence their cultural experiences. Leonid Perlovsky’s work describes how brain imaging experiments show that “learning a word ‘rewires’ cognitive circuits in the brain, learning a color name moves perception from right to left hemisphere.” So although words can be translated and understood adequately in long form, capturing a concept within a single word may indeed be cognitively meaningful.

Of course, as Robert Frost put it, “poetry is what gets lost in translation,” especially when words are clinically explained. We may be drawn to words not in our own language because they fill a gap or capture a sense that we can, as humans, understand, yet don’t have a name for. It’s interesting that among these lists of fascinating untranslatable words, you rarely find words that describe different foods or concrete objects, such as tools, that are important to a particular culture. Instead, they tend to emphasize a sense or nuance we can relate to or even long for, one that has to do with emotional states. So is there something we, as English speakers, are lacking or looking for? Perlovsky claims that “English evolved into a powerful tool of cognition unencumbered by excessive emotionality,” but that “current English language cultures face internal crises, uncertainty about meanings and purposes.” Perhaps this apparent insecurity and ambiguity made the English language a lonely hunter for the untranslatable emotional words of the world.

Subtle nuances found in words such as the Portuguese saudade (a melancholic longing for someone or somewhere far away), Russian toska (tocкa) and Welsh hiraeth (a nostalgia or longing for one’s homeland), Japanese mono no aware (物の哀れ) (the pathos or empathy for transient things) and German Waldeinsamkeit (the feeling of solitude when alone in the woods) are some of the examples that Tim Lomas has categorized as “untranslatable” words of emotion and wellbeing. Though we may not have equivalents in English for some of these concepts that have been explicitly identified and lexicalized in other cultures, most of us can surely recognize the quiet sentiments expressed in these otherwise alien words. Being exposed to these words means we don’t just have a window into another culture but as Lomas puts it, it opens us to a more enriched understanding of our own world experiences and gives us a conceptual vocabulary of positive emotional states that might guide us.

In a confusing, turbulent world, perhaps a new language of wellbeing, however roughly translated, is what many of us are hoping to learn.

The post The Cozy Linguistics of Hygge and Other “Untranslatable” Words appeared first on JSTOR Daily.

Very British Villains (and Other Anglo-Saxon Attitudes to Accents)


It’s a linguistic truth universally acknowledged that any story worth telling must be in want of a very British villain. It’s a familiar trope, as evidenced by this US-made Jaguar ad in which Ben Kingsley, Mark Strong, and a tea-sipping Tom Hiddleston embrace the inevitable dark side of their national identity.

Whether it’s Nazis, Romans, countrymen, or other bad guys of yesteryear (regardless of actual country of origin), it seems the prestige accent of villainy (unless it’s a terrible death whinny) has typically had something in common with the Queen: namely, the Queen’s English, a dialect that is at the same time both terribly posh and deliciously evil. As Julia R. Dobrow and Calvin L. Gidney point out in a study of villains in children’s animation, American programming in particular seems to have a general ambivalence about British English, as “speakers of British English are portrayed dichotomously as either the epitome of refinement and elegance or as the embodiment of effete evil.” This crystallizes the love-hate part of the two nations’ special relationship. Considering other studies have shown that American speakers might have a mild inferiority complex about their own dialects compared to British English, this is telling. (But things are slowly starting to change in Hollywood. Now other British accents are getting a turn; in Deadpool the accent of villainy is Cockney).

Why is this so? Is there something inherently villainous about British-inflected speech (at least to Americans)? Are they just more capable of dastardly deeds than the rest of us, through the magic of their plummy accents alone? Who would have thought mere accents could be so powerful? It’s actually a curious fact, according to Davis and Houck, that speakers of the prestige Received Pronunciation (RP) accent (otherwise known as the Queen’s English or BBC English) are regularly evaluated by non-RP speakers as more educated, intelligent, competent, physically attractive, and generally of a higher socioeconomic class. At the same time, in terms of social attractiveness, those same posh RP speakers are consistently rated less trustworthy, kind, sincere, and friendly than speakers of non-RP accents. Sounds like a good start for a villain.

Meanwhile across the pond, there’s also a different prestige accent at work in many forms of popular music. The desirable accents of pop, rock, country, R&B, hip-hop and so on, as many have noted, are almost always some flavor of American English. Not even the most British of villains would try to deny the power of pop, as countless Brits, from Adele to Led Zeppelin (among others) seamlessly code-switch into American accents when performing and then back into their regular speaking voices when not. When non-Americans perform in regional accents (sometimes not even their own), such as Billy Bragg or Mockney artists like Kate Nash, Blur, or The Streets, it’s definitely marked and can even sound “off” to some listeners.

Many who consider accent as a marker of authenticity and personal identity may wonder why some would “fake” an accent, but many performers may not even realize they’re code-switching, as they unconsciously adopt the language stylings of the modern song—it’s just the way you’re supposed to sing in that particular genre. (Similarly, consider the early pseudo-British vocal work of American pop punk bands, such as Green Day, following the lead set by the Sex Pistols or the Clash). So is it weird to change your authentic accent to fit in with your day job?

This is not to say that pop singers have to sound American and villains have to sound British, but that accents, seemingly a habit of mere sounds, have an insidiously powerful effect in our daily lives and we often don’t even notice it.

The truth is people really love accents. Whether listening to accents, learning about their oddities, or sometimes even imitating them in front of complete strangers, the different ways to say the words we’re so familiar with have us all fascinated. Do you say “PEE-can” or “pe-CAN,” “caramel” or “carmel”? Do you speak Oirish or Strine, Scouse or Brummie? Or are you one of those blessed few who “have no accent”? But there’s more to it than a simple enjoyment of the ways people speak the same language. We often share the same language attitudes. Some accents we love and some accents we love to hate with a passion (and for no particular reason). Some are mellifluous and others ugly, harsh, or grating. We hold tightly to what we’ve learned about different accents and what they might mean for us. Accents can say so much about a person, some of it good, and some of it rather dubious, and depending on where you’re from, it changes. We start absorbing this information early, as children, often through depictions of the accents of different characters and archetypes we experience in children’s shows, before carrying it over into real life.

British villainy as an amusing stereotype for entertainment is one thing. How about your regular, everyday criminal? Can we, Minority Report style, predict and weed out the criminals in our midst as soon as they open their mouths? What about detecting other personal characteristics, such as how often someone bathes or brushes their teeth, from the way they talk? Can you tell how physically attractive they are, how tall, how smart, how funny or how friendly, just by their accent alone?

Just like the old school, pseudoscientific methods of phrenology and graphology (feeling the bumps on a head or the flow of a person’s penmanship and tying these to their personal or mental traits), it starts to sound pretty farfetched. How on earth can you tell whether someone’s dirty or clean or tall or short or itching to be a criminal from the sound waves they make? A person’s accent can’t possibly predict all these attributes. And yet, we act as though this is entirely possible—and even reasonable.

It turns out many of us believe, often without realizing it, that we can predict social and personal traits about a person simply by the accent they use. We may be wrong, but we do it anyway. What’s more, we frequently make prejudicial judgements and decisions based on these underlying beliefs and stereotypes about a person and the way they speak, regardless of the reality. It’s the “last acceptable prejudice,” in part because people are generally not even aware they’re doing it. We may even legislate for and against certain modes of speaking and allow for discriminatory acts based on accent alone that we wouldn’t dream of allowing based on race, say. Yes, accents, mere sounds, are apparently that powerful.

Linguists and psychologists have long been aware, through multiple studies on the perception of different dialects and accents, that people’s language attitudes and social stereotypes can affect how certain speech communities and their speakers are viewed, often triggered by just the accent. Since the 1960s these studies have used what’s known as the “matched guise” technique, in which one person or stimulus can present two guises to listeners, such as code-switching between two accents, or using two pictures of different ethnicities as a visual for the exact same audio recording of a single accent. Listeners can then rate and evaluate the personalities of each guise for things such as intelligence, competence, physical attractiveness etc.

Some interesting findings have come out of these studies. For instance, more prejudiced listeners can have a harder time cognitively processing and understanding what was said if the purported ethnicity of the speaker (even if it’s just a photo) doesn’t match the standard accent they might expect. Similarly, in another well-known example, a university lecturer gave exactly the same talk once in a Received Pronunciation (RP) accent and again in a Birmingham accent. Students rated his intelligence and his talk more highly in his guise as a posh RP-accented lecturer than when he gave the exact same talk using a Birmingham accent.

In fact the poor Brummie accent has been rated by British listeners as even less attractive and less intelligent-sounding than a random person staying completely silent. Even worse, a study has shown that matched guise “suspects” were rated as significantly more guilty of a crime when they spoke with Brummie accents than when those same suspects used their RP voices. So obviously some listeners believe they can predict the criminal element through accent alone. It’s a tough life being from Birmingham, clearly. Yet American listeners, not having access to the same common social stereotypes, often rate the Brummie accent as pleasant-sounding. So it’s nothing innate in the sounds of these stigmatized accents themselves that makes them so despised by certain listeners, but simply a shared social attitude that, as non-standard accents, they’re somehow less worthy than the prestige accent.

Rosina Lippi-Green’s work on accent and discrimination has pointed out how the ingrained concept of “Standard Language Ideology” has allowed accent discrimination to flourish and thrive, even as there are laws against overt discrimination on other, similar bases such as race. While there is certainly an accepted standard form of the language, it’s by no means the only linguistically legitimate form of English. The standard language ideology that we’ve all been taught insists that there’s only one correct form of language. Speakers of the standard form are considered the ones who “have no accent,” and any dialect that strays from that is stigmatized in one way or another. Believing in this concept legitimizes the institutional discrimination of those who don’t use or didn’t grow up with the standard language. The reality, of course, is that everyone has an accent.

Because of how they’re judged by other speakers, accents have a palpable effect—taught in schools, broadcast and policed by the media, and further reinforced by how we work. Thanks to the wrong accent, people have lost jobs or promotions or civil rights court cases, despite being able to perform their jobs perfectly well. Yet to most, it doesn’t seem at all weird that, just like pop stars “faking” an accent, large segments of the population are being advised to completely change the accent they grew up with, from Birmingham to Brooklyn, in order to get a job. Speakers of non-standard dialects are often assumed to be incapable of learning the “correct” forms and are therefore evaluated as less intelligent, and so on it goes. Accents viewed as attractive garner attractive personal qualities for their speakers, such as height and beauty and intelligence, while speakers of unattractive accents are judged, one supposes, to be poor, nasty, brutish, and short. Such is the life of accents, a more powerful and villainous social force than you might have imagined.

The post Very British Villains (and Other Anglo-Saxon Attitudes to Accents) appeared first on JSTOR Daily.

When Language Can Cure What Ails You


There’s no denying that the news of the world these days is rather depressing. Weird weather patterns, weirder political happenings, war, famine, pestilence, death—just the kind of world to make the well-heeled hedge fund billionaire start thinking about bolt holes and bunkers in New Zealand, according to The New Yorker. With such a raft of social ills to navigate, it’s no wonder that mental health issues are reportedly on the rise, and have been for some time, and that drug addiction numbers are likewise climbing—thanks to an unhealthy bump in heroin use in North America.

Depressing? Well, it’s enough to make you want to sit down and have some kind of a chat about it.

In fact, in times of trouble, one of the things we humans seem to do best is just that: talk. And as a panacea for all ills, healthy talk is often promoted as the way for us to become even better humans.

Language seems like such an unremarkable part of our lives that we don’t often realize the very real effect it can have to make us feel better (though perhaps sometimes worse) through the wonders of the “talking cure.”  In an age of supposed openness, we’re expected to talk, and talk in a particular way, to demonstrate to others that we have, in fact, healed. We might talk over problems with friends and family and these days, even into the echo chamber ether that is the internet, with completely anonymous strangers. Online, those same strangers can share their most private and personal experiences and support each other—all through the therapy of words. Language: it cures what ails you.

Particularly in North America, therapy through talk, no longer as stigmatized as it once was, has become an increasingly popular and blandly normal way for people to address the health of their inner lives, whether individually with a therapist or as part of a support group. Though it's much less of a commitment than having holes drilled into your skull or undergoing electric shock treatment, as past cures for mental distress went, it's not uncommon for some patients to have pretty long-term commitments with their therapists. So does it help, and can therapists track how well patients are improving through talk alone?

For many, there’s no doubt that working through difficult personal issues through talk can be very helpful, revealing, and cathartic. Studies have certainly shown how successful therapy through personal narrative can be in dealing with addiction, such as in organizations like as Alcoholics Anonymous.

But as we learn to use the right language to open up about ourselves, are we ever in danger of mistaking healthy talk about good health for the cure itself?

Most people, when asked how they are, having learned the unwritten cultural rituals of small talk, would not dream of talking literally about the state of their health. Hearing that a friend, perhaps one step removed via social media, is more or less "fine" allows us complacently to think nothing more about it, whether that one word masks a different reality or not. This is interesting considering that use of social media networks such as Facebook has been linked to increasing unhappiness and depression. Language, especially when it comes to describing and negotiating the inward life, can easily end up reflecting certain social and conversational customs, reinforcing ideologies of longed-for states instead of actual facts. So how can you tell if someone is cured of a mental malady? If they tell you they are, using the words that sound right?

In New York City alone there have been many calls for increased mental health funding, and millions have been poured into tackling the very real issues with mental health and addiction, particularly affecting the homeless population. There’s no doubting the good intentions of social workers, police, and case officers who try to help people get back on their feet, but tracking how well institutionalized therapy is working out for their clients can be fraught with difficulty, because usually it’s all about the words they use, are actively heard to use, and that they’ve been trained to use, in an inherited ideological framework of therapeutic talk, the foundations of which are rarely questioned in the field. Healthy language.

E. Summerson Carr's fascinating ethnographic study on language use in a drug treatment center for homeless women (known as Fresh Beginnings) underlines how drug rehabilitation treatment very much revolves around a drug user's language use, in a context where they're aware they need to achieve certain steps in order to be considered successfully treated. Carr points out how the familiar discourse structures, opening with "Hi, my name is X and I am an addict," and the structured stories of very personal histories, told in the right kind of guided therapeutic language, create an account for their problems and make it easier for therapists and social workers to do their work within the inner lives of their patients.

In this kind of methodology, Carr states that "many drug treatment scholars propose that autobiographical talk helps addicts break through the 'denial' thought to characterize addiction and thereby 'find' themselves." The way to be cured is to talk about yourself and your problems, and if you don't talk "authentically" in the way expected of you, you're in denial. Self-referential language, honesty, openness, and willingness to tell secrets, even private business unrelated to the addiction, are assumed to be evidence of "healthy" language. Secrets are seen as things that "make you sick," not only for what they hide, but as evidence of denial or a reluctance to come to terms with an addiction that generally relies on secrecy and keeping information from others.

With all the best reasons behind it, this kind of ideology can start to develop into a system of codified checks and punishments, as clients can end up being forced to leave the program for not participating linguistically in the right way. Carr describes Allan Young's study of Vietnam vets in a psychiatric unit, which showed "the clinical demand for patients to verbally disclose the 'contents' of their trauma-laden memory and the punitive measures reserved for those who do not engage in the work of authentic linguistic representation." A program like Fresh Beginnings, even if young, very fresh, and well-intentioned, may often come pre-saddled with ready-made ideological and institutionalized assumptions about "healthy" language in the same way. While some social workers and therapists, systematically working with an endless stream of those who need help, may not always be aware of these assumptions, Carr points out that their homeless clients, anxious or forced to complete a program that would grant them a clean (linguistic) bill of health, are often acutely aware of the ways they're meant to talk "authentically" about their personal issues and narratives, with some finding it an intrusive, painful struggle to reveal even the smallest detail that could be used as a healing explanation for addiction. Those who choose to engage with the treatment process authentically and use the expected language reportedly do find it helpful, but for others, there is often not much of a choice between revealing inner secrets or being accused of being in denial, "a major barrier to recovery."

It turns out there’s a lot of work, or “metalinguistic labor,” involved in bolstering up these assumptions about healthy language use, by therapists, case officers and by willing clients. From “secrets keeping you sick” to “Honest, Open and Willing,” recited mantras at places like Fresh Beginnings allow therapists to legitimize, guide, sanction and filter talk, with highly restrictive rules, without “protest or critique,” in the belief that words can only heal if they also reveal. This is not an uncommon belief, if you look at the culture of talk shows, juicy tell-all memoirs, and even internet forums where secrets are revealed. There is a relief to unburdening, when a person chooses to. Certainly it’s not in dispute that articulating inner struggles and stories can go a long way to resolving ongoing problems.

But it’s not just about content—speakers must also adopt and learn the highly structured “language game” of healthy talk. Some therapists not only believe the right discourse can start to heal long-term problems, but at the same time they can more easily track how successful their clients are at moving through the steps of a rehabilitation program or therapeutic methodology, through how well these healthy speakers are able to play this therapeutic language game. This shows how successful that particular program might be at treatment, through how many healthy speakers have successfully navigated through it. Without denying the advantages of this kind of talk therapy, it does seem a little circular.

While the healing power of talk has had many clinical successes, it's important for therapists to be aware of how easily their institutionalized ideological beliefs about language use can be imposed on the healing process itself, particularly for clients who may lack choice and power. On the other side, for the many in frantic pursuit of mindfulness in times of ever-increasing uncertainty, how does this linguistic code of therapy, naively absorbed from daytime talk shows and formulaic narratives, obscure what it means to be mentally healthy? How might a reductive, sanitized language of therapy, in systematically translating all the quiet secrets of a unique inner life into easy-to-swallow labels, actually hide real understanding of the causes of mental turmoil? It's certainly something to be aware of: words, as powerful as they are, may not always tell a complete story. It's a mistake to think that the cure is always the same thing as talking about the cure. Even the best articulated words of health can't be used as an easy substitute for a sound mind and body.


The Totally “Destructive” (Yet Oddly Instructive) Speech Patterns of… Young Women?


Two years ago, this column sprang into life by enthusiastically wading into the absurdly long-running debate about some of those “destructive speech patterns” of young women and how they’re doing it all wrong.

Now, more than ever, with misogyny so often a focus in the news, it might be interesting to revisit this subject. What’s been going on in the world of these supposed linguistic innovators? Given the relentless language policing by those who believe they know better, have young women listened, mended their dastardly ways and finally gagged themselves with spoons?

But first, just what are these vocal patterns that put certain pearl-clutching social commentators into a state of moral panic, and why? Here’s the lowdown.

By now, everyone knows about uptalk?, in which declarative statements are said with a high rising question intonation, apparently popularized by SoCal Valley Girls and the Kardashians. It’s a linguistic feature that exists in many other dialects of English (also with varying degrees of acceptance), and though it may be true that many young women use it, as we found, really everyone uses it, regardless of gender, age or ethnicity.

But if you think it’s much hated because obviously not even a mother could love a high-pitched Valley Girl voice, consider vocal frrry, in which the pitch is often lowered, using creaky voice, much like you might imagine coming from the gravelly tones of a respectable aristocratic gentleman. It’s impossible to say whether this verbal pattern might be an unconscious response to constant criticism of uptalk (probably not) but it doesn’t matter anyway, because it is also only suspect when found in a young woman, even though, again, everyone uses it. So whether you use high tones or low tones, if you’re a young woman, apparently you should just tone it down.

This is not to mention other usual suspects, you know, um, okay, like, fillers! Unnecessary fillers and sort of indirect hedges instead of pure silence! Many critics have complained that young women's language is so very indirect, with never-ending run-on sentences. But that's not all. There are many other discourse markers in English that have been attributed largely to women's speech, such as tag questions, that are also often considered annoying linguistic tics, aren't they? Well, aren't they?

For Robin Lakoff, who wrote the influential (though slightly unscientific) work “Language and Woman’s Place” in 1973, these verbal habits are indicative of one thing: women’s lack of confidence.

In the seventies, it was a valid concern that women’s speech should appear and be received as strong voices with something to say. Based on the anecdotal evidence around her, Lakoff assumed that certain speech patterns were used more often by women than by men. But subsequent research has questioned whether this really is the case.

Betty Lou Dubois and Isabel Crouch, in a limited study, attempted to verify Lakoff’s claims and found that no definitive statements could really be made about whether women use more tag questions than men. Eric Schleef points out that for discourse markers such as “you know”, “like”, “okay” and “right”, some researchers, following Lakoff, might accept that more women use these fillers, but in certain other statistically significant studies, young men were found to use “like” more frequently than young women, for example. For many researchers, the assumption that these verbal patterns indicate insecurity or even difficulties with speech production in the case of fillers like um/uh can’t necessarily be validated.

While Lakoff’s work was crucial in focusing research in this area, the varying results of research and social commentary in the area of gendered speech patterns since then can neither confirm nor deny Lakoff’s findings. The discourse does bring up some important questions about the narrative that’s been set in place about power, status, and young women’s speech patterns.

Whether or not these speech patterns are viewed as women's talk, it's clear that they are also widely used by other speakers regardless of gender or age, and are often only marked and stigmatized when the speaker happens to be young or a woman. It's interesting that many people get rather annoyed when they hear these speech patterns coming from a woman or a younger speaker (but are often oblivious when the same verbal tics are used by a man). Others might feel concerned or condescending about what this means for young women and their so-called bad speech habits. For many, even academic researchers who find that young women and men might employ the same linguistic patterns, there's a prevailing and socially accepted belief that in order to get ahead, whether in job interviews or in life, these unassertive young women will have to change the way they speak.

Case in point: Naomi Wolf, who one might assume would be positioning herself as a champion of young women’s language innovation. Not so. With the best of intentions, Wolf falls into the trap of assuming that the naturally occurring speech of young women is destructive, “disowning” their power by the mere sounds they make. Similarly, Jean E. Fox Tree quotes a communications professor who can’t see the forest for the trees, stating:

“The use of filler words (‘like,’ ‘you know,’ ‘umm’ and ‘you know what I mean?’) has always been a problem, and I find that much of the time, the students who use them the most do not even realize they are doing it,” he said. “It has become a way of speech because it is easy, it is the path of least resistance. I would even go so far as to say that there is a correlation in our culture between communication skills and character development.”

Alas. Well, linguists also get angry and even frustrated, but for different reasons.

In an NPR story on vocal fry and policing young women’s speech, linguist Penny Eckert says:

“It makes me really angry. And it makes me angry, first of all, because the biggest users of vocal fry traditionally have been men, and it still is; men in the U.K, for instance. And it’s considered kind of a sign of hyper-masculinity … and by the same token, uptalk, it’s clear that in some people’s voices that has really become a style, but it has been around forever, and people use it stylistically in a variety of ways—both men and women.”

Meanwhile, researcher Jena Barchas-Lichtenstein writes, in reference to a New York Times article cautioning their readers to stop using filler words, that

“Reporting on language often frustrates me, and this was no exception. In fact, thirty-odd linguists—including me—sent them a letter detailing our many concerns with this article. In particular, the article makes two major mistakes:

1. It doesn’t address the many valuable functions these words play.

2. It perpetuates a sneaky type of bias against women and young people.”

Likewise, for Fox Tree, “unlike the general public, most researchers who have studied um, uh, like, and you know agree that they are meaningful and functional.” So despite the muddied waters of gendered language, linguists have long been aware of another story that the public and armchair language critics are only slowly beginning to understand. Not only are the speech patterns of women and young people actually widely used by other speakers of the language and so therefore unfairly stigmatized, but discourse markers, far from being destructive and meaningless verbal eccentricities revealing insecurities and powerlessness, have an instructive function in language.

So, just because certain discourse markers are popularly viewed as belonging to a particular group of speakers, we can’t make blanket assumptions about that group’s status or power based on those markers. Discourse markers allow speakers to convey conversational functions such as holding the floor when speaking, sending cues that their listeners can take their turn in the conversation, or making sure that their listeners can follow the conversation. They help reflect an individual speaker’s metacognitive state to their listeners. For example, researchers found that spontaneous use of fillers like “um” can help listeners pay more attention to the intended word to follow. Though seemingly the same as a question intonation, uptalk allows the speaker to track whether their listeners are following and receiving the information being offered.

There is thus an important method to the madness of these maligned speech patterns, but what happens when an entire group of speakers is constantly asked, nay, begged, to drop these linguistic quirks and talk like sensible people? Are young women even likely to listen?

William Labov’s work showed that women are at the center of a hotly debated gender paradox, where they are at the same time conformists, following the rules, and non-conformists, being language innovators. (And no, this is not a case of women trying to have it all). Although men have often been assumed to be the bearers of standard language, Labov found that it’s actually women who tend to use the standard language and avoid stigmatized forms. On the other hand, they’re more likely to be at the vanguard of language change (that may later become stigmatized).

So will uptalk and its ilk ever go away, at least for young women and young people? Certainly it’s alive and well in a casual speech context, but what about situations where speakers are more formally instructing others or delivering information, such as in an academic setting or an interview? Though, like Lakoff, we can only make an anecdotal observation on this point, it may be that, at least in certain instructional speech contexts, uptalk is already being reduced in the speech of young women. Given Labov’s paradox, we might assume that young women, admonished to drop certain speech patterns to fit the “standard” language, might just do so.

If the functionality of uptalk becomes less available, young women might be turning to other methods of signaling the same cues, right? Take the question tag "right?", which seems to be growing unobtrusively popular in the general American speech of young women, if these podcast examples are anything to go by. In certain examples the tag seems to lose much of its question intonation, becoming more of a declarative filler. Rather than asking for agreement or approval, "right" tracks whether a listener is following, similar to the "yeah?" question tag common in British English.

In an interview with Reshma Saujani, an advocate for girls' STEM education:

https://soundcloud.com/inflectionpointradio/reshma-saujani#t=6:05

6:05: "So our results are incredible, right?"
7:30: "You've got very ambitious goals, right?"
8:14: "It's because of The Social Network, right?"

In an interview with Kathryn Minshew, Co-Founder of The Muse:

21:50: "You really can only connect the dots in retrospect, right?"

Though uptalk makes an appearance from time to time, its use and influence seem surprisingly lessened. Also fascinating in these interviews is the apparent emergence of a kind of listing intonation discourse marker, used when the speakers are not really sharing items in a list at all but just telling a story. For example, if you listen to the above podcast at 36:25, the lines "No more prepping… and be ourselves… we'd like to invite you to be part of the program" are all uttered with a rising tone that's less like uptalk and a lot more like keeping track of a list, perhaps giving the listener certain verbal cues to structure the story into parts, helping them to follow any ambiguous "run-on sentences" and know what to expect next. Interestingly, some researchers believe that uptalk in Belfast English may have stemmed from a similar listing intonation, as opposed to a question intonation.

In the same way we wouldn’t expect speakers of a different language to suddenly drop their speech habits in favor of English, isn’t it about time we stopped assuming the linguistic patterns of women and young people are destructive and should change, especially when it often matches what’s considered the norm?


Friend or Faux? The Linguistic Trickery of False Friends


Dear language learners: have you ever embarrassed yourself in Spanish… enough to cause a pregnant pause? Ever talked about preservatives in food, in French, only to receive weird looks? And why should you think twice about offering a gift to a German?

Hapless language learners around the world have fallen into this common linguistic trap countless times: while learning a language, you desperately reach out for the friendly familiarity of a similar sounding word in that language—only to be met with semantic treachery! Confusingly, the words may not always mean what you might assume from what they sound or look like. Hilarity ensues (for your listeners at least) as the dastardly “false friend” strikes again.

In Spanish, for instance, "embarazada" sounds like English "embarrassed" but actually means "pregnant." The sneaky looking "préservatif" in French means "condom," as it does in most other languages that use a version of this Latin word ("preservativo" in Spanish, Italian, and Portuguese, "Präservativ" in German, for instance)—except for the outlying English language. Definitely an odd thing to find in food. And as for the poor Germans sidling away nervously if you offer a gift, "Gift" means "poison" in German. On the other hand, any Norwegians standing aimlessly nearby might suddenly be intrigued by the offer, because "gift" in Norwegian means "married."

False friends, as many may already know firsthand from their own unfortunate linguistic encounters, are those confusing words and phrases that appear or sound identical or similar to words in their own language, yet have different meanings or senses. The term comes from the longer phrase "false friends of the translator," coined in 1928 by the French linguists Koessler and Derocquigny. Since then, they've also been called false cognates, deceptive words, treacherous twins, and belles infidèles (unfaithful beautiful women). So, as we can see, this inadvertent lexical trickery apparently gives people a lot of feelings.

Though often seen as a kind of amusing but inevitable rite of passage for the budding translator or language learner, an embarrassment of hilarity is not the only thing to come out of this. The existence of false friends can have a major impact on how information is received by people across different cultures, can cause serious offense and misunderstandings, and can actually start to change languages, exerting pressure on how semantics shift through contact with other word senses.

Many examples are benign, such as the etymologically unrelated Italian "burro" (butter) and Spanish "burro" (donkey), or Spanish "auge" (acme, culmination, apogee), French "auge" (basin, bowl), and German "Auge" (eye). These all happened to converge into the same form from entirely different origins. Making a mistake with these words might result in a laugh or two, but some other lexical traps have a more interesting effect on communication.

False friends do not always stem from false cognates. They can diverge markedly in word sense from the same etymological origins, through semantic changes such as pejoration or amelioration, as speakers move away from certain meanings and towards others. The fact that they clearly appear to come from the same source can actually cause confusion when we least expect it. Consider a longer word like "fastidious," which has come to develop a slightly more positive nuance in English (attentive to detail) compared to its cognate counterparts in the Romance languages: "fastidioso" in Spanish, "fastidiós" in Catalan, "fastidieux" in French, and "fastidioso" in Italian. All these words were drawn from the same Latin word "fastidium," meaning "loathing, dislike, disgust." Once again, English is an outlier, as the Romance versions stay truer to the original negative sense, with meanings like "annoying, irritating, boring," etc. This apparently once caused a minor diplomatic incident at a conference, according to researcher Chamizo Domínguez, when an English speaker approved of a Spanish delegate's speech as "fastidious," which was misunderstood to mean that it was boring.

So what's the cause of this? How do false friends arise, and why does English seem like such an oddball compared to other European languages in the way its semantics have changed over its history? Research has recorded multiple examples where most European languages follow each other in maintaining a certain word sense, while English seems to go another way. "Eventually" (in the end, finally), for example, means "perhaps, possibly" in the German "eventuell" and the Spanish "eventualmente." Other examples are "actually" ("really, in truth" in English vs "currently" in other languages), "fabric" ("a textile" vs "factory"), "etiquette" ("polite behavior" vs "label"), and even "billion" ("a thousand million" in English vs "a trillion" in other languages). Make a mistake in your accounting with that last example and you'd have a bit of a problem.

False friends arise through the various actions of semantic change. This might appear to happen randomly, but often there are identifiable patterns of semantic shift across groups of words. English seems to have had more major shifts and upheavals than other languages, from the merging of two language families into a single language, with a large part of its vocabulary borrowed from Latinate Norman French, to the Great Vowel Shift markedly changing how words are pronounced, which might account for its outlier status. With English now the unofficial global language, spoken and shared through social media by so many people from different cultural backgrounds, it would be understandable if the push and pull of semantic change happened rapidly and new false friends arose.

As languages share words and meanings, the influence of certain words might slowly and surreptitiously add shifting nuances that can take over a word's primary sense completely. Carol Rifelj discusses how French has struggled with itself over the many English-flavored borrowings that have entered the language, creating false friends—some more obvious than others. Clear borrowings such as "les baskets" (sneakers, from "basketball") or "le look" (style, in a fashion sense) acquire their own senses in the language and might mystify an English speaker, developing into false friends. But what if English speakers all started calling their sneakers "their baskets" and this changed the primary sense of the word "basket" in English? Rifelj observes that this is happening in the reverse direction, to French. Loan words built from native English words are one thing, but Rifelj points out how a more furtive semantic change passes by unnoticed by most French speakers, simply because all the words are actually originally French. Les faux amis might suddenly develop into "très bons amis" when French borrows back French-origin words, complete with their brand new English meanings. For example, "contrôler" (to verify) started taking on a new meaning thanks to English, in terms like "contrôle des naissances" (birth control), yet because the words are French, the change passes by unnoticed. "Futur" has taken over many of the word senses once carried by "avenir" (future). An English-inspired phrase such as "conférence de presse" (press conference) has overtaken the old "réunion de journalistes," and so on.

Well, with all this embarrassing confusion, it's enough to make a person give up learning languages altogether—and false friends can also be found lurking within the dialects of the same language, as many researchers have pointed out. George Bernard Shaw is famously reputed to have said "The United States and Great Britain are two countries separated by a common language," and that's putting it mildly when it comes to false friends. Misunderstandings of words like "rubber" (eraser vs condom), "pants" (trousers vs underpants), "suspenders" (straps to hold up trousers vs stockings), "biscuit" (hard cookie vs a soft scone), "fag" (cigarette vs a pejorative term for a gay man), and "fanny" (vulgar slang for vagina vs backside) can cause some serious blocks in communication, if not outright offense in some cases. From personal experience, as a nervous student anxious to make a good impression on a strict and strait-laced potential landlord, I remember innocently asking if it would be okay to have pot plants in the apartment. "She means potted plants! POTTED PLANTS!" interrupted my face-palming American roommate. It can certainly be easy to make errors because we're so sure we know what the words in our own native dialect mean that we might not consider or question the new cultural context they're said in.

Even within a language or dialect, confusion can reign if speakers don't consider the pragmatic contrasts in different discourses. Speaking of preservatives, take an example like "conservative," someone who is aligned to the right of the political spectrum. The word stems from the same root as "conservation," meaning to "keep, safeguard, aim to preserve," so it might confuse some why conservative political views seem to be ideologically opposed to conservation of the environment. Especially considering Ronald Reagan once said: "You're worried about what man has done and is doing to this magical planet that God gave us, and I share your concern. What is a conservative after all, but one who conserves?"

Some have pointed out that historically, American conservatives, led by Republican presidents, have been staunch friends of conserving the environment around us, with the National Park System, the EPA and the Clean Air Act all enacted under conservative administrations. A major cultural and semantic shift since then has meant that the conservative outlook today has all but abandoned a strong environmental legacy and become very much a false friend in this regard, with conservative Republican leaders consistently voting against conservation and for big industry polluters instead.

Because meaning can be fluid and languages eventually change, it can often be confusing for speakers, language learners, and translators to find that finely preserved words and phrases no longer mean what they once meant. While we might have to do more work to overcome the treacherous pitfalls of the false friend, false friends also preserve a lexical legacy between and within languages that reveals much about the movement of meaning over time.


The Science of Thingummyjigs (and Other Words on the Tip of Your Tongue)


Recently, I had occasion to buy a scythe (as one does) and it came with some kind of doohickey with the highly technical name of “thingummyjig.”

Just what is a thingummyjig, and what secrets about language does it help reveal? As a name for all reasons, like the blank tile in a Scrabble game, thingummyjig acts as a placeholder or stand-in word for when you just don't know what to call something, along with its many variants: thing, thingo, thingy, thingummy, thingumabob, and whatchamacallit (just to name a few). We seem to make liberal use of them in English when we need a convenient handle to lorem ipsum the heck out of a situation. Has a random corpse turned up you don't know how to place? Then meet John or Jane Doe, Roe, or Poe, placeholder names for unknown identities used in the legal world (as in the famous court decision Roe v. Wade). For more information, you can write to whatshisface, Joe Bloggs of Main Street, Anywhere, USA, who can send you gizmos and widgets that will explain everything. There are even more formal words we can use, like apparatus or utensil, which don't at first appear to be a way of describing something we don't know how to describe, but are.

It’s not just English—all languages make use of these placeholders. In the French film “Un peu, beaucoup, aveuglément,” the two main characters are simply called Machin and Machine (the French masculine and feminine words for “machine” but also for unknown names, like Whosit and Whatsit). Japanese uses 誰々 (daredare), a reduplicated form of “who,” while Italian might use “Coso” (from the word for “thing”) to refer to unknowns.

Placeholders are often said to be restricted to just names and nouns, and the majority of them are, but in casual speech sometimes you’ll hear examples of placeholders being used in other grammatical categories, such as verbs, complete with conjugations. “Did you see how whatshisname thingo’d the whatsit?” (Though to understand this sentence completely you’d have to have a bit more context). Other languages, such as Italian, may have placeholder verbs, like cosare (again, related to “thing”), so I guess the thing is, placeholders are very versatile no matter what part of language you need to replace.

But why are they so common and what are we actually doing when we use them? One reason of course is that stand-ins can be very useful for demonstration or presentation purposes, when the real thing might be irrelevant, unknown, or identities have to be protected. We have other kinds of stand-ins, for when we just can’t be bothered to refer to exactly what we mean, for example, pronouns like him and her and deictics like here and there, this and that, if you want to get vaguely specific about it.

At other times, placeholders may be handy for those peculiar moments when, for some reason, you can’t quite say the word you mean—you’re sure you know it, you can’t put your finger on it, yet somehow it’s on the tip of your tongue…

This seems pretty weird, because we often think of words as complete units. Either you know the word or you don’t. But say you’ve forgotten someone’s name. You haven’t really forgotten them completely, because you still might be able to remember salient facts such as what they look like or information about their background and how you met them, even as what they’re called escapes you.

Unlike not knowing a word, or forgetting a word entirely, some ghostly semblance of the word may linger in your mind. For all intents and purposes, you actually know the word, it’s just that something is preventing you from saying it. This is referred to, cleverly, as the “tip of the tongue” (TOT) phenomenon (or “tip of the fingers” in sign language), and for many is an obstacle to completing the lexical access and retrieval process when we use language. The ways that it can get resolved can tell us a lot about the hidden layers of language in our brains.

So what’s going on in the mental lexicon about how words are stored in our heads?

It varies, but on average most of us produce 2 to 4 words per second when speaking, which means that before we even produce these words we have to have a way to retrieve them from wherever and however they’re stored, along with all their meaning and syntactic information and other associated facts, superfast. Some researchers think the mental lexicon for the average literate adult holds from 50,000 to 100,000 words and that we make errors no more than once or twice per 1,000 words, which is astonishingly low considering how complex these mental computations really are. Before we reach the endpoint of articulating anything at all, we’ve already had to access and assemble our thoughts together into the right words, with the right semantics from a myriad of choices. All this is a remarkable mental achievement, and we do it so quickly, seamlessly, and naturally we almost don’t even have to think about it. But in a TOT state, everything seems perfect until, right at the very end, it suddenly isn’t.

To varying degrees, this is similar to a type of anomia or anomic aphasia, when we have trouble finding the words we want but seem to know quite a lot about them otherwise. Although anomic aphasia can be caused by brain trauma such as strokes, TOT states are somewhat common and occur more often as people age. Those who suffer from severe anomic aphasia often find other solutions to get around the “tip of the tongue” problem, such as describing a person, place or object more fully, or using helpful placeholders, when those sounds and syllables just won’t come out of hiding.

Lise Abrams’ research into TOT states has shown how phonology is really the crux of the matter here. Words are not atomic units as is sometimes assumed. Lexical retrieval is made up of layers accessed in sequence, so that in forming our thoughts, we choose the right semantics and encode the syntax of what we want to say before we even begin to say it. The final layer is articulating a word’s phonology, but in a TOT state, that encoding breaks down, often when the word is rarely used or hasn’t been accessed recently. The “forgotten” word might suddenly pop back into your head because of something in your environment. But the question is: what would best help trigger your memory?

Abrams posits that if a tip of the tongue problem is just about access to the phonological layer, perhaps similar-sounding words can resolve the mental block. In her experiments, subjects were asked questions such as “What do you call goods that are traded illegally, i.e. smuggled goods?” (Answer: “contraband”) Subjects were then asked to review a list of words that could be phonologically similar or dissimilar and then asked the question again to see if the TOT state was resolved. Abrams found that when subjects were shown words beginning with the same first syllable as the TOT word, such as “contact,” it seemed to help them remember the right word, “contraband.” In some cases, the connection could be one step removed, as words like “motorcycle” compared to “helicopter” can help resolve a TOT state for the word “biopsy,” because of the associated term “bike.”

What’s curious is semantically-related words aren’t as helpful in resolving a TOT problem as you might expect. No matter how often you mention smuggled thingummyjigs to help recall “contraband,” it doesn’t work as well as just showing someone a word that begins with the same kind of sound, proving that this really is an issue of phonology and not of forgetting a word’s meaning. Abrams found that offering related words and phrases in the same grammatical category can actually decrease TOT resolution, as those words may compete with each other and get in the way of sparking off the right memory.

This becomes more of a problem as speakers get older, and on a social level those aging or older speakers can appear forgetful and incompetent in their speech, when they may really know the words they want to use. The issue really is one of a weakening of the connection between a word’s phonological form and its meaning.

On such occasions, is it any wonder that you might want to use a thingumabob to help lend a certain “je ne sais quoi” to proceedings?

The post The Science of Thingummyjigs (and Other Words on the Tip of Your Tongue) appeared first on JSTOR Daily.

Sentenced to Death (and Other Tales from the Dark Side of Language)


Have you heard the story of the man who was killed by the definite article? That may sound like the beginning of a linguistics joke, but sad to say, it actually happened.

“Sticks and stones may break my bones, but words can never hurt me,” as the old adage goes. When battling bullies, we may take heart in this childhood rhyme, but the truth is, language can be more dangerous than we often assume—it can kill.

In later life, we grow more familiar with another, darker side of language, some flavor of “anything you say will be taken down and may be used in evidence against you.” There’s a reason there’s a right to silence in many jurisdictions, given how loosely the law, often oblivious to the pitfalls of forensic linguistics, can interpret language as evidence.

One cold January morning in 1953, Derek Bentley, a nineteen-year-old, barely literate youth in the wrong place with the wrong words, was hanged for a murder he did not commit. During an attempted burglary a couple of months earlier, a policeman had been fatally shot by Bentley’s sixteen-year-old friend Christopher Craig, a minor who legally could not be sentenced to death. Derek Bentley, who never held the gun, was held to blame for the crime.

The evidence hinged upon an ambiguous sentence. No one could agree on what it meant. When the police demanded Craig hand over the gun, Bentley was said to have called out “Let him have it, Chris!” after which Craig shot and killed a policeman. Did these words mean, literally, “give him the gun” or, indirectly, “shoot him”? Was Bentley a party to murder, as the prosecution would have it, by inciting Craig to kill?

Indirect speech can have a profoundly direct effect on listeners. Most recently, former FBI director James Comey stated he took Trump’s statement “I hope you can see your way clear to letting this go” as an obvious directive. In his testimony, Comey referenced one of history’s most famous examples of a fatally effective indirect speech act, in which Henry II was said to have exasperatedly uttered “will no one rid me of this turbulent priest?”—whereupon four of his knights set out to assassinate the troublesome Archbishop of Canterbury, Thomas Becket.

So how could we possibly know what Bentley, who was described as “borderline feeble-minded,” meant to say? In Malcolm Coulthard’s forensic linguistic analysis of the case, two preconditions for Bentley’s guilt were laid out: it had to be proven that Bentley already knew Craig had a gun, and that he had instigated Craig to use it. According to Coulthard, the evidence that finally convinced the presiding judge and convicted Bentley was largely linguistic, based on the troubling police transcript of the statements Bentley had made.

A single word seemed to finally tip the balance: the definite article. In his statement, despite denying knowledge of any gun until Craig had used it, Derek Bentley had supposedly said “I did not know he was going to use the gun” at an earlier, crucial moment in the narrative. In summation, the judge made much of the word choice of “the gun” as opposed to “a gun,” which to him showed that Bentley had known about the gun all along and his language had given him away. Thanks to this errant definite article, Bentley was thus considered an “unreliable witness” who did have prior knowledge of the gun and had contradicted himself by denying this later.

It took just two days for the jury to decide Bentley’s fate and about a month later he was executed without reprieve. The unfortunate case of Derek Bentley is one of Britain’s most notorious miscarriages of justice and shines a spotlight on just how fragile forensic linguistic evidence can be, and how prone it is to mistaken interpretations and manipulation by even the most well-meaning of people.

As we’ve seen, forensic linguistics can often provide the key lead in cold cases such as in the Unabomber mystery, enabling investigators to uncover much stronger corroborating evidence that can lead to a conviction as a result. But there’s a danger of things going horribly wrong when slim linguistic evidence is the only thing standing between the accused and their innocence, especially when that evidence might be interpreted carelessly out of context by those inexperienced in forensic linguistic techniques, including judges, lawyers, investigators, and even so-called expert witnesses.

The popularity of police dramas has led a wide audience to believe that DNA evidence is rarely wrong. The reality is that even DNA testing can be flawed, and forensic linguistic evidence is perhaps even more precarious in what it can tell us. But the idea of linguistic fingerprints is compelling. While DNA testing is somewhat out of the average person’s reach, forensic linguistic analysis seems readily accessible. Our native familiarity with language, along with a society’s linguistic baggage and beliefs, can often lead us to think we’re competent enough to judge the simple cause and effect of what language reveals about a person’s identity or intentions. The attraction of this kind of linguistic armchair detective work is that it presents problems as safe and controlled puzzles to be solved using knowledge you apparently already have, innately. But life is not a clear-cut whodunnit and answers to mysteries are rarely so simple.

English professor Don Foster may style himself a literary detective, but he is an oft-cited cautionary tale of how even those who work in language can make major errors when dabbling in linguistic mysteries and criminal cases such as uncovering an author’s identity, without training or experience in forensic linguistics. Foster had once successfully used his simple literary techniques for author identification, based on word count coincidences, to determine who the anonymous author of the novel Primary Colors was in the late 1990s, given earlier leads that had been put forward by others. When it comes to true crimes and other cases of unknown identity, however, those same rudimentary techniques can result in distressing and wrongful accusations.

Retained as an expert linguistic witness in famous cases such as the murder of JonBenét Ramsey, Foster had accused Patsy Ramsey, the victim’s mother, of writing the crucial ransom note. Using those same techniques prior to his collaboration with Boulder police, Foster had also declared her innocence, believing that the crime had been committed by someone he had communicated with on the internet based on their language use. Similarly, the highly publicized 2001 anthrax case led Foster to wrongfully point the finger at bioweapons expert Steven Hatfill in a Vanity Fair article, destroying his career, and resulting in Hatfill suing Foster and Vanity Fair. Foster later went on to disastrously ruin writer Sarah Champion’s reputation by unmasking her, on the strength of counting her commas, as London’s most notoriously anonymous call girl blogger Belle de Jour (later revealed to be scientist Dr Brooke Magnanti).

The reality is linguistic evidence can be highly problematic when looked at in isolation but can be even more so when you take our social and political assumptions about language into account. In the Derek Bentley case, the linguistic evidence of a handful of sentences from the police transcript, a so-called verbatim record, generated so much heated debate back and forth by lawyers and judges over what Bentley could have really intended that, as linguist Coulthard and others have pointed out, a major twist to the story was completely overlooked.

Bentley had steadfastly denied that he’d ever said the sentences that led to his wrongful conviction, either those words verbatim or the sequence in which they were presented. Between a police record, which three officers swore was the result of an unassisted monologue, and the accused, an illiterate man with a history of developmental problems, who should be believed? It’s one thing for a judge or lawyer to sway a jury with a particular interpretation of linguistic evidence, but what happens when the supposedly neutral linguistic evidence itself is unreliable?

We may often believe that transcripts are a true, neutral, straightforward account of verbal interaction. In the absence of recorded speech, they are often all we have to go on, so without thinking, we take them as written. But how reliable are they? You may have heard of the term “verballing,” which refers to false verbal evidence made up by the police, which supposedly is kept in check by audio and video recordings. Even without considering this possibility, it turns out just the act of transcribing speech can be fraught with difficulties.

Numerous studies have shown how transcripts are not as objective and reliable as we imagine. For one thing, listening to recordings or taking down live speech can already be perceptually problematic. Studies have shown that listeners can fill in missing speech sounds and that depending on their background, may even “restore” speech sounds differently. So it seems that listeners, in some cases, can just hear what they want to hear.

The act of transcribing an interaction itself is political—power lies in the hands of those who create the transcript, even without intentional fabrication, and often involves an element of interpretative choices that can affect how the information is seen by others. Take a simple example—if a non-native speaker or a speaker with a stigmatized dialect is transcribed with a kind of eye dialect or vernacular variants that are marked from the standard form, readers may develop a certain perception of that speaker depending on their social biases. How a speaker is represented in the transcript can be problematic when it comes to legal cases that depend heavily on the language contained in transcripts.

In his analysis of the discourse between Bentley and the police, Coulthard shows that what was presented by the police as a true and verbatim monologue record contained discourse markers of a concealed dialogue. When a speaker suddenly volunteers negative sentences with no “narrative justification” such as “up until then, Chris had not said anything,” and “I did not know he was going to use the gun,” in a transcript, it suggests it’s in answer to an invisible leading question. It alters the incriminating perception of Bentley’s statement considerably, because the definite article could very well have been in answer to “a gun” introduced by the concealed question, rather than volunteered spontaneously by Bentley.

In fact, Bentley’s denial that he’d ever said “Let him have it, Chris” was supported by the testimony of both Craig and a fourth officer at the scene who had never been called upon as a witness. It took his family some forty years of dedication before Bentley was pardoned and exonerated, in 1998, of a murder he did not commit.

All this to say, while forensic linguistics can certainly shine a light in the darkest of cases, in the wrong hands, linguistic evidence can be fatally flawed.

The post Sentenced to Death (and Other Tales from the Dark Side of Language) appeared first on JSTOR Daily.


The Strange Life of Punctuation!


Poor punctuation: all rules and no play. Countless style guides over the ages have prescribed the exacting rules for where to put your em-dashes, your en-dashes, your commas, your Oxford commas, your colons—and let’s not even talk about the semi-colon, which has been known to incite fury and debate in even the mildest of punctiliously-inclined folk. Is there anything else so heavily regulated, codified, and coddled as these dull chicken scratchings of written language? Just… follow the rules and no one gets hurt.

“People don’t know why they get so upset about language,” says linguist David Crystal, but for some reason, they do, especially if you appear to break a rule about punctuation. Linguists like Crystal and Gloria E. Jacobs have heartily assured us that there’s really nothing to fear about innovative linguistic uses of punctuation in the internet age—it certainly doesn’t mean the end of literacy for the texting generation, quite the opposite in fact… but to no avail. The moral panic is real.

For instance, you may have heard recently that recalcitrant texters (and the journalists describing them) have been leaving off periods at the end of their perfectly good sentences for some reason. A recent study has determined that text messages ending with the humble period can weirdly seem less sincere (an effect that vanished when the exact same messages, periods and all, appeared in a handwritten note). For some, adding a period in online text might even signify anger, according to Ben Crair in the New Republic:

“The period was always the humblest of punctuation marks. Recently, however, it’s started getting angry. I’ve noticed it in my text messages and online chats, where people use the period not simply to conclude a sentence, but to announce ‘I am not happy about the sentence I just concluded.’ … ‘No.’ shuts down the conversation; ‘No … ’ allows it to continue.”

Ok. The neutrality of the period has up until now been undisputed in written language. It just marks the full stop of a sentence, nothing more, and yet… You really wouldn’t think the loss of a tiny dot would elicit such interest.

So it seems far from being a boring and remorseful assemblage of dots and dashes, “punctuation is not so barren a field for the study of human nature as the reader may think”, says E. L. Thorndike, in the 1949 paper “The Psychology of Punctuation.” We might think the rules for punctuation are set in stone, but plenty of writers of the past have monkeyed around with punctuation styles, with certain marks going in and out of fashion, depending on who you read (looking at you, H.G. Wells). Thorndike points out that “more than one tenth of the punctuation marks in the first folio Shakespeare (1623) and the first printing (1611) of the King James version of the Bible were colons. They now number about 1.5%” (in contemporary texts). Unlike the decline of the colon, that sneaky ellipsis “…” has gone from an unknown quantity to being peppered all over the place, where “:” and “—” fear to tread. (And it being 1949, all these marks were probably counted by hand, which just goes to show how passionate some people can get about punctuation marks).

For philosopher Theodor W. Adorno (as translated by Shierry Weber Nicholsen):

“there is no element in which language resembles music more than in the punctuation marks. The comma and the period correspond to the half-cadence and the authentic cadence. Exclamation points are like silent cymbal clashes, question marks like musical upbeats”

—and not without reason. Punctuation started out as free and easy prosodic units, meant to help the reader read out loud to an audience with all the requisite intonation, tone, pitch and pauses intended by an absent author. Punctuation, in a sense, reminds you that language is really spoken, even if it’s written. Punctuation marks started petrifying in their current places as writers and printers developed codes and customs for the more silent literary language meant to make you sound as though you’ve been to college (as Kurt Vonnegut might have it). So learning to put the right punctuation in the right place among all the written words thus becomes less about speech and cadence and more about a display of literacy and prestige. Punctuation became marks of good grammar. Which I guess is not a bad thing in a crowded paragraph if punctuation gives you order and clarity of meaning and thought.

So what’s a punctuation mark to do in a messaging world? Frankly, internet or online speech is much less classy than formal writing. The use of punctuation in texting, online chat, and instant messaging has certainly evolved quite rapidly, sometimes past recognition for some. So much so that according to linguist Lauren Squires, internet language has developed into another register of language—some might say another dialect, with its own evolving distinctive forms and social meanings, intentions, and subtextual negotiations. Consider linguistic innovations such as acronyms (the old standby “brb,” often pronounced as written), abbreviations (the already outd8ed “gr8” beloved by flip-phoners old and new), spelling variants that reflect the sound of speech and emphasis within the text (“sooo gooood”) and of course also (omg) punctuation!!!!! There’s a lot to say about punctuation and its strange life slash sometimes mysterious disappearance in the internet age.

So who “killed” the period… and why? Was it with the em-dash in the library or the ellipses in the drawing room? Inquiring minds want to know. After a perfectly blameless life calmly separating sentences from each other, how did it start making people sound angry in online speech? As we know, what we conventionally think of as proper writing has, nowadays, very little to do with how language is used digitally. Strict rules from style guides that apply easily to formal writing no longer apply to messages that are often half-formed, half-finished, Gertrude Stein-style, run-on sentences. Punctuation likewise has gone out to play, not just in the construction of emoticons but in a myriad of different rhetorical ways.

With the more speech-like IM, texts, tweets, punctuation is just getting back to its roots, as a way to convey prosodic and speech cues in the absence of sound and vision. Like gifs, emoticons, and emojis, punctuation is another way for the short messages of internet language to use its limited linguistic resources to convey emotion, nuance, and paralinguistic cues, changing it from a purely written form into a kind of breathless and fast moving textual speech that’s really closer to spoken language than written language. Consider also that in internet language, emojis and emoticons occupy more of a meta status in conveying emotional cues—an angry emoticon is probably not sincerely as angry as you think but the overt, ironic semblance of anger.

Now from very little text, and a handful of symbols, you can read so much subtext thanks to the way punctuation use has evolved to subtly signal real speaker intentions. Punctuation, surprisingly, helps people negotiate social relationships online. You can convey genuine annoyance through a simple dot, a developing convention that is also increasingly read as such by all those who receive the message. A period, so useful and subdued in differentiating sentences in a large block of text, seems unnecessarily final and loaded with abrupt meaning in a sparser short form, especially as each separate message is already relatively easy to read without requiring a period. As a result there are often attempts to soften it with the ever-popular ellipses and em-dashes (which have a special prestige for lovelorn punctuation fanciers apparently) or to make it more enthusiastic and emphatic with exclamation marks!!! Some punctuation marks, it seems, are less angry and more cooperative than others.

But it’s not just about the demise of the period. Any nuanced deviation from the norms of internet language might raise a few eyebrows, especially with messages that are very short. The navigation of social relationships online can take some getting used to lest you offend someone with an errant punctuation mark. The loss of a question mark in an innocent question like “what time” or even an overly abbreviated short word, such as “h” for “hi” has been suggested as displaying a lack of care or showing annoyance, and with more and more of us using punctuation less and less in online language, there’s a lot to read into the context when a mark does appear… even if the punctuation mark is acting, in the end, as just a punctuation mark.

And as for who killed the period? Though the current fashion is to use ellipses and em-dashes in texts to soften the blow of a sentence’s end, there’s really only one culprit: the new line break. It’s kind of made the period’s function redundant in instant messages, at least for neutrally separating messages from each other. But as a growing rhetorical innovation to signal negativity, rumors of its demise are perhaps greatly exaggerated—the period won’t be going away any time soon…

The post The Strange Life of Punctuation! appeared first on JSTOR Daily.
