Don't Shake the Flask

Because you don't know if it'll explode

Month: September, 2010

A Nanowrimo Veteran’s WTF Moment

Warning: Grumpy old Wrimo rant ahead.

Through Twitter, someone I follow posted a link to someone else’s weblog.  The blogger is some sort of self-proclaimed networker/marketer type* who has decided to do Nanowrimo this year.  But this blogger has posted a guide to Nanowrimo even though she has never done Nanowrimo before.

What. The. Hell.

I have posted before about my irritation with people who seek to instruct others when they themselves have no experience.  If someone who counted beans for a living suddenly declared that he had written the ultimate bread baking recipe, would you listen to him?  Would you listen if the closest he had ever gotten to an oven was nuking a TV dinner in the microwave?  This case is no different.  If this blogger had just packaged her advice as general writing tips, I would have no problem with it.  But if you have never written a novel in one month before, do not assume that whatever you say will be golden for this particular experience.  Because what if the unthinkable happens–what if you (gasp) do not write 50,000 words in 30 days on your first try?  While I believe that everyone who participates in Nanowrimo will get something out of it regardless of whether or not they make the goal, there's just something not quite right about someone who has failed–or, even worse, never tried–attempting to speak from a position of knowledge.**

To add insult to injury, the blogger implores her readers to follow her own special Twitter hashtag for Nanowrimo rather than the already established #nanowrimo.

What really gets me are the blog's comments.***  Wonderful! the commenters exclaim.  I've been trying to find something like this!  Even veteran wrimos are enthusiastic about the guide.  And the blogger herself is coy with the praise: Aw shucks, I tried looking in Google and couldn't find anything, so I made this.

You tried looking in Google and didn't find anything?!  Are you living in 1970 and completely internet illiterate?  Anyone with a speck of common sense would go straight to the Nanowrimo website, where there is a ton of information ready at the click of a mouse button.  There are also other writers, professional writers even, who've done Nanowrimo multiple times and have written guides.  Some of these guides are even online.  Chris Baty, the founder of Nanowrimo, has written No Plot? No Problem! as a guide to writing a novel in a month.  And if Baty doesn't know anything about Nanowrimo, then nobody does.

This year will be my tenth year participating in Nanowrimo.  This, however, does not mean that I know anything about writing a novel in a month.  Every year, the situation is different.  And every year, I feel like I have something to learn from the effort.  There is always something to learn.  I know no one gives a damn if I have any advice.****  This is fine with me, as I feel more comfortable rambling about my own progress or cheerleading others as a municipal liaison.  But I cannot stand it when someone who has never done Nanowrimo before purports to have the wisdom for winning this challenge.  It has the effect of rendering the experiences of everyone who has participated completely moot.

*Yeah, yeah, yeah.  That should have raised a red flag in the first place.
**This probably describes about 99.9999% of the pundits in the blogosphere.
***Paraphrased for dramatic effect.
****My advice would probably be crap anyway since everyone has their own method for writing.

Alone at the Lunch Table

After reading Fillyjonk’s thoughts on how being ostracized in junior high has affected how she responds to certain situations in the present, I wondered how my own childhood has shaped me into the person I am today.

I think my childhood was rather typical through elementary school.  I wasn’t the most popular person but I wasn’t the person that everyone else avoided either.  Teachers always made it a point to tell my parents that I was quiet (maybe too quiet), but my introverted nature* at that point had yet to screw up my social interactions with everyone else.  I had friends.  I occasionally got invited to birthday parties.  I seemed to be on rather decent terms with all the other kids.  And then right before sixth grade, our family moved south.

We had moved around before and I had managed to make my adjustments.  But somehow, with sixth grade, all the rules about social interactions with my peers got chucked out the window.  Well, maybe not all the rules.  In hindsight, I think it is all the bad memories that manage to stand out.  Happy memories are nice, but they’re not the sort of things which make or break a person.  Anyways, on the whole, I seemed to get along fine with pretty much everyone (yes, even the catty girls), except for the football jocks.  During middle school, they were the absolute bane of my existence.  In high school, I managed to mostly avoid them so that time was not as miserable as it could have been.  But I still do not understand why I was such a target.

This might be a partial explanation for my developing a rather cynical view of people.  It is also one of the reasons why, unlike other sports, football deserves my particular disdain to this day.

Back to sixth grade.  I think this was about the time when I started to embrace my natural inclination to be a loner.  Sure, there were times when I felt lonely and wished I had someone I could talk to and, more importantly, someone who understood the sorts of things I was interested in.  And yes, I remember many, many times I sat alone at the lunch table.  But by the time I entered high school, I had accepted the possibility that such friends did not exist where I lived.

I had a much happier time in college.  That and the passage of time have blunted any of the bitterness I felt about my middle school years.  And mostly, I try not to remember that time at all.  I can still be weird and socially awkward around strangers.  The anxiety is there, of course, but in contrast to my younger self, I take all these things in stride.  Unlike those grade school football jocks**, most people I know today are not out to get me.  At least, not in any obvious way.

*I suspect my introversion is partly innate–I remember living in my own mental world even when I was three–and partly due to some circumstances out of my control in early grade school.  I came to kindergarten knowing little English.  And in second grade, I had a teacher who scared the hell out of me.
**This amuses me now as I’ve discovered that my advisor was himself a football jock when he had been in high school.

The Omnivore’s Choice

Before reading Michael Pollan's The Omnivore's Dilemma, I had pretty definite ideas about food.  Good food, I thought, was fresh food–anything found on the periphery of the grocery store.  Whenever I went grocery shopping, I always made a circular pass going from the produce section, to meat, to dairy, to bakery and deli, and then maybe back to the produce section again if I forgot something.  I avoided anything obviously processed and viewed anything with an "organic" label with suspicion.  Because as far as I could see, the only difference between something labeled organic and something not labeled as such seemed to be the price.  While the only thing I spend my disposable income on is books, I'm still watching my pocketbook as I live on a grad student's stipend.

Going to the local farmer's market, I thought, wasn't all that different from going to the grocery store except for where my money was going.  I viewed choosing the local farmer's market as more of a political and social choice than a nutritional one.  Before reading Pollan's book, I also had the opportunity to visit more than one industrial dairy farm (although invariably for research rather than as some random observer).  As someone who grew up in the suburbs with little contact with farm life until moving to Idaho, I found it an eye-opening experience.  A dairy cow living on a muddy feedlot has an observably different quality of life than the groomed and scrubbed prize cattle at the local county fair.

In The Omnivore’s Dilemma, Pollan divides food into four groups that represent how we modern humans get our food rather than the textbook food groups arranged in the oft-cited pyramid.  The first group is industrial food which ultimately is derived from corn and petroleum products.  Corn is used to make the feed for industrially raised livestock and is chemically decomposed and recomposed to make the processed foods found in the middle aisles of every supermarket.  Oil is expended to transport this food all over the world.  And because this is industrial food, everything is treated as just another cog in the machine–to most people all of this is out of sight and out of mind.

While I appreciate the sentiment that the industrial food system has the capacity to "feed the world"* at an affordable level, I am skeptical about industrial food being the sole solution to the world's food problems.  Pollan points out several problems with this type of food.  While it can provide the calories, it may not provide the correct nutrients.  It relies on petroleum products, which are not renewable.  And it encourages monoculture, which is a ripe breeding ground for disease.  (As a microbiologist, this is of intense interest to me.  There used to be a scare about hospital-acquired illnesses, but as hospital containment procedures have improved, more and more antibiotic-resistant diseases have started coming from the community.  And some evidence so far points to industrially raised livestock as one of the origins of these community-acquired diseases.)

The second group of food is organic food.  But as Pollan investigates, the term organic is rife with contradictions.  To the lay public, "organic" conjures up illusions of wholesomeness.  In reality, much organic food is little different from industrial food, except that pesticides and antibiotics aren't used and the diets of livestock are supplemented with something other than corn.  Perhaps there aren't any unnecessary and unhealthful chemicals in this food, but is the nutrient content any different?  Does it really put a dent in monoculture?  And ethically, do organic livestock have a better quality of life than their industrial brethren–or is it actually worse, since they don't have antibiotics to stave off infections?

The third group is locally grown food.  Here, Pollan visits Polyface Farms, owned by Joel Salatin.  During his week-long visit, Pollan observes how Salatin maintains a thriving farm on what used to be barren land.  In order to keep the farm productive, Salatin makes use of the interdependence between organisms.  One such cycle that Pollan describes is that of the cow eating the grass.  The resulting cow poo is where flies lay their eggs.  The growing maggots feed the chickens, which then excrete nitrogen-rich waste that fertilizes the grass that feeds the cow.  The only problem with this sort of food is quantity and distribution.  This sort of system is difficult, if not impossible, to scale up if you want to feed a lot of people.  And even if production could be ramped up, the use of petroleum products for distribution is going to be unavoidable.

Hunted and foraged foods make up the final food type.  This is the most impractical kind of food.  While it is certainly far more natural for our bodies, and elicits a metaphysical closeness with what we're eating that something pumped up with antibiotics never could, it is also far more difficult to obtain.  And if everyone were to revert to hunting and gathering, the current ecosystem would surely not support all of us.  Pollan advocates trying this sort of food only occasionally.

Out of these four kinds of food, Pollan seems rather enamored with the idea of locally grown food despite its inherent problems.  He sees the solution as supporting local farmers and decentralizing the food system.  For city dwellers who will find it difficult to access local farmers, he proposes that perhaps inner city co-ops are the answer.  In a lecture I attended in January, Pollan gave more details about what he envisioned as a local food revolution.  He cites Will Allen in Milwaukee, who has managed to apply techniques similar to Joel Salatin's in inner city greenhouses.  To the problem of scaling up production, Pollan mentioned a new sort of crop rotation technique being practiced in Argentina which helps increase production while eliminating fertilizer and herbicide use.  And, of course, there is the idea of changing legislation so that it will favor food diversity over industrial monoculture.

But what, if anything, does any of this have to do with the titular problem?  The problem is this: because we in the west are inundated with an abundance and variety of food, we have gone back to square one in trying to determine what is good to eat.  Other cultures have solved the problem by creating national cuisines–which traditionally have eliminated any problems with food.  But here in industrialized America where dietary science has taken over the menu and melting pot multiculturalism has multiplied food choices by a gazillion, people have become schizophrenic eaters.  And all of this is compounded by the modern invention of industrialized food which may taste good but is a virtual nutritional black box.

Pollan advocates simplicity: eat mostly fruits and vegetables and little meat.  I think this answer may be a little too simple.  Even though this is getting back to basics, it may be hard, monetarily hard, for some people.  Due to the way food production and distribution have evolved, cheap food may not necessarily be good food, but it may be the only kind of food people can afford if they have limited dollars to spend.  It's a bit of a catch-22: buy the "good" food and starve because you can't buy enough of it, or buy the "not-so-good" food, which might give you expensive medical problems later on.  Maybe people can enact legislation to help change food production and distribution patterns in order to make good food more available–but one needs to think in realities, too.  Even if the laws and pricing change, it's going to take a while to make those changes.  So what are people to do until then?

I also believe we have to consider cuisines that have been developed by various cultures for hundreds or even thousands of years–because if these dishes didn’t make our ancestors keel over (and maybe even extended their lives instead), surely there is something good about those foods.  Although my opinions about food are little changed after reading this book (I still subscribe to the common sense method** when choosing my comestibles), I think Pollan presents a good argument for being more cognizant about what we eat.  Because food isn’t just the numbers printed under the nutritional information on the side of a box.  It’s also about where it came from and how it got on your plate in the first place.

*I actually heard this coming from a guy who is employed by Monsanto. I wondered if this was some sort of motto the company wanted all of their employees to parrot or whether this guy really believed what he was saying.
**My common sense method mostly consists of avoiding processed foods and attempting to replicate the cultural cuisine aesthetics that I grew up with.

A (Tentative) Synopsis

I have a really bad track record in writing contests in which I compete with other people.  As in–I never win.  This, of course, fuels my growing belief that my writing sucks but it doesn’t stop me from entering said contests (in fact, I submitted a short story just this morning to yet another contest).  I guess I’m just a masochist for rejection.

Last year, Nanowrimo introduced a program called 30 Covers, 30 Days, in which a lucky participant gets a cover for their Nanowrimo project designed by a professional.  I submitted a synopsis last year which, needless to say, did not get picked.  But since Nanowrimo is doing the program again this year, I figured, what the heck, I'll try again.

So here’s my tentative synopsis for Dining with Small Monsters which is shaping up to be a comic culinary sci-fi mystery:

* * *

The Galactic Broadcasting Corporation (GBC) is the largest broadcaster in the galaxy.  But that doesn’t mean that it’s making any money.  With the advertising credits for its long-running staples of reality shows, interactive gaming, and crime dramas plummeting, Nigel Mot–the CEO of the GBC–hatches a daring plan to resurrect the network: start a travel documentary series featuring the origin of the foods served at the coronation banquet of Nigel’s doomed ancestor, the 42nd Emperor of the Andromeda Galaxy.

Euphrosyne "Euphie" Tanaka-Teng, still smarting from getting kicked out of the prestigious Andromeda Film School, has seen the writing on the wall–the ratings for the reality show she's working on as a temp are at an all-time low.  But before she can update her resume, she's offered a position as assistant holographic projectionist on a new show called "Dining with Style in the Delta Quadrant" hosted by none other than the CEO of the GBC.  Thinking that this is her big break, Euphie joins the documentary crew, only to realize that her co-workers' only experience with documentaries is watching ancient footage of David Attenborough in The Tribal Eye and reading reviews of This Is Spinal Tap.

But while Nigel, Euphie, and the crew eat and film their way through the Delta Quadrant, they start critically examining his ancestor's banquet menu and uncovering evidence that the accidental food poisoning of the 42nd Emperor may not have been so accidental after all.  And as strange, unexplainable things start happening on set, it becomes clear that someone will go to any length to stop the documentary team from finding the truth about the emperor's death, which could have a devastating impact on the aristocracy governing the galaxy.

* * *

As this year's professional cover designer is yet another person who specializes in literary fiction, I have pretty much zero confidence that I'll get picked this time around, either.

Make Your Own Must-Read List

In The Freedom World, Jessa Crispin argues against reading books that everyone else has read–that there’s no such thing as canon or a must-read.  In many ways, I agree with her.  After all, we all have different experiences and backgrounds.  We all have limited time and it is better to use it well to read the books that we want to read rather than what other people tell us to read.

But while I agree that there is little utility in putting stock in what other people tell us is a “must-read”, I think there is use for a literary canon–not as something to be forced on people–but as a tool for teaching.  Sure, we can all say that we should just read for our own pleasure, but I think there is merit, too, in picking a few books that everyone else has read and analyzing the heck out of them.  It’s a way of trying to get into someone else’s point of view in how to approach reading, another way to empathize.  I’m not saying that we should do this all the time, but that we should all do this at least once to see what the other side is all about.

That said, though, reading books (especially fiction) is not the same as studying algebra, the periodic table, the organelles of the cell, or even grammar.  Fiction is an art form and thus subjective.  Let’s say you’re someone who hates Picasso but loves Thomas Kinkade.  While there are, of course, critics who may question your taste–no one in their right mind would attempt to make you live in a house filled with Picasso paintings if you preferred Kinkade instead.  Similarly, no one should try to force you to read a book you don’t want to read because it’s all about taste.  And there’s no such thing as “correct taste.”

So if everything is subjective, then why on earth does everyone else want to read a must-read or something in the supposed canon?  I think that while everyone does have their own unique experiences, people still crave a shared experience.  But how you go about getting that shared experience is another matter.  Human beings are social creatures–and many find it in their nature to try to fit in by doing what everyone else is doing.  And as for the rest of us?  Well, I can speak for myself.  I read what I want to read.  But after I've read the book, I tell other people about it or stumble upon other people who happen to have read it.  And that is the crux of it.  Are you a follower or someone who forges their own path, hoping that others will follow after them?

The Right Way to Review

I have many ambivalent feelings about reviewing books.  It used to not be this way.  When I first started reviewing books I'd read on the blog, it was pretty straightforward: write about what the book was about without giving everything away, say whether or not I liked it, and why (maybe).  I'm not sure if anyone even read those reviews (aside from the occasional author who stumbled onto them), but that's not even the point of why I wrote the reviews in the first place.  Writing reviews makes me feel productive–that I got something out of the hours I spent reading, even if it was something negative.

And then I started reading about other people’s policies on writing reviews.  Some people only write reviews for books that they like because they think that negative reviews are counterproductive and mean.  Other people make it a policy to never write reviews because they’re authors and they fear backlash from other authors.  There are those who simply say if they liked a book or not.  And others who explicate a manuscript within an inch of its life.  Some people have rating systems.  And others don’t really have a review when they’re writing a review–rather, it’s just a summary.  All these different methods for reviewing books made me question if my own reviews were really doing the books justice.

In general, I don't think I'm a very good reviewer.  This is, I suppose, where my insecurities come into play.  My background is science-oriented.  I haven't had much training in critical literary analysis other than one elective class in 18th century literature that I took just for the hell of it as an undergrad.  Of course, after years of writing occasional reviews for the blog, I think I've improved somewhat.  But still, sometimes I wonder if anyone reading my opinion thinks it's any more or less valid than that of someone with a Ph.D. in English literature.

Insecurities aside, intellectually, I think every reader has a valid opinion even if it's just something simple like "I loved it!" or "I hated it!"  Each reader brings in their own set of experiences and beliefs, creating a unique relationship with the books they read.  The book itself may be a record of an author's expression, but the only thing that is really meaningful at the end of the day is how the individual reader interprets it.  While others might find the reader's interpretation useful, this is mostly a side benefit.  The interpretation itself is intrinsically personal.  Whether or not it is articulated into so many words is not a requirement for that interpretation to be valid.  So when a book club personality from Salon voiced her dislike of certain types of reviews (via Quill & Quire), I had an immediate impulse to defend the succinct and supposedly lazy reviewer.

For the last fiction book I reviewed, I explicitly stated that I found the main protagonist "unlikeable".  According to the book club personality, then, my descriptive shorthand is an indication that I'm just another bully in the literary playground who is simply being mean.  She believes that "unlikeable" is a code word for complaining to the author that their characterization is bad.  Literary critics, I think, read too much into the reviews of the ordinary reader.  In my case, "unlikeable" was a descriptor and not a complaint against the author.  In fact, I ended up liking the book even though the character was "unlikeable" because I saw a reason for why the character was written that way.  Another reader may have had an entirely different experience–perhaps he chucked the book out the window primarily because the character was unlikeable, regardless of the rest of the plot.  But that doesn't make that reader's view illegitimate either.

People who make their living as book critics and reviewers have long been regarded by the rest of society as the arbiters of literary taste.  They've been the ones who set the gold standard for what makes a good book and, by association, how one should examine and opine about such books.  But with the advent of the internet, where anyone and his dog can post reviews online without an English teacher standing over them to rap their knuckles for not following proper form, the audience and respect that critics have previously enjoyed are slowly eroding in favor of a more democratic platform for opinion and analysis.  It is no wonder, then, that there are those who cry foul when someone types out a one-line review just like every other empty-headed tweeting bird.

Grumbling About Writing Advice

From Ignore anyone who tells you to write, write, write! (via @simplywriting):

What I’m saying is that I don’t care what you do, just don’t think that to ‘be a writer’ you have to grind yourself into the ground, because you don’t. You have to work hard, yes. But you don’t have to spend every waking hour trying to do what some blogo-nitwit on the internet (including me) says you should be doing.

And if someone questions your commitment because you chose to watch X Factor or American Idol rather than attempt to beat your writer’s block with an hour and a half’s worth of horrible, depressing, turgid, ultimately unusable writing, please tell them to shove their judgemental claptrap right up their bum.

Writing for the sake of writing is a waste of time.

Something about this article rubs me completely the wrong way.  Maybe it's because the article plays into the concept of writing as an artistic endeavor rather than a job.  If it's an artistic endeavor, then people have the notion that it's okay to take large swathes of time off to recharge your creative energies and be eccentric–because that's what artists supposedly do.  If writing is a job, then you write whether you feel like it or not.  In reality, it is a mix of the two.  While there is an artistic and hobbyist aspect to writing, for some people writing is their livelihood.  If writing is going to literally put food on your table, then you can't fritter your time away doing something unproductive.

Of course, there is no one correct way to be a writer.  How one writes is up to the individual.  But all writers do have something in common–they write, regardless of whether the regimen is the 8 to 5 sort or the last two hours in the day after a long marathon of reality TV.

I don't particularly agree with the idea that "horrible, depressing, turgid, ultimately unusable writing" is "a waste of time".  Bad writing expresses something even if it does not express it well.  It is also practice.  Writing is like a lot of other skills: in order to get better at it, you have to practice.  A concert pianist doesn't get to that caliber by sitting around and playing only when he feels he can play perfectly.  Similarly, a writer should not wait until he or she is in the frame of mind for perfect prose.  There's no such thing as perfect writing.  And even if you settle for just good enough–even that requires practice.  And if it's still not good, well, that's what revision is for.

Ultimately the backlash against this “write, write, write” mantra is misguided.  The only reason people continually harp on writing is that some people who call themselves writers simply don’t write.  It’s okay if someone who has been writing a couple hundred words each day suddenly hits a block and just needs to take a break for a week in order to recharge.  But if your brief vacation from writing stretches out to several years, I begin to seriously doubt that it’s writer’s block.  Instead, I begin to think it’s procrastination.

I’m Not Really Bob, But Call Me Bob Anyway

The author of a James Tiptree Jr. article says this:

The taking of a pen name is in many ways a frightening process. It is easier than we believe to become something other than ourselves.

I first came up with my current pseudonym/internet handle during high school.  I had worked for the school's student newspaper as a news editor, which was fine with me.  I wanted to see what this journalism thing was all about since everyone else of a writerly bent seemed so enthusiastic about it.  All I can say now in hindsight is, at least they didn't make me write the sports stories.  Anyways, during one staff meeting, the editor-in-chief came up with this bright idea about starting a poetry section.  The inaugural column would come from in house and later be opened up to submissions from the student body.

This, of course, spurred everyone on the newspaper staff into a poetry writing frenzy.  I recall writing quite a bit of poetry while I was a teenager, but even then, deep down, I knew that everyone else would probably call it overwrought dreck.  That was why I came up with the pseudonym and insisted to the editor-in-chief that if she decided to publish any of it, the pseudonym byline was going with all of that angsty and nihilistic free verse, not my real name.  Fortunately, she ended up using the sports editor's love poetry instead.  (Teenage love poetry is such a bad idea*.  The fledgling poetry section crashed and burned, and I think the editor-in-chief learned her lesson.)

There was one point before I started blogging when my pseudonym wasn't me.  I was dabbling in a post-apocalyptic sci-fi idea where my pseudonym was a character making a precarious living exploring a decaying urban underground filled with mutated monsters.  Sometimes I would write bits of dialogue or quotes from the character on the whiteboard hanging outside my dorm room door.  My dormmates found the quotes disturbing–because like all that horrible teenage poetry, the character was of a rather angsty and nihilistic sort.  The story ended up going nowhere.

Then came the blog and the pseudonym came with it.  I could have come up with a new pseudonym.  Maybe even a pseudonym that could have sounded like a real name.  But I didn’t.  I like to think it was due to laziness.  My subconscious might say that I clung to the pseudonym because it represents the cynical and not particularly reticent side of me.  Whatever the case, I’m still using it as a byline for churning out prose of the fiction and non-fiction variety that almost no one reads.

Some people argue that using a pen name is inauthentic and cowardly.  Under some anonymous handle, a person feels the freedom to say anything–including flinging vitriol like a souped-up Linda Blair–without repercussion.  This, I think, is patently authentic rather than the other way around.  And perhaps it is cowardly, but it is the sort of cowardice born from society's expectation of polite conversation and the realization that there are just some things that are better expressed anonymously.

I would like to think that I come across as authentic on this blog even though I do not post about everything that pops into my head**.  But I think it is true, too, that there is a line between being authentic and becoming “something other than ourselves.”  In some ways, my internet self is a different person.  From my online words alone, there is a certain picture of me that is projected to the reader because those very words have been thought over, chosen, and posed on the screen to present a certain image.  If the reader were to interact with me in real life, I am sure a different picture would emerge no matter how authentically I write.

But then again, I think this is true for anyone who has done any sort of interaction online, including anyone blogging under their real names.  You might post a picture of yourself–or even a video–so that your readers would know exactly what you look and sound like, but it’s still posing. The unavoidable distance between the poster and the reader is the inherent barrier of the internet.  While words and the choice of them may be revealing, in the end it’s the equivalent of a pseudonymous personality.  It might not be deliberately created by you, the real-named blogger, but it’s there nonetheless in your readers’ imaginations and those electronic squiggles that look suspiciously like Times New Roman.

*My advice: Don’t do it! And if you do, make sure it never sees the light of day.  You’ll thank me later.
**Twitter is more amenable to that.

Po-Moing the Pointy-Eared

Over the weekend, I read Brandon Sanderson's essay Postmodernism in Fantasy.  The gist of the essay is this: Sanderson started out trying to write fantasy with a new twist but in the end just wrote what he liked.  While being fresh and thinking outside the box might expand the genre, it may also alienate readers.  A postmodern take on fantasy will only end up relying too heavily on the original fantasy it plays against and will fail to stand alone as a work.

As a writer, I don't think there's anything inherently wrong with trying to write postmodern fantasy and subvert all the well-known tropes.  If a writer wants to construct something they consider different, then they can have at it.  There is no correct way to write a book, nor is there a correct reason for anyone to write a book.  But as a reader, there's definitely a difference between a story that succeeds and a story that fails even though everything else looks technically correct.  That's because the reader acts more like an ice cream taster than the visiting food inspector.  The inspector doesn't care what flavors the ice cream parlor carries as long as everything is sanitary.  However, the chocolate-loving customer would come away dissatisfied if all the parlor carried were weird, unappetizing flavors like mahi mahi, pickle, and cream of mushroom.  Most fantasy readers react the same way.  They might marvel at the construction of a cream of mushroom fantasy with all its literary po-mo trappings, but what they'd really prefer is the chocolate.

The real difference is not the dichotomy of postmodern versus traditional but rather the intellectual versus the emotional.  Yes, yes, pretty much every non-romance writer (and reader) tries to ignore the dreaded "E" word, but there it is.  When people pick up a genre fiction book, they don't expect to be intellectually stimulated.  Well, they could be (readers aren't an unintelligent bunch, generally), but that is not the expectation.  What they do expect is good storytelling with some emotional payoff.  A story that makes one think but leaves one cold is not going to have the same impact as something that gives the reader a gut feeling of satisfaction.  By the same token, then, why can't writers write something that is emotionally satisfying rather than the intellectual equivalent of throwing everything and the kitchen sink into the ice cream maker just to create something new?

I find that throwing oneself completely in one direction or the other is a dangerous path to take.  Writing the same thing over and over again without any intellectual challenge in an attempt to fulfill some emotional need is a quick way to kill the enthusiasm of both the writer and the reader.  There has to be some sort of compromise between the intellectual and the emotional.  If it's only intellectual, people will recall it only as something difficult and unpleasant to read.  If it's only emotional, the readers might be happy, but with nothing to engage the mind, the book may be easily forgotten.  Successful books, for me at least, contain healthy doses of both these elements.

Imagining Happiness in the Cards (And Probably Getting It Wrong)

You’d think that after reading The Art of Happiness, I’d completely swear off reading any books about happiness.  But no.  I generally do not believe that one bad book will completely invalidate an entire genre even though it might take me a couple years to reach for another one.  In this case, after reading Carl Zimmer’s Microcosm*, I went looking for similar recommended books online and this one was on one of those lists.  So one could say, yes, I did literally stumble upon happiness.

Stumbling on Happiness by Daniel Gilbert** is more like a science book disguised as a self-help book.  Gilbert's writing style is humorous and easy to digest.  But is it actually going to help anyone achieve happiness?  Maybe not.  It will, however, certainly make one more aware of the sorts of tricks the mind plays to adjust one's emotional well-being.  Because the book is less about happiness and more about how crummy our brains are at trying to imagine the future.

Apparently our brains are so good at imagining things that they do so without our awareness.  Take, for instance, our vision.  There's a blind spot in our visual field, but we don't see holes in it because our brains automatically fill in the spot with a reasonable guess of what should be there.  Our memories, in actuality, are pretty spotty.  When we retrieve a memory, it is mostly the key points that we remember (such as: tuna sandwich, new lunch joint, bad) and the details that we fill in at present (the sandwich smelled like a rotten egg and looked like gray goo, the bread tasted like cardboard–and the waitress was mean, too).  There are experiments that verify this.  In the normal course of things, this doesn't seem like much to worry about, but it does particularly affect our perception of the future and how happy we think we might be in it.

The problem we have with predicting the future*** is that we tend to focus on the big events and gloss over the details that might have just as significant an impact.  You might say that you will be happier in a year when you're on vacation in Europe compared to now when you're sitting in a cubicle, but your brain is only imagining all the good stuff–the walks along the Seine, the abundance of beer and buxom barmaids, the marveling at ancient ruins.  You've failed to take into account the details–like the fact that your clumsiness makes it likely you'll end up swimming in the Seine, that you'll wake up with a tremendous hangover the next morning next to some guy you wouldn't touch with a ten foot pole, and that you'll spend more time waiting in lines in hundred-degree weather than snapping pictures.  Another problem is that when we think about the future, our brains take what we know best–the present–and use it to generalize about a future that will unfold under completely different circumstances.  This is why, when others try to tell an unhappy person that there will be a brighter tomorrow, the unhappy person does not believe it.  An unhappy person, at that moment, cannot conceive that they will be happier the next day because their brain is taking the present as a basis to predict the future without factoring in everything else that might happen.

While all of this is very interesting, I couldn't help but wonder how much culture plays a role in happiness.  There is one study Gilbert describes that involved asking Asians, Europeans, and Hispanics how happy they were at one particular moment and then asking them later how happy they remembered being.  Asians reported being happier in the moment but remembered being less happy than their European counterparts.  Hispanics were less happy in the moment but remembered being happier than the Europeans.  One could speculate that the Asians and Hispanics were unconsciously conforming to cultural stereotypes–that Asians are taught to subsume personal happiness for the good of the family, while Hispanics are believed to be a happier group of people on the whole.  In another study, women were found to remember more stereotypically feminine feelings and men more stereotypically masculine feelings, even though during the event itself they reported the same sorts of feelings.  So could people alter how they remember their emotional states by changing cultural influences?  That's difficult to say, since we all internalize cultural expectations even when we're not aware of it.

Can we out-trick our brain on the way to happiness?  People who tell themselves that they're going to be happy end up less happy than people who let things fall where they may.  We could try just living in the present, but our brains are programmed to think about what may happen next–if nothing else, for our own self-preservation.  And even if we experience traumatic events, most of us are resilient enough to bounce back by finding some silver lining, even if it's just an illusion****.  Clinically depressed people, on the other hand, see reality only too well.  Happiness, apparently like a lot of other things, requires balance.  In this case, you need a balance of reality and illusion in order to be emotionally healthy.

So how can we predict how we will feel about something in the future?  Gilbert addresses this problem and argues that happiness can be generalized.  In experiments using emotional surrogates, subjects could predict how they would feel about a particular event by reading reports of how other people felt after experiencing it.  The only problem with trying to convince people to pay attention to other people's experiences is that everyone believes they are a special snowflake.  Happiness, one can argue, is a subjective emotion.  There is no way that you can say for sure whether I'm happy or sad because you aren't in my position.  Similarly, I cannot truly tell how happy anyone else is because I am not in their situation.  The experiments with emotional surrogates prove that this thinking is wrong.  But people feel better believing that they are unique rather than just average, so they use their faulty imaginations instead of relying on tried-and-true advice.

I personally feel that generalizing happiness is a very precarious position to take.  I am reminded of some other research on how volunteers in a test game dealt with the notion of sharing.  Volunteers from the west reacted very differently from volunteers elsewhere in the world.  Does this mean that one group is behaving normally while the other group is doing something weird?  Or is something else happening?  While you can do all the psychological experiments you want on college students in order to get some sort of "norm", it would be disingenuous at best to apply this norm to the rest of the human population–most of whom are definitely not college students.  A lot of things in our lives are affected by culture and environment.  We shouldn't think that happiness is any different.

*Excellent book, by the way, if you want to learn about the history of E. coli and modern molecular biology.
**From my brief online searching, I don’t think he’s related to the author of Eat, Pray, Love except that they both appeared on a PBS documentary.
***This can be illustrated by the fact that forecasters, sci-fi writers, and psychic hotlines, more often than not, get it wrong.
****This is also why, as Gilbert describes it, the brain “cooks the facts” in order for you to continue to believe what you prefer to believe.  (This is probably also why I feel that it is hopeless to argue with certain people no matter how much evidence to the contrary I give them.)