Sunday, March 31, 2019

#63 why time flies, number of friends (again) and how long it takes to make them, space travel, a useful word




Good afternoon.

            I have been resisting the temptation to get out into the yard and gardens to start cleaning up the detritus from winter.  It's probably a little early to do so when there are still small mounds of snow and ice here and there.

            Why time flies.  An interesting explanation.  I'm not sure I buy it, but here it is.  A professor of mechanical engineering at Duke, Adrian Bejan—so prima facie, at least, the source is perfectly reputable—argues that the reason time seems to go by faster as we get older is that our brains process images more slowly as we age.  So time speeds up because of physics.

"People are often amazed at how much they remember from days that seemed to last forever in their youth," said Bejan. "It's not that their experiences were much deeper or more meaningful, it's just that they were being processed in rapid fire."

            The gist of the argument is that the neural networks in our brains grow larger and more complex as we age, so it takes longer for an image to make its way through them.  In addition, the neuron paths deteriorate as we get older, so the electrical signals (that are part of the creation of images in the brain) are slowed.  Bejan points out that infant eyes move much more quickly than adult eyes—and thus absorb information faster.  "The end result is that, because older people are viewing fewer new images in the same amount of actual time, it seems to them as though time is passing more quickly. . . .  'Days seemed to last longer in your youth because the young mind receives more images during one day than the same mind in old age.'"

            I suppose that's as good an explanation as any.  We who are past 40 have certainly experienced the phenomenon; the question is why the perception.  One wouldn't have expected the answer to come from mechanical engineering.

* * *

            Some time ago I noted the work of anthropologist and evolutionary psychologist Robin Dunbar at the University of Oxford.  He hypothesized that there are limits to the number of people that most of us have in various degrees of friendship.  Here's an excerpt from a New Yorker article about the Dunbar numbers:

The Dunbar number is actually a series of them. The best known, a hundred and fifty, is the number of people we call casual friends—the people, say, you'd invite to a large party. (In reality, it's a range: a hundred at the low end and two hundred for the more social of us.) From there, through qualitative interviews coupled with analysis of experimental and survey data, Dunbar discovered that the number grows and decreases according to a precise formula, roughly a "rule of three." The next step down, fifty, is the number of people we call close friends—perhaps the people you'd invite to a group dinner. You see them often, but not so much that you consider them to be true intimates. Then there's the circle of fifteen: the friends that you can turn to for sympathy when you need it, the ones you can confide in about most things. The most intimate Dunbar number, five, is your close support group. These are your best friends (and often family members). On the flipside, groups can extend to five hundred, the acquaintance level, and to fifteen hundred, the absolute limit—the people for whom you can put a name to a face. While the group sizes are relatively stable, their composition can be fluid. Your five today may not be your five next week; people drift among layers and sometimes fall out of them altogether.
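
            For the numerically inclined, the "rule of three" in that excerpt is easy to see when the layers are lined up.  Here is a minimal sketch in Python; the layer sizes are the ones from the excerpt, and the ratio check is mine, purely for illustration:

```python
# Dunbar layers from the excerpt above, innermost to outermost.
layers = [5, 15, 50, 150, 500, 1500]

# Each layer is roughly three times the size of the one inside it.
for inner, outer in zip(layers, layers[1:]):
    print(f"{inner:>4} -> {outer:>4}  ratio = {outer / inner:.2f}")
# Prints ratios of 3.00 and 3.33 -- "roughly a rule of three."
```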

            In part keying off Dunbar's work, Jeffrey Hall, a communications professor at the University of Kansas, investigated how long it takes for someone to go from acquaintance to friend (of some degree).  On the one hand, "the more time two people spend together, the more likely they are to become friends. On the other hand, there are people we see regularly but don't consider friends. So just how many hours of togetherness does it take for an acquaintance to turn into a friend?"

            In a couple of interesting studies, Professor Hall "found that it took about 50 hours of interaction to move from acquaintance to casual friend, about 90 hours to move from casual friend to friend, and more than 200 hours to qualify as a best friend."  If people spent no more than 30 hours together, they didn't develop a friendship bond at any level.  However:  just spending time together doesn't necessarily lead to friendship.  Many of us, perhaps most, have spent many hours with people where we work(ed) but didn't become friends with them; either we didn't like them much or (at least in my case) were happy to work with them but not interested in developing any relationship outside work.

            How people interact is also important, no surprise.  "'When you spend time joking around, having meaningful conversations, catching up with one another, all of these types of communication episodes contribute to speedier friendship development.'"  For instance, asking someone what's happening in their life signals an interest in keeping the relationship up to date.  We don't usually engage in that kind of conversation with many of the people we work with or interact with in other settings except perhaps on a superficial level.  We don't bother because we aren't interested.

            And the point is?  Hall argues that "you have to invest" time.  There is ample research demonstrating the importance of friendships to a healthy life.  "Having friends helps to keep us healthy, both physically and mentally. On the other hand, a lack of social connectedness is as bad for us as smoking or obesity."  (← I don't know if that specific claim is backed up by evidence, but it makes sense to me on its face.  I know that the morbidity and mortality statistics for single older men are scary.)  So, he says, people need to make it a priority in life to spend time with people.

            I wonder how communication on social media counts.  My speculation is that it can be an additional, or supplemental, way to stay in touch with people who are already friends, but that it doesn't contribute more than an iota to the development of friendships.  Few people use social media for any exchanges of substance, so they wouldn't fall in the category of meaningful conversations.  The same is probably true for email, although that can be used for more substantive exchanges if both parties are willing; in my case, for example, when Kathy and I first met, for several months in the evenings after work we exchanged hundreds of emails on topics important to building a relationship.  Those emails couldn't substitute for personal interaction, of course, but they sure helped clear up a lot of questions anyone has in that context.  We could elaborate on our answers and exchanges in face-to-face conversations.

            The New Yorker article again:

As constant use of social media has become the new normal, however, people have started challenging the continued relevance of Dunbar's number: Isn't it easier to have more friends when we have Facebook, Twitter, and Instagram to help us to cultivate and maintain them? Some, like the University of California, Berkeley, professor Morten Hansen, have pointed out that social media has facilitated more effective collaborations. Our real-world friends tend to know the same people that we do, but, in the online world, we can expand our networks strategically, leading to better business outcomes. Yet, when researchers tried to determine whether virtual networks increase our strong ties as well as our weak ones (the ones that Hansen had focussed on), they found that, for now, the essential Dunbar number, a hundred and fifty, has remained constant. When Bruno Gonçalves and his colleagues at Indiana University at Bloomington looked at whether Twitter had changed the number of relationships that users could maintain over a six-month period, they found that, despite the relative ease of Twitter connections as opposed to face-to-face ones, the individuals that they followed could only manage between one and two hundred stable connections. When the Michigan State University researcher Nicole Ellison surveyed a random sample of undergraduates about their Facebook use, she found that, while their median number of Facebook friends was three hundred, they only counted an average of seventy-five as actual friends.

There's no question, Dunbar agrees, that networks like Facebook are changing the nature of human interaction. "What Facebook does and why it's been so successful in so many ways is it allows you to keep track of people who would otherwise effectively disappear," he said. But one of the things that keeps face-to-face friendships strong is the nature of shared experience: you laugh together; you dance together; you gape at the hot-dog eaters on Coney Island together. We do have a social-media equivalent—sharing, liking, knowing that all of your friends have looked at the same cat video on YouTube as you did—but it lacks the synchronicity of shared experience. It's like a comedy that you watch by yourself: you won't laugh as loudly or as often, even if you're fully aware that all your friends think it's hysterical. We've seen the same movie, but we can't bond over it in the same way.

With social media, we can easily keep up with the lives and interests of far more than a hundred and fifty people. But without investing the face-to-face time, we lack deeper connections to them, and the time we invest in superficial relationships comes at the expense of more profound ones. We may widen our network to two, three, or four hundred people that we see as friends, not just acquaintances, but keeping up an actual friendship requires resources. "The amount of social capital you have is pretty fixed," Dunbar said. "It involves time investment. If you garner connections with more people, you end up distributing your fixed amount of social capital more thinly so the average capital per person is lower." If we're busy putting in the effort, however minimal, to "like" and comment and interact with an ever-widening network, we have less time and capacity left for our closer groups. Traditionally, it's a sixty-forty split of attention: we spend sixty per cent of our time with our core groups of fifty, fifteen, and five, and forty with the larger spheres. Social networks may be growing our base, and, in the process, reversing that balance.

            I suppose none of this is revelatory, but I've thought about it in connection with class reunions.  I'm a member of the committee planning our 50-year reunion this coming September and I've attended most of my high school class reunions (every five years since we graduated).  I concluded early on that they're a lousy place to re-engage with people or to make new friends.  Hall's work demonstrates that a 10-minute chat with a classmate at a reunion won't lead to a friendship at any level unless both people follow up repeatedly.  The parting lines "we'll have to stay in touch" or "we'll have to get together" may be a disguise for exiting a conversation with no intention of getting in touch with the person—but even if they are genuinely meant, unless they are followed up actively, no matter the good intentions, they will be meaningless.

            Given that state of the relationship world, I've been nudging classmates together in casual lunches in various groupings in the expectation that some of them could become friends, either with me or with each other.  If it's true that it takes 50 hours just to move from acquaintance to casual friend, it's an uphill battle:  at a two-hour lunch once a month, 50 hours takes more than two years of steady attendance, never mind a one-time reunion event.  I wonder if that 50 hours can be reduced if (1) both people decide that they want to move beyond acquaintance toward a friendship at some level, and (2) common background demographics pave the way, at least in part (e.g., high school classmates shared a school experience and, at least for those of us in public schools in the 1950s and 1960s, came from the same geographic area of a city).

            In any event, aside from reunions and such, the work of Hall and Dunbar provides an interesting way to think about your friendships and how you spend your time.

* * *

            A while ago Elliott sent me an email out of the blue.

Sitting in bed at about 1 a.m. wondering:

We know that in the vacuum of space you can accelerate to a speed of X, shut off your propulsion system, and you will continue to drift indefinitely in a straight line at a speed of X. At least until you collide with something or enter a gravitational field that alters your trajectory. Because in space there is no resistance or friction. So why then is it not possible to leave the gas on and accelerate all the way to light speed?

In theory it is possible, or at least partially. You could, with an indefinite fuel supply, accelerate all the way there. Problem is we don't have effective enough fuel to realistically accomplish this because light speed is really fast and takes a long time to accelerate to. Which I knew. There's also that issue with physics about what happens to solid matter as it approaches light speed which I don't completely understand but I do know exists and is a barrier. Though in this case not actually relevant to my particular question.

But in my late night internet browsing I discovered there's another barrier I didn't know about which seems obvious now: Space isn't really empty. Not the obvious planets and stars and whatnot. But the 'vacuum' of space is not entirely empty. It's full of nearly-microscopic particulate matter left over from any number of things, as well as trace quantities of gasses and the like.

Normally these bits of dust don't factor into equations of human made objects in space. But as you approach even a fraction of the speed of light, suddenly those invisible gas molecules and grains of space sand become bullets relative to you. At a certain speed you are effectively ramming your space vessel into a constant and endless torrent of machine gun fire, even when flying through 'nothing.' So even if you calculated a path around any asteroids and moons you would still be ripped apart by real, physical objects long before you ever got to the point where the density of your matter and all that Einsteinian stuff becomes relevant.

I don't know what normal people think about when drifting to sleep but this is me on a regular basis.

            I told him that I ponder much more mundane matters, like the disposition of rocks and minerals and where to spread the mulch when spring appears and I can work in the gardens.  His discovery, however, does suggest that intergalactic travel, if it ever happens, will have to be via some undiscovered mechanism that doesn't take thousands of years or subject a ship to machine-gun fire from microscopic pieces of matter in space.
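
            Elliott's machine-gun image holds up under a back-of-the-envelope check.  Here is a minimal sketch in Python; the grain mass (one milligram, a coarse grain of sand) and the speed (10 percent of light speed) are assumptions I picked for illustration, not figures from his email:

```python
import math

C = 299_792_458.0        # speed of light, m/s
TNT_J_PER_KG = 4.184e6   # energy released by 1 kg of TNT, joules

def kinetic_energy(mass_kg, fraction_of_c):
    """Relativistic kinetic energy: KE = (gamma - 1) * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - fraction_of_c ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

# Assumed, illustrative numbers: a 1-milligram grain of "space sand"
# struck at 10% of light speed.
ke = kinetic_energy(1e-6, 0.10)
print(f"{ke:.2e} J, about {ke / TNT_J_PER_KG:,.0f} kg of TNT")
# -> roughly 4.5e8 J, on the order of a hundred kilograms of TNT
```

            Even at a tenth of light speed, well short of where the Einsteinian effects bite, a single barely-visible grain carries the energy of roughly a hundred kilograms of TNT.  Machine-gun fire may be an understatement.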

* * *

            I and many of the people I know are suffering from Weltschmerz.  The term seems to have variable definitions; the two that I kenned are "a mood of weariness or sadness about life arising from the acute awareness of evil and suffering" and "mental depression or apathy caused by comparison of the actual state of the world with an ideal state."

            But if I exercise the privilege I have of being able to ignore the world for periods of time, then I am reasonably chipper.  I hope you are as well.

-- Gary

Wednesday, March 13, 2019

#62 the past biting back?, 2 useful words, a bit more on genetics & personality, bereavement effect on health




Good morning! 

            In the course of an email exchange on another topic, a friend wrote to me with an insightful observation.  "I am torn over the blackface fiasco in Virginia - the 80s were a time of not much sensitivity. It was a poor choice, to be sure, but to blow up the careers of anybody who did something like that seems over the top too. It makes me think that 15-20 years from now as this current generation of people that have thousands more photos and videos of themselves move into adult and high powered jobs - how are they ever going to get past something they did in their youth? Aren't we potentially limiting the available pool of decent public figures if you have to have people that were so socially reclusive they never left their house, thus never offended anybody - or were rich and powerful enough to figure out how to delete all electronic images of themselves?"

Setting aside the case of the accusations against the Lt. Governor, which are different and may involve criminal actions, it does seem to me that people can grow in their views and attitudes over 35 years.  I certainly have.

            As for what Millennials will face, the Washington Post had an article a couple of days after my friend sent the message.  Maya Kosoff, the author, argues that her generation "grew up with the expectation of constant surveillance."  Her parents were like me:  "Soon after I joined Facebook as a high school freshman in 2006, I received a stern warning from my parents and my school's guidance counselors:  Everything you post on the Internet is there forever.  If you want to eventually get into college and have a job, you should be careful what you put online."  Some say that millennials, having recorded much of their lives, will surely encounter difficulties when an unsavory or unwise event surfaces.

            Kosoff thinks not. 

Our parents needn't have worried. We've always known more about managing appearances than older generations ever will.  Having grown up with the Internet, we knew that the things we put online were potentially permanent and that, inevitably, someone was watching. We internalized its omnipresent logic of surveillance, crafting our behavior and our virtual selves in accordance with the knowledge that someone, somewhere might one day judge us. Far from dooming us, our comfort with social media might make us better political candidates.

She, however, was and is highly savvy about the implications of what is posted online.  I doubt that all or even most millennials are that perceptive. 

She even goes one step further.  "Millennials are mocked for taking selfies or posting pictures of our meals, but those habits speak to our interest in crafting an image of our lives for the people who are already watching — and sometimes waiting to pounce."  To what extent, do you imagine, do we all "craft an image," even if subconsciously or unintentionally, when we use social media?  Kosoff also points out that some recent social media are built to disappear; after 24 hours, stories on Snapchat and Instagram vanish.  There is also widespread understanding that certain social media companies misuse or abuse personal data drawn from posts, so people are wary about posting much of substance and often delete what went before.

            She concludes:  "the common wisdom that constant digital surveillance will make it harder for us to run for office looks dubious. Instead, growing up with an understanding of the Internet's immutability may make us more scandal-proof than our predecessors."

            I guess we'll see.

* * *

In the "love of words" category, here are two that are timely for many.  Technically they apply to birds, but as in the evolution of all language, their use has spread.  One is "nidifugous" (ny-DIF-yuh-guhs), which means "well-developed and able to leave the nest soon after hatching."  Its opposite is "nidicolous" (remaining with parents for a long time after birth).  So, as the author of A.Word.A.Day points out, "if your adult child suggests living in your basement, you could simply say, 'Don't be nidicolous!'"

* * *

            A brief return to genetics and personality traits.  My good friend (and Professor of Biology at the University of Oregon) Nathan Tublitz wrote to me in response to my short disquisition on personality and genetics.  He offered three amplifications that help to understand the phenomena.

1) THE GENETIC COMPONENT OF HUMAN TRAITS IS VARIABLE DEPENDING ON THE SPECIFIC TRAIT. All human traits, including components of personality, appear to have a genetic component. What is important to note here is that the relative impact of genetics on specific traits is highly variable, ranging from very minor to 100% (the latter in the case of eye color). Because of this variability and because we have not yet identified the genes involved in individual personality traits, it is not yet possible to draw firm conclusions about the impact of genetics on specific traits. Research that purports to draw such conclusions has to be viewed as highly speculative.

2) A 50% GENETIC COMPONENT OF A TRAIT MEANS THERE IS A 50% ENVIRONMENTAL COMPONENT TOO.  A 50% genetic contribution to an individual personality trait implies that 50% of that trait is shaped by environmental factors (which to us biologists, also means the internal environment including hormones). It is worth noting the obvious but not often stated point that the environmental 50% will also have a substantial impact on a trait.

3) GENES ARE NOT STATIC. The effects of individual genes are neither binary nor static throughout the lifespan of an individual. Some genes only turn on once (for a specific period of time). However, many genes turn on and off multiple times during the lifetime of an individual, and the downstream effects of an individual gene are not always the same each time it is activated. Moreover, the triggers for gene activation/inhibition are often environmental and may be different each time a gene is activated/inhibited. Thus the genetic contribution to specific personality traits (and behavior) is almost certainly variable throughout our lifetimes and not fixed as is often assumed.

Those who want simple answers to questions of human personality and behavior will undoubtedly be disappointed by these points. However those of us who study these issues continue to be fascinated and indeed awed by the enormously complex interactions between genetic, neural and environmental mechanisms that underlie behavior of all organisms including humans.

            Thank you, Nathan.  I learned from this!

* * *

            Because, for obvious reasons, bereavement has been on my mind in recent months, I found this pertinent (and troubling)—and it is or will be an issue for all of us at some point in our lives, especially as those of us in the Baby Boomer cohort reach old age.  There was recently an interview in nextavenue with Professor (and M.D.) Toni Miles, director of the Institute of Gerontology at the University of Georgia.  On reflection, I realized the evidence she presented is not surprising, but it is nonetheless a cause for concern.

            The gist of the research findings is that "a significant loss is deadly serious, putting the grieving at higher risk for serious health problems, and even their own premature death."  Miles said that "critical losses are destabilizing and accumulate over time. The death of a significant loved one — by that I mean a parent, spouse, sibling or child — increases your own risk of dying."  She reported that she was not referring to an elderly couple where one dies shortly after the other (aka, in some cases, takotsubo cardiomyopathy or "broken heart" syndrome); she used data from those 50 years old or older who "lost someone close in the last twenty-four months. At a population level, those people are two times more likely to die over the course of a lifetime than someone without that loss."  (Either she spoke loosely or the person recording the interview was careless with language.  Everyone is likely to die over the course of a lifetime!  To put it correctly, presumably she meant " . . . two times more likely to die early or prematurely over the course of a lifetime. . . .")

            The greatest impact on life expectancy is within the first two years after the death of someone close.  More frightening, "the research also shows that the younger you are when you lose someone important to you, the worse it is for your health. The elevated mortality risk for children who lose a parent goes up fivefold."  Miles also reports that "in our models, losing a child, even an adult child, statistically carries a high risk of morbidity."  So far I'm still here, but given these findings, I do wonder if Krystin's death will have a long-term effect on the length of my life.  Unfortunately, it probably will:

            [Interviewer]  It sounds like time, in fact, does not heal these wounds.

[Miles]  That's true. The risk never goes away; it does not return to the expected level of the general population that has not experienced such a loss. Even when you adjust for age, the risk stays higher. When we measure this at a population level, we believe that five percent of the deaths that happen in a year are attributable to a loss; by that I mean, they might not have died if they didn't have this loss in their background.

Yikes.  But I do have a question about the data Dr. Miles presents.  In the normal course of events, parents die before their children, and statistically when the children are adults.  *All* of us lose parents (unless we predecease them, but set that case aside).  So that's one major loss we all face.  Many lose a spouse; many lose siblings.  Some lose children.  Are not the effects of those losses on the survivors already built into life-expectancy statistics?  If so, how does the effect of one such loss in one person's life mesh with the fact that life expectancy is already affected by these losses?  I can't quite get my head around how those two facts interact.  (I know, it's population statistics versus the experience of an individual.  Even so. . . .)

            Needless to say, if my life expectancy was 85 (a number I made up) before Krystin died, and it's now lower, I am not thrilled.

In a book review of Advice for the Dying (and Those Who Love Them): A Practical Perspective on Death, by Sallie Tisdale, the reviewer quotes Tisdale to the same effect that Miles did:

After the death . . . there are the survivors and their grief. Writes Tisdale:

"Grief lives in the body. MRI studies show that a grieving brain has a pattern unlike other emotions. Most of the time, an emotion lights up parts of the brain, but grief is distributed everywhere, into areas associated with memory, metabolism, visual imagery, and more. Grief can make you sick; it can be brutal, even deadly."

This has been shown by studies into the effects of bereavement on physical and mental well-being. Research done in the Karolinska Institutet in Sweden has shown that bereavement is correlated with adverse health outcomes like self-harm, hip fracture and unplanned hospitalisation. Similarly, results from the latest wave of the Irish Longitudinal Study on Ageing, run out of Trinity College, showed widowhood a greater predictor of frailty in older adults than being single or separated.

            Miles argues this is a public health issue that should receive attention, with a focus on ways to mitigate the impact of loss.  I suppose, but I'm not sure what anyone could do or say that would lessen the impact of Krystin's death on me (or her mother or brother or Kathy).

            I have another reservation—and this reflects a personal view rather than any data—which is that (for me) the age and medical condition of the person who died affects the extent and duration of grief.  My cousin Mae lived to 94, had a full and active life, was ailing and unhappy the last year or so and was ready to go.  I don't know that my dad at 83 was ready to go but he'd had a full and active life and got to see all seven of his grandchildren at least reach their teenage years (and was headed for a decline from Parkinson's that he would not have liked).  My mother at age 62 was definitely *not* ready to go, and certainly Krystin was not.  In the latter two cases I grieve deeply; in the former two, less so.  It would be enlightening to parse Dr. Miles's data to learn if the impact of the loss varies with the age and condition of the decedent.  It may be that there is wide variation in effect, moderated by age and medical status.

* * *

            I sometimes wonder if I am guilty of omphaloskepsis when I write these emails.  I hope not.

            On that amusing lexicographical note, I wish you well for the week.

-- Gary 


Wednesday, March 6, 2019

#61 mosquito hearing, chirality, predicting snowstorms, poetry versus opera




Good morning.

            We joyfully await the 100% chance of heavy wet snow on Saturday (according to the National Weather Service).  Or not.

            This falls in the "I hope it's not so" category of research findings.  "Mosquitoes can hear over distances much greater than anyone suspected, according to researchers at Cornell and Binghamton University."  It seems that up to now, biologists believed that eardrums were needed for hearing at a distance and that other means of hearing worked only at very short distances (like a few inches).

A series of experiments has now provided neurophysiological and behavioral evidence that Aedes aegypti mosquitoes -- which transmit such diseases as yellow fever, Dengue, Zika, West Nile and Chikungunya viruses -- can hear specific frequencies as far away as 10 meters (32 feet) or more.

Wonderful.

            Moreover, the frequencies the mosquitoes can detect include both that of female mosquitoes in flight—and human speech.  I did not know (or particularly care) that mosquitoes breed in flight, so it is no surprise that males are attracted to the beating of a female's wings.  I can't picture how they did this, but the researchers "fitted mosquitoes with an electrode in their brains and made neurophysiological recordings of the auditory nerve" from some distance away.  Apparently the hairs on a mosquito can "sense sound from air particles vibrating at certain frequencies."  They can "hear" people speaking.

            This is not reassuring; it's "we don't know":  their work "offers no proof that they use it [sensing sound] to home in on people. The insects are known to pick up sensory cues such as carbon dioxide, odors and warmth to locate people."

            What amazes me about this research is that electronic devices are now so small that they can be put on or in a mosquito.  I know electronics have been shrinking over time, but that's really small.  Presumably these electrodes do not interfere with the ability of the mosquito to function.

The usual question:  so what?  The results, they say, "do not offer viable new avenues for mosquito control" but, oddly enough, "they open the door for developing highly sensitive directional microphones and hearing aids that use fine hairs that sense the speed of air particles as they are jostled by passing soundwaves."  Another case of goofy-sounding research potentially having real-world application.  In this case, totally removed from the focus of the research!

* * *

            Tell me what this is about:  "Chirality of Weyl fermions."  I did not look at the article or the precis.  I do not know the meaning of three of those four words (I understand "of").  I don't pose the query to be snarky.  What it tells me, in four words, is that there are advanced fields of research that, at least in short descriptive titles, are far removed from much human understanding.  Certainly from mine.  I assume that whoever looks into this chirality could explain it to me if I were to inquire.

* * *

            Those of us who have lived in the Twin Cities for a very long time (that is, at least back to 1990) remember the Halloween blizzard of 1991 and the "Black Friday" blizzard in late November that followed a month later.  Especially in the case of the first one, the National Weather Service really blew it on the forecasted amounts of snow, and I recalled (I thought) it had to amend and amend the forecast as the storm progressed.  Kathy and I were talking about those snowstorms recently (as we were watching a heavy snow coming down), and wondered if the NWS technology had improved in the intervening 30 years so that they wouldn't be likely to miss like that again.  So I wrote to my buddy Kenny the meteorologist/climatologist.

            Kenny told me that I recalled correctly. 

The Halloween Blizzard was initially forecast at 3-6 inches, then 4-8, and even when 8 inches fell on Halloween alone, before the main event was supposed to have started, forecasts were modified to 8-16 inches. Not until most of the damage had been done did the NWS get the forecast totals right. Part of the issue was reasonable disbelief: how could a storm at that time of year really produce that much snow? Also, models were worse and there were fewer of them. The best ones at the time only had forecast about half of the total precipitation, and never picked up on the track and intensity of the main low pressure system.

However, the late November, Black Friday blizzard one month later was much more traditional, and the NWS was able to see a big one coming a few days in advance. Forecasts for a foot or more were in place a day in advance.

            Kenny said my hypothesis was correct:  with newer surveillance techniques and equipment, "I do think it is much harder to get surprised by something as enormous as the Halloween Blizzard."  More usual, he said, is predicting totals that are higher than what falls or making errors about location.  In the case of big storms, "the last really big bust was on March 8-9, 1999, when MSP recorded 16 inches of snow, after expecting 3-5."

            So the last mistake of any significance was 20 years ago.  It's a little disappointing, in a way, that we're unlikely to get bushwhacked by a storm like we did at that Halloween.  I've noticed that this year, for example, the NWS snowfall predictions are usually pretty close to what actually falls.  But it was fun in a strange way to see that snow keep coming and coming, beyond all expectations.  Of course, with global warming—which likely means more winter storms for us—we may see snow coming and coming with greater frequency than in recent decades.  When *we* (that is, my cohort) were in elementary and secondary school, we had to walk (not bus) through mounds and walls of snow.  Elliott and his generation missed that challenging experience.  Perhaps their children will get to relive our experience.

* * *

Some time ago a long-time friend and I exchanged messages on different art forms.  He is a talented guy who is, among many other things, a published poet.  I don't read poetry but I sent him a link to an article I thought he might find interesting.  He wrote back after he'd read the article.

If you read it, I’d be curious to know what you think, as an avowed non-reader of poetry. Part of the thesis in the book he is reviewing . . . is that Poetry is set up on an unrealistically romantic-idealist pedestal in the way that it’s taught and discussed, so that most poems fall short of attaining "True Poetry" and most erstwhile young poets also fall short and early on give up on writing (or continuing to read) the stuff. Does that seem on the money to you? As one of a tiny minority in this country who actually grew up enjoying poetry and who has continued reading lots of it throughout my adult life, I am still a little bit confused about why hardly anyone else finds it as interesting and enjoyable as I do.

            I wrote back to tell him that part of my lack of understanding of most poetry is that I'm particularly dense when it comes to allusions and imagery.  I mostly have no idea what the poet is talking about when I read poetry.  There's a reason I was a good academic staff member:  I dealt with straightforward stuff that required little creativity (in any artistic sense of that term).  Just facts and processes and policies and writing them or writing about them.  At the same time, I confessed that I have not given poetry a serious effort, although I did try to help both my kids when they had poetry sections in their high school English classes.  They hated it, and when we'd work together to try to read and understand and interpret a poem, we were all hopeless cases (like father, like children, it seemed).

I also contended that poetry is more difficult than other forms of artistic expression.  Opera?  The plots are only barely plots, only there to provide roles for singers.  Classical music?  Most of it one can listen to without believing there's any particular message or understanding one should take away from it.  Sculpture and painting?  Sometimes there's more than what meets the eye, but even if one doesn't understand that, it's still possible simply to enjoy the work.  Plays?  When there is a subtext, usually a good program will provide context and analysis.  And so on.  I can understand, in varying degrees, these expressions of creativity.  Not so with poetry, at least for me.

It may be, I concluded, that I just like reading non-fiction, which is about 80% of my reading.  That which isn't is junk reading purely for pleasure, like murder mysteries.

My friend responded and gave me a marvelous disquisition on poetry.

What you’ve written makes sense to me, mostly. It seems that one of the things you’re getting at is that most of your reading is more or less utilitarian, intended to get you from point A to point B in some way (to gain knowledge or answer questions or explain rules or lay out plans) and the rest helps you relax, transporting you effortlessly and vicariously elsewhere, providing you with entertaining cinematic plots and characters, that sort of thing. . . .  I get that, whether or not the content is challenging, your reading material of choice is easily digestible in terms of presentation format:  solid, standard, serviceable prose, meant to communicate clearly while not calling attention to itself.  Right?  [I told him that he was indeed right.]

I think I’m drawn to poetry for two complementary reasons: to enjoy the musical qualities (tone, cadence, rhythm) of the language itself, and to invite a dialogue between the printed page (or the poet out loud) and the part of my mind that works with metaphor and intuition and the appearance of the macrocosm in the microcosm. For the first, another person might rather go straight to music or song lyrics, with some bits of Joyce or Faulkner thrown in to taste. I suppose most would go to church or read philosophy for the rest. 

Your point about helping your kids with poetry homework assignments reinforces my impression that teaching poetry in school often misses both of these reasons, and turns the poem under consideration into some sort of complicated logic problem or sudoku-style puzzle. As a substitute summer school English teacher, I’ve had that same job, helping kids work through the prescribed comprehension questions in the textbook. If poetry were all about analysis, I would join in the usual response: "if that’s what they were trying to say, why didn’t they just say it?!" Like many types of analysis, that whole process takes all the joy out of the poem. From what I can tell, our language arts curriculum (especially at the secondary level) is still under the thumb of the New Criticism of the 1940’s and 50’s, where the emphasis is on explication, picking apart the bones of the poem to examine its central "argument". And the patient dies on the operating table.

I’m pretty sure most contemporary practitioners would agree that this is not the way poetry should be taught. Unfortunately, most high school English teachers either enjoy that sort of thing, or else they dislike poetry as much as their students but figure a small painful dose of it is good for them. So students end up viewing poetry about the same as Huck Finn viewed castor oil.

My friend proposed we exchange examples of art forms:  he'd send a poem he liked and I'd get him to an opera.  He sends me the occasional poem (most of which I like), but he hasn't gone to the opera with me (yet)!

-- Gary
