On Being Open

I read this book because I thought it might help me create better characters in my novels. As an author, I struggle to create characters who don’t sound and act just like me. I’ve often chosen a female first-person narrator just to force myself out of my own head. I’ve tried Black characters, immigrants, and aliens (space aliens, I mean) for the same reason. It’s a struggle because I’m me and not anybody else. How do actors do it? They can be anybody.

I did learn some interesting lessons about acting from this book. Actors simply gush over it, judging from reviews. I’m mystified by that.

Esper co-taught with legendary acting coach Sanford Meisner for nearly two decades and apparently wears the mantle now. I’m no actor, so I cannot fairly describe what the “Meisner technique” is, but this book purports to transmit at least the flavor of it.

The chapters describe a series of workshops Esper conducted with a group of eight actors at various levels of experience. Esper gives instructions for executing a certain scenario, such as walking into a room or interrupting someone; each student then tries it, followed by an extensive feedback discussion.

I’d say the main lesson conveyed is to be a good listener, which means don’t worry about your lines and how you’re going to deliver them but be open and receptive to the other actor(s) and let yourself react “naturally” to that person and the situation. Of course you will react with your memorized lines, not your own speech, but beyond that, your “acting” must be your genuine reactions. In other words, be extremely open to experience.

Some people are naturally open to experience (their own and others’) and some are not. It’s supposedly a fixed, inborn personality trait, measured on the “Big Five” inventory (many versions online). If you are not accustomed to feeling the presence of the other and to being open to your own feelings, then this book might be a revelation. Otherwise it might seem like a compilation of incredibly mundane observations masquerading as wisdom.

It’s certainly not well written. The co-author, DiMarco, is an actor with an MFA who has written other books, but his attempts to introduce drama into scenes that have none are cringeworthy, as are his attempts at elevated description. On the other hand, he seems to have understood and conveyed the Esper acting lessons, which was the main task.

Did I get what I wanted, insight on how to write better characters? Yes and no. I didn’t learn anything new, but my attention was shifted to the idea of openness.  I decided I could improve my characters by being more open to their presence (which presence I have to create first, of course – no small feat).  I think that will improve my work.

As for dialog between characters, I decided that I am pretty good at having characters be open to each other, so that was satisfying. So I didn’t pick up any new writing technique but I did shift my attention a little in a way that I think will be helpful.

I still don’t understand the acting magic, which I appreciate, but don’t see revealed in this book. This openness technique would be very practical for improv, it seems, but for a scripted, directed part – I don’t really see how it would help much. But I’m not an actor, so…

Esper, William, & DiMarco, Damon (2008). The Actor’s Art and Craft: William Esper Teaches the Meisner Technique. New York: Random House/Anchor (286 pp.).

Ten Things That Don’t Work

These products and processes don’t really work, at least not as they should.

  1. Cell phones

Amazingly, after a quarter-century, cell phones still do not work reliably for making simple voice calls.  I still have to move to the south side of my house to keep a call from breaking up into unintelligibility. “Can you hear me now?” is no joke. And I live in a major American city, on a hill, not in some undeveloped rural backwater. Sure, voice calls work adequately most of the time, but why not all the time? Why is it still a challenge to make a phone call?

  2. Ice dispensers in refrigerator doors

The old-time aluminum trays with a lever that had to be yanked vigorously, multiple times, to release broken ice cubes while the tray itself stuck to your hands – those were no fun. But since the 1970s, small, colorful plastic trays have been around that release the ice cleanly with a twist. I never had any trouble keeping a plastic bin of ice in the freezer, and I scoffed at my first refrigerator with an in-door ice dispenser. Well, I admit I like the convenience, when it works, but that’s only 75% of the time. Often the dispenser either grinds and grinds with no output, or jams and beeps, or spews ice all over the floor. It’s just not a reliable technology. How hard can it be?

  3. Wireless printers

A wireless printer should be a no-brainer. The computer is on Wi-Fi and has no trouble staying connected and sharing on the home network, but the printer! For some reason it can’t seem to stay online. Quite often the computer tells me the printer is “offline,” and I feel there ought to be a button labeled, “Well, put it back online, then.” But instead I have to go into the control panel, find and select the printer, and click it back online. Sometimes I have to power-cycle the printer. Ironically, the printer is only two feet from the computer, and a short piece of cable would make it 100% reliable, though not shareable. It’s inexplicable.

  4. Gift cards

Judging from the kiosks holding ten thousand of them in supermarkets, gift cards are extremely popular. Sometimes one is just the right “thank you” token, so I’ve chosen from the multi-colored display, and that’s where the trouble begins. First, it’s unclear what they cost. I presume the cost will be face value plus some fee representing the seller’s costs and profit margin, but that fee is not known until after you’ve bought the card. It’s a crapshoot. The fee can run as high as 25%. Plus, the card has to be “activated” by the seller, but I have purchased gift cards and paid the fee, only to hear later from the recipient that at the time of use the card was not “activated” and the fee had to be paid a second time. Surely that was an error, but it is impossible to know anything about your gift card’s status until the recipient tries to use it. Even then, how much value remains on a gift card is a guess. Some receipts will say what it is, but most don’t. When the purchase price is more than the remaining value on the card – good luck at the chip-only card reader at the checkout stand. How can such a non-transparent and hard-to-use product be a success in the market? It’s a mystery.

  5. Car batteries

Have you ever noticed that your car battery goes dead just at the moment when you need it to start the engine? Does it go dead in the middle of the night when nobody cares and send you an email alert? No. You find out at 4:30 am when you’ve got 40 minutes to make it to the airport. Is that really the best we can do? I vaguely remember that back in the 1960s, cars had a gauge with a red needle that told you the status of your electrical system. What happened to those? Now the day your car battery will die is as unpredictable as your own death. You know it has to happen, but you also know it will be a surprise.

  6. Postal Address

What is my “postal address”? Is it the place where I sleep most of the time? Where my refrigerator is? Where I park my car? My postal address should be an arbitrary identifier, like my email address, designating a point in space where I want my mail delivered. But if you change your address, even for six months while you’re away on assignment or vacation, or in the hospital, or for any reason, watch out! Suddenly your tax obligations change, your school tuition doubles, your insurance may be cancelled, and your employment status may be jeopardized. Merely “forwarding” your mail to a new location is not a viable workaround. The postal service calls that a “temporary change of address,” with all the consequences any change of address brings with it. The idea that a person is located at a postal address is wrong, a holdover from another era. My postal address should be where I want my mail delivered and nothing else. It should not presume to indicate “where I live.” Alas, I don’t see this confusion being cleared up soon.

  7. Home Alarm Systems

A home alarm system is supposed to protect your home from burglary, but does it?  My research into such systems revealed that 99% of alarm events are false alarms. My friends who have systems confirm this. In many cities (including mine) the police charge you $200 every time they respond to a false alarm, so these systems can get expensive. In addition, when the alarm is triggered, the dispatch company only responds about half the time, according to online reviews, and when they do respond, it is usually one to three hours after the event. I once had an alarm system that notified me directly by email and text when my alarm was triggered, supposedly an “advanced feature.” But I discovered that the alerts were often more than 24 hours after the fact.  You get an alarm alert, you take time off work (if you can), rush home to find everything is normal. It was just a pet, or the wind, or a brown-out or something forever inexplicable. And if there really were a burglary, and you and the police were dispatched say within a half hour, what good would that do?  A house burglary will be over in five minutes. A home alarm system is not a viable commercial product, but people buy them.

  8. Computer Tablets

It took me a long time to understand why anyone would ever want an iPad or its Android equivalent, but finally the price was low enough that I thought I’d get one for taking notes at a seminar I would attend. It was easy enough to learn, and to my surprise the handwriting-to-text translation was remarkably good. But when I tried to actually use it in class, I found out why a tablet is not a good product for this use. The fatal flaw is that you can only have one application open at a time. Oh, they advertise otherwise, sure they do. “Use multiple apps at once!” It turns out, though, that’s only for certain apps, ones the vendor chooses, not the ones you would like. So you can have a mapping program open while your email is open. I guess that’s something. But for taking notes in a class, I need to have, at a minimum, several reference documents open, including reading notes I already made, a browser with multiple tabs, and the note-taking app itself. I’m not there to record what the speaker says, like a stenographer. I’m there to understand, to process, to learn, and for that I need reference material. So I went back to taking notes on paper and using the tablet to look up references, but again, only one reference at a time, a very slow process. I finally gave it up as more trouble than it was worth and concluded that tablets are designed for consuming information, like shopping online and movies (though not both at once), but they are no good for producing information, which requires multiple streams of data. And I wonder, what kind of a weird limitation is that? Why is it necessary? Memory is cheap. CPU is cheap. What is the holdup? Nobody knows.

  9. Fruits and Vegetables

The problem with “produce” (odd term) is that fruits and vegetables no longer have any taste. They’ve been engineered to survive shipping and storage, with thick skins and permanent color, to maximize the seller’s opportunity to buy in bulk and sell them to you before they go rotten – at least, visibly rotten. They look good even when they’re old. As a vegetarian, I care about how my food tastes, but even when I buy at farmers’ markets, nothing tastes like anything. Tomatoes? They have texture and a little bit of aroma left, but that’s about all. Potatoes, the same. Spinach? Nothing. I could go on and on. I’ve had avocados in my kitchen that don’t ripen at all after weeks on the counter, and others that look and feel perfect but are completely rotten on the inside. You can’t tell by looking anymore. And today’s food is flavorless. Except for fruits, which have been managed into dense sugar blasts. Most apples are now so sweet, I find them inedible. A pineapple will send me into a coma. Strawberries do not taste like strawberries; they taste only like sugar. Grapes? Never mind. I’ve learned to buy only the more obscure fruits and vegetables that apparently haven’t been worth the engineering time and money to ruin yet. I dare not name them.

  10. Self-publishing

Self-publishing is easy and cheap to do. I’ve done it. The problem is that nobody will find or read your book. Getting published is not a problem anymore. Anybody can make books available for sale online as ebooks and, for a very reasonable price, have handsome physical paperback (or hardback) books printed “on demand” and offered for sale as well. But very few people will ever read your material. Five thousand new ebooks are published every day (every day!), and the major publishers put out at least ten times that number in printed books for bookstores every year. That’s a lot of books. Why would yours attract any attention? Of course, your five friends and your mother will buy your book, but is that enough? For some people it might be. If you’re trying to reach an audience, it’s not. If you have a captive audience already, you’re home free – say you have ten thousand Twitter followers, or you’re a public figure or head of a large organization. Then you have an audience, though a narrow one. Yes, there are a dozen examples of books that have been “discovered” from the self-publishing domain and become popular hits. Somebody always wins the lottery too. That’s not a meaningful fact for a writer who wants to find an audience. So when people say they are “a published author,” it means little. Publishing is not the trick. Finding readers is the trick. Self-publishing can be done, but that doesn’t mean it’s a process that works.

Words and Spear Points

The destiny of reading and writing is extinction, like the skills of operating a spinning wheel, cutting a quill pen, and knapping flint tools. Reading and writing will fall into increasing disuse until they are known and practiced only by specialists and hobbyists, the way some people still use Morse code or know how to read Old English.

Replacing reading and writing will be talking and listening, mediated by video and music. Photographs too, but mainly video and movies, and then holograms or VR presentations, whatever the next thing is; it will be visual, or musical, or both. We’ll still talk to each other in face-to-face conversations, although those will become increasingly rare even inside the home.

It’s already happening, as anyone can see by examining the state of communications technology. I had thought until recently that the trends to music and video were merely preferences enabled by cheap, available technology, not that these trends would utterly displace reading and writing.

Geological Time

I was doing research for a book I’m writing, research that immersed me in evolutionary and geologic scales of time. I became aware again of how recent modern human culture is. Imagine a timeline of the earth’s history stretching from the ground to the top of the Empire State Building, with Homo sapiens represented by the thickness of a postage stamp at the top of the building, and modern culture since the last ice age as the thickness of the adhesive on that stamp. On that scale, the duration of literacy doesn’t even show up. It’s a blip.

I am an avid reader and writer, always have been, but it struck me that the practice of visually scanning and interpreting tens of thousands of words in one long serial order and then making sense of the whole is absurd on the face of it. That doesn’t even sound like a viable methodology. If somebody proposed it today, you’d laugh.

Clovis point

Reading and writing have been highly valued skills in society during my whole life. They certainly have paid my rent. They are skills like being able to make good stone spear points: difficult to learn but very useful and highly valued by society – in their time. Yet I always assumed reading and writing were forever. They are the very definition of civilization. Fifty thousand printed books are published every year, with ten or twenty times as many new ebooks. The very foundation of society seems set in the written word – history, literature, law, science.

The culture of making stone spear points must have seemed just as important, vibrant, all-encompassing and never-ending in its time. What could ever replace a good spear point? People will always have to eat.

The records of civilization are increasingly kept in video format. Movies are the new literature. History is conveyed in documentaries and popular songs. Science reporting now emphasizes ‘data visualization.’ Students don’t read textbooks; they watch YouTube instruction videos. And fewer people are reading all those printed books that come out every year. Bookstores are disappearing, as are newspapers and professional journals. I doubt anybody is reading a million new ebooks each year. Reading and writing are technologies that have peaked.

Why am I still committed to reading and writing? Looking with a cold eye, it’s only what I’m used to, what I’ve always done. Reading and writing have been successful for me, but they’re obsolete. I need to get with the future.

I estimate reading and writing will last longer than I will. They probably have another hundred years to run but extinction is their destiny. Hard-won skills though they are, they just won’t be needed in the future except in niche applications.

So I’ve decided. When I finish writing my current novel, it will be my last. Henceforth I will turn my attention to video communication. I don’t know how, but I’ll figure it out. Maybe there’s an instructional book I can read.

Salt and Fire


I don’t normally do movie reviews on my blog because there are so many movies and so many reviews already. But Salt and Fire, the latest Werner Herzog movie, deserves my attention because it is almost universally despised: only twenty-five percent at Rotten Tomatoes, a four-out-of-ten rating at IMDb, and reviews like “Easily the worst movie Herzog has ever made.”

I disagree. It’s a terrific movie, and I’ll try to explain why. I went to it because I love Herzog and I’m a fan of Michael Shannon (see him in “My Son, My Son…” – a Herzog movie – as well as “Ice Man” and “99 Homes”). I didn’t know Veronica Ferres, a mega-star in Germany.

The main complaint against Salt and Fire is that it has essentially “no” story, poor script, stiff acting, and its images are unimpressive. But that’s all misleading, perhaps deliberately so. I think it’s a philosophical essay on the theme of “perceptual distortion.”

I can’t quite put it all together, but if I were going to, here are the elements I would focus on.

  1. The story line was so weak as to be almost non-existent. The dialog was stilted, the script chaotic and the acting alternately wooden and overdone. Action was ludicrous and plot development implausible. The slight ecology theme was perfunctory. Even the locations and sets were implausible (e.g., hasty new paint on a run-down hotel). Sounds great?  Well, none of that was accidental.

The message is: Don’t look here for traditional storytelling. Your idea of a movie as a drama played out on a stage does not apply. The whole methodology of storytelling is actually a cultural syntax and not a natural mode of communication. It is a distortion, and this movie will distort that traditional mode to illustrate the distortion.

  2. Visual distortion is the main theme. Movies especially are a learned syntax for perceiving. You normally accept a movie as a window on reality rather than the artificial construction that it is. Twenty-four pictures per second become a moving image only because of a perceptual illusion, but you don’t even think about that. You accept the movie in the natural attitude.

Anamorphic compression

Movies as managed distortion can be seen in the anamorphic lens: an oval or cylindrical vertical element in the camera that compresses the image along the horizontal axis.


Uncompressed Pic

The projector then has a “reverse” lens that re-stretches the image back to normal size for viewing.

The process is widely used and was used in Salt and Fire.
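The squeeze-and-restretch round trip can be mimicked on an image array. This is only a toy sketch, assuming the common 2x squeeze ratio; the function names are my own, and a real anamorphic lens compresses the image optically rather than by discarding columns as this crude model does.

```python
import numpy as np

def squeeze(frame: np.ndarray, ratio: int = 2) -> np.ndarray:
    """Crude horizontal compression: keep every `ratio`-th column,
    standing in for the camera lens squeezing the scene onto the film."""
    return frame[:, ::ratio]

def unsqueeze(frame: np.ndarray, ratio: int = 2) -> np.ndarray:
    """Crude horizontal re-expansion: repeat each column `ratio` times,
    standing in for the projector's reverse lens."""
    return np.repeat(frame, ratio, axis=1)

wide_scene = np.arange(24).reshape(3, 8)  # a tiny 3x8 "wide-screen" view
on_film = squeeze(wide_scene)             # 3x4: half as wide on the negative
projected = unsqueeze(on_film)            # 3x8 again, width restored for viewing

print(wide_scene.shape, on_film.shape, projected.shape)
```

The round trip restores the width but not the discarded detail, which is exactly why the optical version of the trick matters: the cylindrical glass packs the whole wide scene onto the frame without throwing information away.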

Anamorphic photography is used to enhance image density and saturation, but especially to capture wide-screen shots with good detail. I believe this is why the movie was shot on the salt flats in the first place. It is about as wide-open a location as you could imagine, with hardly any detail. It is pure wide-openness. I’ll bet Herzog’s thinking went from the anamorphic lens to that particular location, not the reverse.

Other visual, photographic tricks were shown throughout the movie to clue the viewer into this theme – hey, it’s anamorphosis over here! Herzog practically hit us over the head with it.

Mirror anamorphosis

We also saw a big demo of mirror anamorphosis, described by Shannon’s character. The theme returned in Shannon’s description and flashback of the anamorphic painting in Rome, a tradition in art since the Renaissance.

So we get the message: pictures, especially in movies, are not “real.” We tend to view them as mere windows on reality but what they show depends entirely on your point of view.

  3. The photography emphasized close-ups of the actors’ faces – extreme close-ups, often from forehead to chin – and those shots were held for up to fifteen seconds. That’s unusual, and it was very frequent in the film. Why did Herzog do it?

Traditionally, that’s how you show that a character is thinking, and often Herzog had a voiceover to give the thoughts during the closeup. But that doesn’t fully explain what was going on.

Anamorphic mumps

One additional explanation is that the closeups illustrate “anamorphic mumps,” a distortion inherent to the anamorphic lens that broadens the face unnaturally in over-correction and makes the actor look like they have mumps. This distortion can be mostly corrected in post-production. Anamorphic mumps are especially prominent and hard to correct when the face is off-center on the screen. I’d have to see the movie again to decide whether most of the facial closeups were off-center; I can’t remember. In any case, even though anamorphic mumps were not obvious in the film, the facial closeups may have been another reminder about anamorphosis in visual perception.

Another aspect of the extreme closeups is the idea that faces themselves are distortions. You can’t tell by looking at somebody what they’re thinking. And in the movie, the voice-over thoughts delivered during the closeups were often extremely banal.  I don’t know exactly why. Something to do with the person within versus the public persona – especially emphasized in the highly distinctive faces of these particular actors, the leads, but also in the boys.

  4. There was a strong memento mori theme, the idea that people should be reminded that life is short and we’re all mortal. Several of the banal pseudo-philosophical quotations given by Shannon’s character were of this nature, and carefully repeated just to be sure we got them.

What was that about? It may be related to the supposed ecological disaster and the volcano threat – “Don’t forget, people, we’re all gonna die!”

Herzog may have been trying to tie the theme of distortion to our everyday perception of life.  You think your everyday life is important, but don’t forget:  memento mori!  You have the rumbling eruption “heard” just underfoot  as a powerful reminder.

The superficial story line also reinforces this view. You think you’re going on a routine scientific expedition and you’re suddenly in jail and then in a life-threatening situation. Memento mori!  Life itself is an illusory distortion.

I cannot right now add up all these elements into a coherent whole, but I do believe the film was a didactic presentation (as Herzog films tend to be), concerning the topic of perception as distortion.  It was not supposed to be, and it wasn’t, traditional storytelling.

The interesting thing is that he didn’t do any overt, obvious distortion in the photography, which would have been very easy.  Instead, his message was subtle but clear, as if to show both the distortion and how readily we overlook it.

I need to think further about the idea that storytelling, as we know it, is a fraud. But it was a great, thought-provoking, and satisfying movie.

And that’s my story.


Emma Bovary: Airhead?

Mauldon Translation

I read Madame Bovary in high school, in French, which is to say, I didn’t read it. What I did was spend many hours with a French-to-English dictionary. I was eager to read it as an adult, this time in English.

The classic novel is a slow story by modern standards. Country girl Emma, daughter of a pig farmer, is married off to a country doctor, Charles, with whom she achieves a comfortable life. But she dreams of luxury and fine clothing, food and furniture, dancing all night, and above all, she dreams of romance. Charles is boring as dirt.

Bored to death, she embarks upon a series of affairs, during which she spends her husband’s money extravagantly, leading inevitably to disaster for them both. The novel was scandalous in its time (1856),  of course.

It is difficult to understand the psychology of country folks in Normandy a hundred and fifty years ago.  One of the enjoyments of the novel is the insight Flaubert provides into that, although it is impossible for me to know, without considerable research that I am not willing to do, what parts are real and what parts are fantasy.

For example, Charles has to be the most dimwitted cuckold in all of history. Even a dog could see that Emma was fooling around. Could the taboos against adultery in country life have been so rigid as to make it unthinkable and invisible? I doubt it. Adultery has been going on for a very long time in all civilizations, in all classes, in all species. So Charles’s ignorance seems improbable.  Toward the end, when the evidence is overwhelming, he sort of chooses to not understand, which is slightly more believable.

And what of Emma? Could there really be such an empty-headed, unsocialized, untutored woman so possessed of childhood fantasies?  Maybe. Women were, as a cultural practice, untutored and badly socialized, and the only child of a pig farmer would have had limited opportunity for psychological development. Still, it is hard to believe that as an adult Emma could be so utterly bored and self-centered, especially since she is presented as articulate, literate, witty, and talented in music, sewing, and finance.  So she is not believable either.

Add those two ciphers together and you get… nothing believable.  The novel mechanically plods to its inexorable conclusion but I never was engaged with the characters.

The story is presented with mud-on-the-boots realism, so there is plenty of insight into everyday life, which I appreciated in the well-annotated Oxford edition that explained almost every reference to obscure practices, foods, religious ceremonies, medical procedures and news articles.  All that helped in getting a clear glimpse into lives in another century.

Mauldon’s translation, while perhaps not as lyrical as Lydia Davis’s more celebrated one, is good on rendering the sweat and roughness of everyday life in simple and coarse terms, also contributing to a compelling sense of seeing life as lived on the ground in that time and place. I enjoyed that quite a bit.

Finally, I admired Flaubert’s craftsmanship. It is often said that he invented the modern novel, and that could be right. He supposedly did invent the narrative technique of free indirect discourse (FID), wherein a third-person-close narrator temporarily dips into first-person voice to express the mind of a character.

I think the first official use of FID in the modern novel, in 1856, occurs on page 11, describing Charles’s boyhood. He’s walking along the banks of the river, experiencing the great outdoors:

“Opposite, above the rooftops, he could see the vast, pure sky, and the red sun setting. How wonderful to be in the country!”

The first sentence is from the third-person narrator, but who gives voice to the second sentence?  It’s a blend of narrator and Charles himself, and there it is, the historical moment of the invention of FID.

I also appreciated Flaubert’s detailed descriptions of the environment.

“The ballroom was stifling; the lamps were growing dim. People were moving out into the billiard room. A servant climbed onto a chair and broke a couple of panes; at the sound of the shattering glass, Madame Bovary looked round and saw, in the garden, pressed against the window panes, the faces of peasants, staring in.”  (p. 47)

I can experience that. It’s detailed, sensory writing, enjoyable and admirable.

Overall then, the book has many fine qualities that make it a deserved classic, even if strong character and plot are not among them.

Announcing Launch of Psi-fi.net


I have launched my new web site and blog, www.psi-fi.net. That’s where I’ll promote my psi-fi books (should I ever have any) and meanwhile comment on their development.

Awkwardly, at this time, I have zero commercially-published books of psi-fi. For now the site is a platform for the idea of psi-fi.

Psi-fi is an offshoot of sci-fi (and pronounced the same), but the “psi” (Ψ) stands for psychology and the “fi” (Φ) stands for fiction.  Unlike sci-fi, where the emphasis is on pushing the boundaries of science and technology, psi-fi pushes the boundaries of human psychology.

The tradition of psi-fi goes back centuries, though obviously it wasn’t called that, since I just made up the term. One could argue that Homer’s Odyssey is an example of psi-fi. It used fantastical elements (the Cyclops, Circe’s island, the interventions of Athena, etc.), not to speak of the whole Olympian pantheon, to highlight the human condition as experienced by Odysseus.

Psi-fi differs from traditional literary psychological fiction, such as “Crime and Punishment” or “Madame Bovary,” in its use of imagined technological or other counterfactual elements in the telling of the story. Use of those “magical” elements allows a writer to throw light into some of the more inaccessible corners of psychology.

A more recognizably modern psi-fi tale is Cosmographia, published in 1544 by Sebastian Münster. It described imaginary travels to faraway lands where the inhabitants (“aliens”) were monsters, with the heads of dogs or eyeballs on their bellies. Clearly “they” are “other” and “we” are the good ones.

Jonathan Swift’s later Gulliver’s Travels could also be counted as psi-fi in a more subtle way.

A lot of modern sci-fi still follows that pattern and that message, but modern psi-fi should be explicit and realist about the psychological themes. It’s not enough to say merely, “we are the good guys.”

A more familiar early modern example of psi-fi is Herland, published in 1915 by Charlotte Perkins Gilman. In a remote part of the world, a society is populated and controlled entirely by women and birth is by parthenogenesis. Three male explorers land their balloon there. Clearly they will need to be re-educated. Drama ensues.

I have at least five psi-fi manuscripts drafted, two of them in “ready-to-go” condition, three in various stages of readiness. If, after trying, I decide none can be sold commercially, I’ll publish them myself, so eventually, they will all be available to my hungry, clamoring public.

Deep Structure of Capitalism

When I learned that world-famous linguist and political commentator Noam Chomsky would teach a class at the University of Arizona, I signed up.  A lot of people from the northeast come to Tucson for the winter, but most of them don’t come directly from MIT to teach a university course.  Whatever his reasons, I was glad to take advantage.

The course was a series of sixteen hour-long lectures making up a critique of capitalism. Chomsky, who is 88, read prepared lectures from a script. Though he was well-amplified and easy to hear in the large auditorium, his voice was frail and monotone. He never looked up from his notes. He used no visual aids. He just stood at the podium and recited.

The lectures were the epitome of the dry-as-dust stereotype of a droning professor. Older students like me hung on every word, knowing that this is The Man and he knows whereof he speaks. It’s Noam Frigging Chomsky!  But I imagine the 18 to 21-year-old matriculated crowd were thinking, “Oh, God. How many minutes left to go?”

The 150-level course was entitled “What is Politics?” though that question was hardly addressed. According to the syllabus, “… politics is about who gets what, when and how, [and] where.” That defines politics exclusively as economics. I think most political scientists would prefer a more comprehensive definition, one, for example, that also encompasses issues of group identity and values, pursuit of common goals, the structure of government, forums for conversation, the exercise of power, and many other aspects. Okay, I’d go with a narrow definition of politics just for the sake of the course. But then it turned out that the course wasn’t even about that.

Buried further in the text, the syllabus also said that “…the course will examine how industrial state capitalism has come to dominate our thinking as the only way to organize the political economy to satisfy human needs and wants.”  This was what the course was mainly about. It was a critique of late-stage capitalism and how we are “brainwashed” into accepting it without question.

On Tuesdays, the lecture was delivered by Marvin Waterstone, a U of A professor of Geography, and on Thursdays, lectures by Chomsky were followed by a question-and-answer session in which a selected U of A faculty member would toss fluffball questions to Chomsky so he could expound off script for a few minutes. Those sessions were actually the most interesting part of the course.

Waterstone, whose qualifications for co-leading the course were never revealed, filled his hour by summarizing the assigned readings for the week, often by reciting long passages from them verbatim and reducing others to PowerPoint bullet lists. Apparently, students these days don’t or can’t read the assignments.  I cannot imagine what benefit accrues from having someone read to you papers that you already read for yourself. It wasn’t as if he added context, historical perspective, contrasting ideas, examples, or linkages. None of that. He simply synopsized the readings.

How such a travesty passes for higher education was a mystery to me and my heart went out to the young students. It was a perfect example of what I have long suspected, that the purpose of education is to pound the creativity out of you so you will never again have an original thought. (As an ex-college professor, I flatter myself in believing that I worked against type).

At least Chomsky had things to say. I would rather he’d talked about linguistics, the field in which he made his name in the 1950’s by discovering (or inventing, depending on whom you ask) generative grammar, the deep structure of language, the language acquisition device, and many other innovations. But he left all that behind long ago, and since the 1960’s he has been a tireless critic of government, politics, and capitalism in the U.S. He was a prominent voice in the anti-Vietnam-war movement and a scathing critic of the Bush wars in Iraq and Afghanistan.

So what is his grievance?  He believes, with good reason, that capitalism inevitably leads to exploitation of workers and ultimately to government plutocracy, rule by the rich, a situation we have arrived at in America. He has deep roots in Marxism, but he’s not “a Marxist,” if there are even any of those left. That set of ideas has its own internal contradictions, such as the labor theory of value, a foundational idea based entirely on a semantic ambiguity.  But we did read excerpts from Marx, Gramsci, and others. About half the assigned readings were quite valuable.

Chomsky’s preferred political alternative is “anarcho-syndicalism,” a mouthful, to be sure, which I had to look up. He mentioned the idea but did not press it in class. The goal of the course was to critique capitalist-based government in the U.S. without really articulating an alternative. He suggested that students should “take to the streets,” “resist the lies,” and “reject the common-sense assumptions,” and in general return to the activist years of the 1960’s and 1970’s. Maybe this time wearing pussy hats? He wasn’t clear on what we should be protesting, and he offered no agenda.

In the 1970’s we had the draft and that was personal and that was the basis for the street protests.  We said, “You politicians can lie, cheat, and steal as long as you do it quietly, but when you require me to stand up and take a bullet for you, I draw the line.”  We have not yet come to that breaking point again in today’s politics.

So despite Chomsky’s longstanding participation in government criticism, he did seem stuck in time.  He could, and did, talk in detail about a CIA-sponsored overthrow of a government in Guatemala a half-century ago, with names, dates, and incidents. But he said not one peep about Donald Trump and his administration, nor did he have anything to say about any American president or administration since Eisenhower.  My impression was that while Chomsky is extremely sincere, expert, and articulate about his displeasure with government power, he’s already a historical figure himself.

Another serious problem with the course was its tone, which was toxically cynical. For example, we were told that the so-called “War on Drugs” has been, contrary to popular opinion, completely successful. Did you know that? Why? Because its purpose always was to sweep non-economically productive members of society into prison, get them off the streets, and reduce the number of the poor we have to care for through social programs. Measured against that goal, the “War” has succeeded.

Not only is that view unsupported by historical facts or population statistics, it is deeply cynical in attributing the darkest, vilest motives to the government. Other people’s motives cannot be ascertained, only inferred, so it is gratuitous at least, and mean-spirited at worst, to attribute such motives, especially in the absence of evidence. Not to mention that the cost of incarceration is far higher than any cost of social programs for the poor.  I agree that the “War on Drugs” was, and still is, a bad idea, but I don’t jump to the conclusion that it arose as a malevolent conspiracy.

This was a problem throughout the course. The darkest, most evil motives were asserted for anyone who disagreed with Chomsky’s agenda. In his lectures and throughout most of the readings, arguments were consistently one-sided, evidential quotations selective and secondarily sourced, propositions laden with innuendo and presumptive values, assertions with willfully conflated correlation and causation, and so on. All these transparently propagandistic rhetorical techniques were an affront to critical thinking and undermined the credibility of the course’s goals. After the first few lectures, I stopped taking any of Chomsky’s arguments seriously. At first I did some online research and discovered how extremely biased and one-sided his lectures were, then I lost interest. What a wasted opportunity.

But it was worse than merely a wasted opportunity. I was angry and disappointed that this sort of crass propaganda passes for higher education. I felt very sorry indeed for the youngsters in the class who, presumably, were not as able as I to see through the rhetorical fog the course was blowing.

My only consolation was that, basically, I agree with the course’s premises: capitalism does lead to a deceptive and pernicious plutocracy. So maybe it isn’t so terrible if students come to believe that. But that point of view should come after information-gathering and critical thinking, not from having it forced down your throat by a couple of arrogant and disrespectful propagandists who hold academic power over you. These professors acted out the same abuses of power and truth that they accused the government of, but they were oblivious to the irony.

I’m glad I took the course. It sharpened my awareness of how greed trumps all, working against even the very survival of the planet. I guess it also made me feel more helpless than I did before. If I were much younger, maybe I’d do something about it. Take a law degree and go into politics, perhaps; I don’t know. In the present reality, I can only wallow in despair. I don’t have much hope for the youth, if this is the quality of education they’re getting.

Psi-fi: New Literary Genre

I have been wriggling against the sci-fi label since I accidentally wrote my first sci-fi novel a decade ago. I didn’t mean to write sci-fi, but the story had an AI android in it. I don’t even like sci-fi.

I’m a cognitive psychologist who left the academic life for the computer industry to find out if the mind is like a computer. I started writing fiction to dramatize what I discovered about human and AI mental capacities. To do that, in my stories I often use a robot or an alien as a contrast character, because “It takes an alien to understand humans.”

It’s extremely difficult to explain to people, and agents too, that my novels are not really sci-fi. They involve no alien invasions, space battles, plasma guns, warp drives, or rampaging robots. Instead, they are stories about consciousness and its vicissitudes. I stick pretty close to actual technology and AI concepts, with just a little exaggeration.

Where I stretch is in psychological descriptions and explanations, of perception, dreaming, memory, motivation, imagination, creativity, agency, socialization, empathy, and above all, the mind-body problem (how does the immaterial mind connect to the physical body?).

I use robots and aliens the way genetic scientists use “knockout mice,” with a few specific genes disabled or “knocked out” in order to see what those genes do. An android, for example, might be just like a human except lacking in intuition. How would that show up? It’s not about the robot. It’s about human psychology. Try explaining that to your dog.

I’ve strained to find comparable work in the literature. There are some classics, going back to Clarke, Heinlein, and Asimov, that put human psychology at the forefront, but not much in current work. Judging from what’s getting published (by surveying Publishers Marketplace), I would say that 95% of today’s sci-fi is actually in the category of fantasy, and indeed most bookstores shelve sci-fi and fantasy together.

The average reader doesn’t know much about science and cares less. Fantasy is what they want. I described one of my novels to an agent at a conference and his first question about my main character, a physician, was, “What’s his special power?”

There might be, should be, more interest in the mysteries of the mind than there is in science and engineering, because I know for sure that every reader has a mind and a body. All I have to do is make them realize that’s a highly problematic way to exist.

So I’m hereby declaring a new genre: “Psi-fi,” where “psi” stands for psychology (which is not a real science, regardless of what they tell you). The term is already in light use, but not in a literary sense, as far as I can tell.

A scientific interest group in Lahore is called psi-fi (www.facebook.com/LUMSPsiFi/) but they’re not involved in reading or writing fiction. There’s a “psychedelic music” group apparently obsessed with the I Ching (http://www.psy-fi.nl/) but as far as I can make out, they neither read nor write literature. A professor of philosophy in Texas apparently wrote a study of “the intersections of science fiction, superhero comics, and the paranormal” that incidentally uses the term, “psi-fi.” (http://boingboing.net/2011/01/26/psifi.html). But I don’t find anyone using the term in the way I intend, as a specific genre of psychological fiction against a technological background. So I’m taking it.

Psi-fi is hereby deemed a genre of contemporary literature. Now if only Barnes & Noble agreed with me.

Addendum: Psi2

I really should put forth a psi-fi “manifesto” at this point. Everybody has a manifesto. Alas, I don’t have one. I can, however, offer a list of a few topics I consider appropriate to feature in a psi-fi novel:

Perception and reality: what’s the difference?

Chaos and pattern – in the eye of the beholder?

Intersubjectivity and its variants, its absence, its origins

Mortality – what is it?

Memory as fabrication

History as collective memory

Time and change (as experienced)

Self-awareness, metacognition and higher-order thought

Consciousness – kinds of, states of, absence of, conditions of…

Consciousness – natural vs artificial

Madness and the social construction of reality

Personality – what is it?

Individuality – myth or reality?

Emotions and feelings. What good are they?

Intuition and conceptualization

Creativity (and counterfactual imagination)

Free will vs randomness vs determinism vs self-delusion

Knowledge, certainty and doubt

Knowledge vs belief

Language: Social construction, language games, Deep structure

Community (family, tribalism, Gemeinschaftsgefühl)

Music (all the arts) – the proprius, Necker cube, Gestalt formation

The mind-body problem, intercorporeality, Merleau-Ponty

Spatiality and movement: alternatives to Kant? Einstein?

Entropy vs life, vs knowledge vs information

Rationality vs the Dionysian

Logic and reason – was Hume correct? Critical thinking.

Philosophy of science and constructed reality

The Dream that grips us all

The black hole and the folds of experience

Self-relating subjectivity per Hegel

Intentional inexistence per Brentano

Accommodation of the self to reality and vice-versa

Intrinsic motivation

The telos

Egocentricity vs self-transcendence

Radical subjectivity (Ramana Maharshi)

Gibson’s affordances

The ox-herding pictures

The delusion of self-efficacy

Greed – can it be stopped? Mitigated? Excised?

Love vs reason. Why don’t they mix?

Magic – how should it be defined?

The Turing test (and its successors)

Convergence of biology and technology vs theory of evolution

The construction and practice of gender

Lies vs truth. Why do we care? – in practice, per Wittgenstein

Science as a special kind of conversation

The homunculus

Could this be the genre slogan? “Use no oil!” Has a nice ring to it, I think – almost as good as a manifesto.


Natural vs Artificial Intelligence

This briefing by Nick Bostrom on the dangers of artificial intelligence takes up a serious and legitimate question: Should we be more cautious as we go about trying to improve artificial intelligence? What if an AI became so smart it decided to take over the world? Silly? I think so. But it’s a question worth exploring, if only to dispel long-standing fear of the mythical “Frankenstein Syndrome.”

Unfortunately, the book is written by a philosopher with an engineering bent, without, apparently, much understanding of human psychology (real intelligence).  Consequently, the book is mostly a sterile exercise and often unintentionally humorous.

Perhaps the most fundamental problem is the failure to define intelligence of the natural kind. Bostrom unthinkingly uses the I.Q. index as a measure of it, but anyone who has studied the matter will agree that IQ equals intelligence only as a matter of convenient social discourse. People who take IQ tests produce scores that follow a normal distribution, and that’s a scientific fact, but there is no theory or explanation of why, or whether, answers to the questions on an IQ test have anything to do with intellectual competence or “smarts,” whatever those might be. Here’s an example of the kind of question you might find on an adult IQ test:

Rearrange the following letters to make a word and choose the category in which it fits.

RAPETEK

A. city
B. fruit
C. bird
D. vegetable

Correct answer: bird (parakeet)

If you can answer such a question, what does it mean? “Rapetek” is not even a word, so you can’t be expected to know it. Perhaps the correct answer shows you have experience with words, letters, and conventional hierarchical categories of common objects. Does that make you “smart?” Maybe. Another good answer is “rape,” and none of the categories presented fits it. Is that a less smart answer? (The question did not say I had to use all the letters presented.)
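The point about using only some of the letters can be made precise with a multiset comparison: a word is a legal answer if each of its letters appears at least as often in the given jumble. A minimal sketch (the helper name `can_form` is my own, not anything from an actual IQ test):

```python
from collections import Counter

def can_form(word, letters):
    """True if `word` can be spelled from `letters`, using each
    letter no more times than it appears in the jumble."""
    # Counter subtraction drops non-positive counts, so an empty
    # result means `word` needs nothing the jumble lacks.
    return not (Counter(word.lower()) - Counter(letters.lower()))

print(can_form("rape", "rapetek"))   # True
print(can_form("rapes", "rapetek"))  # False -- no "s" available
```

Under this subset rule “rape” qualifies, which is exactly the complaint: the question never says all the letters must be used.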

The bottom line is that there is no generally accepted explanation for what natural intelligence is. An IQ score is merely a convention for use by educational and legal systems but it explains nothing. If you’re going to write a book about “superintelligence,” I would say you have to do better.

In a related vein, Bostrom seems to have never given two thoughts to the nature of intuition, creativity, agency, subjectivity, empathy, emotion, intrinsic motivation or aesthetics, just to name a few faculties of the intelligent mind that seem important. You would think a philosopher would be at least minimally familiar with current concepts in consciousness studies, such as the debate over qualia. He assumes memory is about storage and retrieval of data, which many people believe, though that is not supported by the scientific research (on humans).

The author proceeds blithely as if there were no question about what intelligence is, so what does he think “super” intelligence is? It seems to mean symbolic problem-solving at a rate much greater than humans can accomplish. Problem-solving slips in there as a new, undocumented redefinition of intelligence. Even if it were a sound one, why would “faster” equal “smarter?” What’s the hurry?

Such shortcomings, and many others, including rampant anthropomorphism, leave the discussion ungrounded, a mere exercise for its own sake, leading to nothing. As if that were not bad enough, the writing is execrable. Consider this description of how a “super” AI might solve a problem:

“…the programmer would simply specify a formal criterion of what counts as a success and leave it to the AI to find a solution. To guide its search, the AI would use a set of powerful heuristics and other methods to discover structure in the space of possible solutions. It would keep searching until it found a solution that satisfied the success criterion.” (p. 186)

In other words, the AI would use “methods” to search for a solution. I would do the same myself! Nothing is revealed by the author’s obfuscatory verbiage.
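Stripped of the verbiage, the quoted passage describes nothing more than generic generate-and-test search: enumerate candidates, optionally order them by a heuristic, and stop at the first one that meets the success criterion. A minimal sketch of that generic pattern (the names are mine, not Bostrom’s):

```python
def search(candidates, success, heuristic=None):
    """Generic generate-and-test: try candidates (most promising
    first, if a heuristic is given) and return the first one that
    satisfies the success criterion, or None if none does."""
    if heuristic is not None:
        candidates = sorted(candidates, key=heuristic)
    for c in candidates:
        if success(c):
            return c
    return None

# "Find a number whose square is 49," guided by distance from 49.
root = search(range(100),
              success=lambda n: n * n == 49,
              heuristic=lambda n: abs(n * n - 49))
print(root)  # 7
```

Which is the point of the complaint: “use powerful heuristics and methods to find a solution satisfying a criterion” describes virtually every search program ever written.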

The book is “highly recommended” by Bill Gates, on the front cover. Maybe that should tell you something. Better choices might be “Artificial Intelligence: A Modern Approach,” by Russell and Norvig, or “The Cambridge Handbook of Artificial Intelligence,” edited by Frankish and Ramsey.

Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies, Reprint Edition. New York: Oxford University Press. 415 pp.

Gratuitous Poetry

Margaret Atwood cut her writing teeth on poetry and it shows in her novel, The Blind Assassin, perhaps too much. Her phrases are carefully constructed, a virtue in any writer, but Atwood’s choices often stand out as slightly too clever while not particularly insightful. As the book opens, the narrator’s sister, Laura, has died, and the narrator, Iris, writes about Laura’s novel:

“Hard to fathom, in my opinion: as carnality goes it’s old hat, the foul language nothing you can’t hear any day on the street corners, the sex as decorous as fan dancers – whimsical almost, like garter belts.” (p 39)

I enjoyed “whimsical like garter belts” as a phrase, but what does it mean? Are garter belts whimsical? And even if they are, Laura’s sex scenes were “almost” that whimsical, meaning what?  This is one example of hundreds and hundreds throughout this long novel, of phrases that catch your eye but on closer inspection are close to nonsense.

“Such a thin book, so helpless. The uninvited guest at this odd feast, it fluttered at the edges of the stage like an ineffectual moth.” (p. 40)

Arresting image, until you ask yourself what an “ineffectual moth” is. What would an effectual one be? And if you’re wondering why the narrator is still waxing on about her sister’s novel, the answer is that Atwood waxes. That’s why the book is too long.

In long, excruciating backstory, we follow the lives of the two sisters from when they were wealthy teenagers in a small town in eastern Canada to the death of Iris seven decades later. What happens in between are the two world wars and the Depression, with the appearance of soldiers, businessmen, love affairs, marriages, babies, households, and the stuff of life. As in much “literary” fiction, nothing really happens. It’s just ordinary everydayness piled high and deep. Getting through the novel is, as a colleague commented, an Iditarod of reading.

More than half of the eighty or ninety short chapters open with a weather report, followed by detailed description of the scenery, and continue into long, lush descriptions of walks in the town or country, food and drink, shopping, clothing, babies and children.  Such material may hold particular fascination for some readers, but it was suffocating for me.

Are there redeeming virtues? Yes. The novel did not win the Booker Prize for nothing. One interesting aspect is the narrative structure. The whole novel is presented as a diary, or letter, addressed to someone; we are not told whom until the very end. The ending is contrived and clichéd, swooping into the final few pages like a deus ex machina.

Within this long diary, Iris, the ostensible writer and first-person narrator, tells the story of her difficult lifetime relationship with Laura, her wild sister, who died in the first chapter in a car accident that always smelled of suicide. Interspersed throughout the diary is a novel, called The Blind Assassin, supposedly the novel that Laura wrote. It is a cheesy sci-fi adventure, written in third-person narration, involving swordplay, monsters, space travel, and foreign worlds. It is stereotypical nonsense: badly written, clichéd, and pointless. However, the astute reader notes that it would take considerable skill for someone like Atwood to write that badly on purpose, so it is interesting in that regard – only.

Finally, the embedded novel, The Blind Assassin, is not simply inserted into The Blind Assassin, but rather told as a story, or a series of stories, by an unnamed man who claims to be a writer, to an unnamed woman, whom we guess to be Laura. The point-of-view narrator for that part seems to be Laura, who asks the man repeatedly to continue with the story. But what is Atwood’s point of view on that point of view? That’s an interesting and tricky question, not adequately dealt with, which makes the structure an interesting piece of experimental writing that in the end breaks the implicit contract with the reader that says third-person narrators are always reliable. Still, I give points for the effort.

Another virtue of the novel is Atwood’s skill at finely detailed description of fixed scenes and especially of photographs. Atwood seems drawn to ekphrasis, a poetic term for written description of a picture or a work of art. The book opens with a vivid description of a photograph and that is a recurring theme. Ekphrastic writing tries to “tell a story” about a photo, film, or painting, a type of writing that is well-suited to Atwood’s narrative voice.

Finally, as mentioned, much of Atwood’s descriptive writing involves highly poetic language. Take an arbitrarily selected 154-word paragraph:

“Today I had something different for breakfast. Some new kind of cereal flake, brought over by Myra to pep me up: she’s a sucker for the writing on the backs of packages. These flakes, it says in candid lettering the color of lollipops, of fleecy cotton jogging suits, are not made from corrupt, overly commercial corn and wheat, but from little-known grains with hard-to-pronounce names — archaic, mystical. The seeds of them have been rediscovered in pre-Columbian tombs and in Egyptian pyramids; an authenticating detail, though not, when you come to think of it, all that reassuring. Not only will these flakes whisk you out like a pot scrubber, they murmur of renewed vitality, of endless youth, of immortality. The back of the box is festooned with a limber pink intestine; on the front is an eyeless jade mosaic face, which those in charge of publicity have surely not realized is an Aztec burial mask.”

We notice it is pure description, a propos of nothing, as so much of the book is, and one is tempted to skim right on past with annoyance. But if a reader were to take the time to notice, some lovely constructions are buried in that pile of verbiage:

“candid lettering the color of lollipops” is a visual, vivid, creative, and original phrase well-worth savoring.

“Candid lettering,” “color of lollipops”: two syllables followed by three, twice over (trochee + dactyl), with alliteration! Not bad at all. The rhythmic nature of the phrasing is no accident, and such constructions do not grow on trees. The astute reader must whisper, “Bravo!”

Is such a construction necessary, or even desirable, in a book that’s supposed to be a novel, a “dramatic” tale (though it contains no drama) where story is supposed to be king? That is a separate question.

“fleecy cotton jogging suits” is another fine phrase — I can almost hear the band playing that tune. Say it out loud and you’ll enjoy it. And it invokes both tactile and kinesthetic senses to boot.

“corrupt, overly commercial corn and wheat” — less good, but still nice.

“little-known grains with hard-to-pronounce names”

On this one, she should have said “difficult” instead of “hard-to-pronounce.” Try it out loud both ways. Is my version too obvious?

Some of Atwood’s phrases are visually arresting, even when not especially rhythmic:

“festooned with a limber pink intestine” — “festooned” is a lovely word, but followed by that particular noun phrase, well, it grabs you in the eyeballs (if not the guts).

So overall, I say that this 154-word paragraph does pay its rent, but that does not mean it should have been included in this novel. Rather, it could be construed as self-indulgent wordplay that shows contempt for a reader vainly searching for a story.

Atwood, Margaret (2000). The Blind Assassin. New York: Doubleday/Anchor. 518 pp.