The brain is weirder than the sky…

We’ve been marking the launch of occasional Dabbler Bryan Appleyard’s new book The Brain is Wider than the Sky with a mini-Appleyardfest (read Brit’s review here and an exclusive Q&A with the author here). To conclude it, here’s Elberry on the human imagination…
Signed copy competition winners – congratulations to Dabbler Book Club member Sandra Paterson and to John Halliwell, who wins the copy reserved for the League of Dabblers.

A colleague saw me reading this book, and, glimpsing the cover, said: “Whoah – The Brain is Weirder than the Sky!”

“Wider,” I corrected him. But weird would do well, as in wyrd and becoming, the uncanny. The title is from an Emily Dickinson poem. It is not an idle borrowing; in a book devoted to science and modern culture, Appleyard returns to the right weirdness of poetry, to Dickinson and Wallace Stevens. He is wide-ranging and unpredictable; he writes here about celebrities, Apple, Microsoft, artificial intelligence, brain scanners, David Hockney. But this is typical of Appleyard at his best, for example his article on St Pancras: he draws as wide a circle as possible, to include the surrounding culture, what went before, what may come after. He thinks in spirals. Thus, the introduction:

This is a book about, in roughly this order, neuroscience, machines and art. It began when, in August 1994, I visited Microsoft in Seattle and spent a couple of hours with the company’s co-founder and then chief executive officer, Bill Gates. In the course of the visit, something began to form in my mind. It was too vague to be called a thought; rather, it was a mood, an anxiety, an uncertainty, a riddle, but it seemed to me, even in my vagueness, to be fundamental to the nature of the new world that was then just being born and in which we live.

But Appleyard, you may say, what is all this about vagueness and uncertainty, aren’t you writing a book? A serious book? Must you be so awkward? Yet it is a useful awkwardness. There is something not wholly modern about Appleyard, so while he is conversant with science and technology, he poses awkward, unexpected questions. In Understanding the Present, he asked: is everything algorithmically compressible, can mathematics model reality? And if so, is our experienced reality superfluous, an unnecessary elaboration of pristine binary code? In Brain, the focus is on technology; the technology, however, carries the same assumption, that the real can be represented and satisfied by the virtual, by manmade tools; and indeed that these tools can surpass their makers:

Human consciousness may be no more than strings of zeros and ones. Hans Moravec of the Robotics Institute of Carnegie Mellon University imagines a robot surgeon sucking out a human brain, reading all the information it contains and then uploading it on to a computer. The person awakens to find himself just as he was and unaffected by the fact that he is now inside a machine.

[…]

Robotics could provide bodies for our uploaded minds, but it would be far better if these superior bodies had superior minds. This is exactly how Moravec sees the future, as a place where biological evolution ends and machine evolution begins. ‘Unleashed from the plodding pace of biological evolution, the children of our minds will be free to grow to confront immense and fundamental challenges in the larger universe. We humans will benefit for a time from their labours, but sooner or later, like natural children, they will seek their own fortunes while we, their aged parents, silently fade away. Very little need be lost in this passing of the torch – it will be in our artificial offspring’s power, and to their benefit, to remember almost everything about us, even, perhaps, the detailed workings of individual human minds.’

I can hardly wait. If consciousness can be wholly represented in binary, then it can presumably be wholly accommodated by, and replaced with, technology (let us build a city, and a tower, whose top may reach unto heaven). Already, we have lesser technologies, call trees:

These machines – starting with the telephone and ending with the various networked devices that have insinuated their ways into our daily lives – exert a twofold pressure. On the one hand, they seduce us, we want them to contain, include and involve us; on the other hand, they demand that we become more ‘machine readable’. We pay for inclusion and involvement by becoming more like machines.

We use these tools to engage with the world; and in time they come to mediate the world. The more extensive and nuanced our technology, the closer it insinuates itself into the self, until we cannot extricate ourselves, having no selves to extricate. Dogs sometimes resemble their owners; less happily, we tend to resemble our tools: “the cyborg is now the ideal to which all our most advanced technology is tending.” One sees this in advanced bureaucracies: the machines have won – not the machines which enslave us, but the machines we become. The most successful employees, in such systems, are the most mechanical, efficiently unquestioning; not at all awkward, no longer fully human (one could say that human beings have the unusual ability to cease to be fully human). This is far from Hamlet’s “what a piece of work is a man”: now, technocrats gush about replacing men with machines; and authority resides in the inhuman, not in nature but in machines. Anything not constructed by man is despised. The authentic human voice is disregarded as of no consequence, as one sees in the misleadingly-named Humanities, where modern academic prose aspires to a vilely convoluted ugliness, and anything remotely human is dismissed as “not academic style”. The human is disreputable; this to the point where no one can distinguish between fake and “authentic” academic articles – because, in a sense, they are all fake.

The anti-human tendency reaches its apogee in the celebrity:

Our first robots are not made of metal, they are made of flesh. The men without chests, foreseen by C.S. Lewis, have arrived. Celebrities are a dry run for the fully machine future, our best robots yet. They are, in fact, what roboticists would call ‘affective robots’; they display emotions with which humans can identify. Whether they actually experience these emotions or not is beside the point.

At this point, any genuine humanity shocks, and offends. The authorised version is the machine. It is not surprising that scientists often seem to be missing some part of the soul, to be immune to art, to wonder; so Bill Gates:

He seemed to find art itself a kind of puzzle. When he talked about digitising all the great paintings of the world so that they could be available on The Highway, he did so with the air of a man who was interested in art as a phenomenon, rather than something felt. It was something whose significance in the lives of others he found puzzling. It was a code to be deciphered.

This attitude is not uncommon among scientists: for example, a chemist I know regards films and books as nothing more than information, and if he knows how a story ends he is incapable of enjoying the narrative. His favourite film is Transformers, because it has robots and explosions. Such masters will create the new world, where machines will supersede the human:

The last machine we will ever build will be able, in theory, to boot itself into successively higher levels of intelligence and solve all the problems of our present condition – answering the outstanding questions we have about the nature of matter and of consciousness, curing our diseases, rendering us medically immortal, painting our masterpieces and writing our poems.

If indeed human consciousness is just a load of zeros and ones, then this may be so. However, it isn’t, so it isn’t. Computers can only process quantitative data – numbers; the experience of being alive is qualitative. Money is supremely quantitative – so a five-pound note is equivalent to five one-pound coins. The experience of being alive cannot be wholly modelled. It can be represented, but the representation is not the thing itself. If I could see things exactly the way you do, I would be you.

It is not possible to abstract the human experiencer from our reality. Quantitative methods (i.e. science) are an attempt to bypass the zen koan: if a tree falls and no one hears it, does it make a sound? And this is why Appleyard’s anecdotal approach works – he makes no attempt to remove himself from his experience; he allows the circumstantial details, the colour and texture: these are part of it, being as it is a human book, a good book. As he writes of Dickinson and Stevens: “the place where the mind ends and where the world begins is unknowable.”

He closes with a chapter about David Hockney, who has adapted Apple products to his own needs, the artist master over his tools. That Hockney can use the iPad without compromising his art suggests the problem is not that we have technology, but that we adore and serve it. Perhaps it is only a matter of time before Apple release the i-dol. The problem is that our idea of the human, once the image of god, is no longer reputable. So, almost a century ago, Rilke (tr. M.D. Herter Norton):

Only in boilers now do the former
fires still burn, heaving the hammers that grow
always bigger. But we, we diminish in strength, like swimmers.

Appleyard is right to write as he does, aware of all the weird detail a machine would miss; and he is right to look to Hockney, and to Emily Dickinson and Wallace Stevens. It is in the human imagination that all this begins and ends.


About Author Profile: Elberry

elberry@thedabbler.co.uk

14 thoughts on “The brain is weirder than the sky…”

  1. elb
    December 7, 2011 at 12:55

    can you indent one of Yard’s quotes so it appears as a quote, it’s this one:

    He seemed to find art itself a kind of puzzle. When he talked about digitising all the great paintings of the world so that they could be available on The Highway, he did so with the air of a man who was interested in art as a phenomenon, rather than something felt. It was something whose significance in the lives of others he found puzzling. It was a code to be deciphered.

    • Brit
      December 7, 2011 at 12:57

      Done.

  2. Worm
    December 7, 2011 at 13:21

    So would it be right to say that Bryan is, in a roundabout way, bringing it all down to a fight of religion against science, where you have to decide to have a belief in which side offers the best form of immortality? Interesting discussion as to whether BA would think that “The last machine we will ever build” would be ipso facto ‘God’ or just a computer.

  3. Peter
    December 7, 2011 at 14:07

    for example, a chemist I know regards films and books as nothing more than information, and if he knows how a story ends he is incapable of enjoying the narrative.

    Is it possible it isn’t so much that he sees the narrative as information but that he sees everything of interest and meaning in life as problems to be solved? Modern technology has been a wondrous boon for man the problem-solver, even though it doesn’t solve nearly as much as he thinks, but I’m not sure it’s the root of it.

    Several decades ago, before Gates and Jobs, I had a debate with two young architects, with me playing the curmudgeon lamenting the banality of modern architecture. They explained articulately how all the training and mindset of modern architects for several generations had been geared to solving problems, mainly social problems stemming from dark, crowded, unhealthy housing. It was all about light, cleanliness, conformity and function. When I pressed them about beauty, aesthetics, community, the human soul, etc. they readily agreed with my suggestion that, if a cathedral or the Parliament Buildings were to burn down, it would be impossible to replace them, as so many of the crafts and skills had been lost forever. Despite their honesty, they were really as untroubled as I suspect Bill Gates would be if someone bombed the Louvre. He’d just see an opportunity for some new software that would, he would predict with total conviction, make the world’s youth all art connoisseurs. I felt like Charles Ryder cut to the quick by the sale of the Marchmains’ London family mansion to make flats.

    Perhaps the apogee of this alliance between modern technology and a problem-solving mentality is the transhumanists who dream of vastly expanded lifespans. The huge advances in medical technology spawned by computing has convinced them it’s just around the corner. Anyone who has ever engaged them will know that they will bandy each and every practical and even experiential objection with a robotic faith in our ability to solve them all, and they will do so with the sunny, “we love you” patience of a Jehovah’s Witness and the “can-do” fundamentalism of an American engineer. It’s only when you suggest life has a decidedly crappy side and that an exit after three score tiring years and ten has a lot to be said for it that they begin to see they are up against an advocate for darkness.

  4. Gaw
    December 7, 2011 at 15:10

    Fears about what machines will do to people’s minds are as old as machines. Yet humans remain pretty intractable. I don’t think these technology entrepreneurs and their companies are really in control of much and I think they’re painfully conscious of this most of the time.

    Their companies are tossed around by the choices of millions of individuals; they sometimes even find themselves capsized by changes in what suits these humans. Look at the vagaries of Apple. Then there’s Microsoft, which is a shadow of what it was when Gates was interviewed by BA in 1994 (will it even be around in the same form in ten years’ time?). Not to mention many supposedly dominating and controlling companies that customers simply walked away from; AOL and MySpace spring to mind.

    An excess of ‘call trees’ seems to me a sign of commercial weakness, driven by the need to cut costs in a competitive environment. However, they’re increasingly being abandoned outside the bargain basement: the costs saved aren’t worth it; customers don’t like them, they walk away.

    I work closely with people who earn a living in the digital world and the human is at the forefront of what they do. It seems slightly ridiculous even to make this observation; they can only be useful if their product appeals to humans and fits human ways of doing things.

    What’s more, it seems to me that this contemporary technological culture is almost matter-of-factly concerned with disruption and freedom. One interesting indicator is that hiring talented creative technologists is increasingly difficult for big companies: technology has made these institutions less important as places you need to go to in order to get things done and get paid for doing them.

    The ‘maker’ movement (here’s a recent Economist article on it: http://www.economist.com/node/21540392) is enabled by technology. It may well be the first signs of a new more anarchic, distributed and, though I use the description tentatively, ‘human’ form of production. I suspect technology is going to become more and more enabling of human-scale activities.

  5. Peter
    December 7, 2011 at 16:36

    they can only be useful if their product appeals to humans and fits human ways of doing things.

    With respect, Gaw, that’s an ambiguous assertion. Other than from a libertarian perspective that equates market success with well-being (or rather asserts it’s the only way to measure it), how can the modern, frantic, addictive world of mass texting/cellphones/social networking, with their compulsive need to be in touch 24/7 and largely banal communications, be said to appeal to a “human way of doing things”? Surely it has left the functional world of speedier purposeful communications far behind?

    • Gaw
      December 7, 2011 at 17:02

      Why assume ‘human’ necessarily equals good or wholesome or efficient? I think I’m arguing that humans in the generality tend to remain intractably unlike machines as well as damnably difficult to control and that this may be becoming more the case.

      BTW doesn’t a Twitter timeline evoke images of monkeys hooting and chattering in the trees?

  6. Peter
    December 7, 2011 at 17:57

    Why assume ‘human’ necessarily equals good or wholesome or efficient?

    I don’t. Murder, jealousy, betrayal and genocide are very human. But to throw it back at you, why describe whatever is all the rage at any given moment as ipso facto human?

    • Gaw
      December 7, 2011 at 19:49

      As humans are the only creatures who get caught up in what happens to be ‘all the rage at any given moment’ is it reasonable to describe the current rage as particularly un-human?

      To take a longer view, I struggle to think of a more machine-like institution than the medieval monastery, which I think I’ve read (I forget where) described as a ‘prayer-machine’ (or something along those lines). And is there a more cog-like existence than that of the pyramid-building slave (but I’m ready to be corrected by the author of this post, who has some insight into that subject)? And the ghetto seems socially a very 24/7 sort of place.

      But my main point is that I think people and their peculiar brains are far more in control of the way life is lived than tech moguls.

  7. Mark
    December 7, 2011 at 19:24

    I very much like this review which strikes me as spot on if I understand it correctly. Bryan Appleyard’s books and articles are in some regards the literary equivalent of street photography which – in the hands of a master anyway – tells us something of what it was like to be alive right at that moment, which may be in an era or place very different from our own. He’s good at this because in looking at the way we live and think, and especially at the big ideas which flow through our age, he brings a much richer range of references to bear than most other writers. It’s easy to get caught up in IT and the torrents of words that pour out of Silicon Valley, but Mr A is after bigger fish.

  8. elb
    December 7, 2011 at 20:58

    The Yard’s book is more nuanced & subtle than my review. i dislike most technology and would happily live in a cave, beating bears to death with a club and howling at the moon. i believe Yard isn’t anti-technology but rather concerned that technology should not overly determine our sense of what it is to be human. Human beings seem to fashion their idea of the human by reference to other things – e.g. god or gods, and now machines. Gods were if you like a “supreme fiction”. Phoebus Apollo is not dead, ephebe.

  9. andrewnixon@blueyonder.co.uk
    December 7, 2011 at 21:57

    Gaw makes some very interesting points that cast a new light on it.

    There’s a gap between the noble intentions of technology innovators and the consequences of their inventions, a good example being the way that social media has created new ways for teenagers to bully each other, sometimes to the point of suicide. Bullying is all too human; the innovation makes it worse. Where there is hubris, it is in the confidence with which inventors assume that humans will use their inventions in the way they intend them to be used.

    (Incidentally, Bryan’s book raises more questions than it delivers opinions. In the surrounding publicity he seems more argumentative, probably for marketing purposes).

  10. Peter
    December 8, 2011 at 00:42

    and would happily live in a cave, beating bears to death with a club and howling at the moon.

    As one would expect of a species that evolved to be hunters and gatherers. But…oh, never mind.

  11. elb
    December 8, 2011 at 09:00

    i don’t evolve, i merely persist.

Comments are closed.