Category Archives: psychology

Assemble Your Own Belief System

Since my adolescence, I’ve never had more than an objective interest in religion. As a little kid, I trusted my Dad as I recited the Lord’s Prayer with him at night while he tucked me in. Back then, it was all the God Blesses wished upon my family members that felt the best. They were simple wishes of love, not complicated by old-sounding words that I sometimes couldn’t remember.

Back then, my baby-kid mind didn’t have any picture of God in it while I followed along with my Dad, saying “god bless Kim, and god bless Poppy”. It was just another way to say “please bless them and take care of them”. Back then, it was easy to ask an invisible, unknown authority for help. You were used to trusting and relying on someone bigger than you. Maybe as I looked at my Dad’s face while repeating the blessings, I was really asking him to protect everyone. It was him I trusted to protect us.

By about the age of eight or nine, I started appreciating some principles of science, and I was especially curious about dinosaurs and archaeology. Finding a box full of National Geographic magazines in my grandpa’s basement was like discovering buried treasure. I flipped through all those National Geographics with enthusiasm. I learned who Dr. Louis Leakey was and why the million-year-old skulls he dug up in Africa were important discoveries. I saw the colour, age, and vibrancy of distant cultures, and I learned about the shape of the world. I didn’t understand all the words in the articles, but they showed me a wide, strange world outside the bounds of my town. The world I lived in was just a tiny link in a chain of rises and falls that had happened over thousands of years, and as far as I’d seen, nothing in the modern world matched the wonders of ancient Egypt. It was scary and exciting to think that the physical world was such a vast, complicated, alien, and almost uncountably old place.

By my tweens, I regarded religious fervor and religious believers – especially those in my immediate family – with scepticism. To me, God and Jesus were unbelievable fantasies for others to adhere to, but they weren’t authentic for me. At that young age, I had very black-and-white thinking: I saw no difference between the incredible stories written in the Old Testament and the lying, hypocritical TV con artists who tried to evangelize ten dollars’ worth of prayer out of my aunties’ purses. I decided that I knew the difference between reality and fantasy, and I could smell BS pretty well.

I have one memory of attending Sunday School in Grade 3: I remember being confused by the blonde, short-haired, clean-shaven Jesus Christ in the religious storybooks we were given to read. Jesus looked like a Marine or one of the Beach Boys, not like a zealous, self-sacrificing Son of God. Even at eight, I knew that the image was a falsehood and a manipulation. Thank God one of the kids started eating the library paste and cracking us all up; otherwise, Sunday school would have had no redeeming moments at all.

My suspicion of that Beach-boy-Christ was definitely my dad’s religious cynicism seeping from my pores. My dad was his own leader, writing his own commandments for us kids to follow, with my mother as a generally-passive follower. Dad was stubborn and proud, and had no time for interference from any omnipotent, invisible organizations, or their earthbound representatives.

Nowadays, I tend to look at Christianity as an outsider, the way an anthropologist from one culturally-biased background might view a different civilization. I consider myself to be standing at the edge, observing from a distance, although truly, each of us stands squarely at the centre of our own biases.

Other Ways of Understanding Things

By eighteen, I understood some basics of physics, electronics, and radio, and had read a little about Sigmund Freud. I was becoming keenly aware of the disparity between the external world and my internal one. Externally, sunlight filtered through leaves on the trees outside my bedroom window, and RF radiation was all around me, resonating through everything and beaming out into space. Internally, my life was contradictory, and the adults I knew were mostly hypocritical and flawed. We each had muddled, conflicted, and complicated mental networks. Maybe they could be explored and untangled with time and care.

As I verged on adulthood, I anticipated the freedom and absolute responsibility I might face in the years ahead. Would I find someone to love me? I was sure it would be a girl, but would there be love? Would I find a career I would enjoy? I had no clear idea what I would do. I only knew I loved visual art and stories. Fantasy and escapism had practically saved my life, insulating me from the hard realities that faced me too early. Could life improve and would I be happy? Maybe I really wanted to escape and to take a chance, but I wasn’t quite ready.

Looking through the lens of science, I’d started to feel what might be the same wonder that I’d read theologians express when contemplating God’s creation. At the H.R. Macmillan Planetarium, I looked at a poster-sized photo showing a densely-packed field of glowing dots of light, and I learned each glowing dot was an entire galaxy. There were thousands of them in the large photo. That was amazing enough, but the real punchline was that the photo had been blown up from a one square centimeter piece of film. The vastness of that scale just blew my mind. Outer space still fascinates me.

Years later, I read that St. Thomas Aquinas wondered “how many angels can dance on the head of a pin?” Whether it was a sarcastic comment or a serious one, I’ve decided that even if science one day delivers an answer to dear old St. Thomas, the act of wondering at the vastness of the cosmos is not too dissimilar from musing on angel-pin occupancy in pursuit of almighty knowledge.

All of these disparate realms stimulated my curiosity. They made me wonder what mysteries were around the next corner and how much farther humans could go in the future.

Nothing to Tie it All Together

By about the age of nineteen, I began to realize that I saw no overarching framework to unify all the different kinds of information and values I’d gathered from my disparate sources. Nothing seemed to unite the physical world with the mental or spiritual worlds, and nothing brought the ideas of faith together with logic, or equated belief with common sense. All my little networks of facts and so-called truths seemed to be spoken in different languages, or measured using different scales.

The Foundation level of my art school education helped me to begin integrating aspects of art, science, and perception. My first year of art college brought novel unities between physics and perception. Initially, this blending started to emerge through my education in the experience of colour.

Hearing my art school instructors talk about the electromagnetic spectrum was the beginning of my understanding of the integration of art, science, and technology. Seeing how coloured lights mixed to create secondary colours (and even white light) helped me to connect the sensations of experiencing colour with the idea of the electromagnetic spectrum, wavelengths, and visual perception. The dogmatic divisions between art and science started feeling artificial, and it was a wonderful realisation – like discovering a grand unifying secret. The integration of new ideas gave back more than I expected: the whole was truly greater than the sum of its parts.
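
(As an aside, here is a minimal sketch of that additive mixing idea – my own illustration for this post, not something from my coursework. If each coloured light is crudely modelled as an RGB triple, overlapping lights simply sum their components, the primary pairs produce the secondary colours, and all three primaries together approach white. The mix_lights helper below is purely hypothetical.)

    # Purely illustrative: model each coloured light as an (r, g, b) triple and
    # mix lights additively by summing components, clamped to the 0-255 range.
    def mix_lights(*lights):
        r = min(sum(light[0] for light in lights), 255)
        g = min(sum(light[1] for light in lights), 255)
        b = min(sum(light[2] for light in lights), 255)
        return (r, g, b)

    RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

    print(mix_lights(RED, GREEN))        # (255, 255, 0)   -> yellow, a secondary colour
    print(mix_lights(GREEN, BLUE))       # (0, 255, 255)   -> cyan
    print(mix_lights(RED, BLUE))         # (255, 0, 255)   -> magenta
    print(mix_lights(RED, GREEN, BLUE))  # (255, 255, 255) -> white light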

Tendencies, Handed Down or Cultivated

The reason that I craved integration was likely because my world had always felt so fragmentary and disjointed. Life seemed rife with contradictions, and nobody really made it all make sense for me. My Dad, James, was a technically-minded man who never talked about subjective, interpretive experiences. Since we’d arrived in Vancouver in 1975, he’d been an Electronics Technician at the TRIUMF particle accelerator at UBC. Every day, he dealt with electricity, mechanics, and proven principles. He preferred ideas that seemed solid, immutable, and reliable, and he believed in math, logic, and common sense. He was the first person who told me about the law of conservation of energy (“energy cannot be created or destroyed, only transformed”). Whenever I badgered him to tell me about his day at work, he’d grudgingly talk about beam lines carrying particles at nearly the speed of light, gold targets that smash off new particles, ion streams, mesons, and a particle beam that would one day be used to kill cancer cells. It all sounded way cooler to me than he seemed to think it was. He worked with high-powered RF and electrical systems that supported the Cyclotron, TRIUMF’s world-class particle accelerator. To me, it sounded like stuff from one of my Fantastic Four comic books.

Dad spoke about Einstein with the same sense of appreciation that I have when I speak about Stephen Hawking, and with his occasional stories, he helped convince me that the world is smaller, larger, faster, and more dynamic than I could imagine. It was likely because of my father’s influence that I desired a scientific answer to every question.

In contrast, my Mother Angela was a creative person at heart, trained as a singer and musician, and in her twenties had been active on the amateur stage with the Gilbert and Sullivan Society in her home town of Victoria. It always seemed like Angela’s best days happened before she met my Dad, back when she was singing, playing piano or violin, or drinking with her friends. She seemed like someone who was more “in the moment” than worried about the future. Put her in front of a piano, and she would come to life and burn up the room with some energetic boogie-woogie. Otherwise, she seemed silent, and maybe sad or bored most of the time.

The artistic streak ran through Angela from her father, Ernest (my namesake), whom we nicknamed Poppy. Poppy shot thousands of photographs of Angela throughout his life, and he painted landscapes in oils later on in his senior years. Angela was the apple of his eye, and his only child.

Nobody at home really talked about art, but at Poppy’s house it was around us in little, everyday ways. Poppy had a sense of class and style. His furniture was older, upholstered and carved wood, and little cut glass ornaments decorated the mantle over his fireplace. His couch always had some pretty oriental fabric thrown over it, and he dressed himself in a shirt and tie and leather shoes almost every day.

I was never discouraged from comic books, cartoons, colouring, drawing, or from daydreaming. Philosophy was revealed in bite-sized chunks, through funny sayings from Popeye or Groucho Marx. Punny poems by Ogden Nash would be recited at the kitchen table, or cute ditties from the forties and fifties would be re-sung, getting lodged in my young head. Humour and creativity seemed to be a part of my Mother’s home language when we all lived with her father Ernest in Victoria. Her happiness at being with him was probably a major factor in her overall happiness in life. Life was treated as something to be enjoyed whenever possible. Seeing my Mother laughing, singing, and acting lively gave me the best moments I can think of. Her happiness was rare and infectious.

As I got older, Mum was often quiet, struggling with bouts of depression and saying very little. Later on, reflecting on this would encourage me to wonder about mental illness and psychology, and to speculate about whether my Mum could be cured.

I can’t say that she ever really taught me anything directly because she rarely ever even spoke to me or my sister. Instead, I ended up learning about her by listening to the stories my Dad told about her, and by watching her behaviour and listening to her rare words – I watched the performance that Angela gave as my Mother, and I tried to draw out some moments I could enjoy, and some lessons I might use.

I learned to recognize qualities in her that I saw in myself later: we had the same green eyes, and we loved music, art, and the movies. Mum had acted and sung in musical theatre with the Victoria Gilbert and Sullivan Society, and later in my life, I realized that I love live theatre and music too. I took to many of the jazz and pop musicians whom Dad had told me she’d loved in her youth, in particular, Oscar Peterson. We still have a few vinyl LPs that belonged to Mum. I can try to hear her voice by listening to the music that she liked.

The Hybridized Man

I realized by 19 or 20 that I was really a split human – a hybrid of him and her, mother and father, and their individual qualities. I had his lines on my forehead and her colour in my eyes. I knew I was artistic and creative, nervous, and introspective. I was also technical, curious, and resourceful. I had a bit of an ego like him, but could be gentle and insecure like her. If I was pushed, I could generate his power and authority in my voice, all while feeling her nervous butterflies swirling around in my stomach.

Finding computer graphics in art school gave me a perfect middle ground between art and technology. I could express my creative and visual design ideas, while gradually learning about the electronics and mechanics of the devices that made it all possible. The world was going more digital every day, and Stewart Brand, in his book about the MIT Media Lab, was describing the start of the convergence of the Print, Broadcasting, and Computer media which, a generation later, has utterly changed our society. Back around 1986, we were still at the start of a brave new world.

Gradually, after four years of study in drawing, art history, multidisciplinary art, and visual literacy, my grad projects came together as interactive electronic and graphical constructions that explored the relationship between viewer/participant, moments, and actions. It was 1989, a time when terms like “user interface” were more likely to be heard in the offices of companies like Nintendo, Apple, and Microsoft than in an art school.

The next giant leap for me would be six years later, when the World Wide Web became popularized and started to homogenize and automate online information. By 1995, I was an art director at a small software developer, and riding the line between art and technology every day. The web became a meta-medium that absorbed and presented other media for multisensory experiences that transcended platforms and geographies. Basically, the web changed everything and 25 years later, it still feels to me like the medium to integrate all media.

Paths to Theories About Everything

Artists and multidisciplinary practices showed me the ever-blurring boundary between creative and scientific principles. Spiritually and philosophically, reading about Buddhism has drawn hugely important connections for me between ideas like hope and despair, and between the material and the immaterial worlds. Visualizing the interdependence of all things, and the suffering inherent in being alive has helped me to understand the difference between nihilism and peace of mind. I began to feel that letting go isn’t the same as not caring, and that love can be present and unwavering without having to be insecure or needy. A little peace of mind seems to make everything feel a lot better. Even if I cannot feel the satisfaction of knowing how all the parts fit together, I can at least feel more at ease with my not knowing.

Physicists have pursued a theory of everything for centuries, and whether conceit or truth, they believe they’re closer than ever to finding it. I believe that this is science’s main conceit, in its comparative youth, taking a journey down a path that’s been well-trodden by religion and philosophy for millennia. For me though, science is still the great, evidence-based system to rely on.

Ultimately, we each walk our own path on our own legs, peering out from behind our own coloured lenses, trying to bring our personal version of meaning into focus.

The great philosopher Dr. Seuss once said “Oh, the places you’ll go!” In other words, it’s about the journey, not the destination.

Digging and drawing on the unknown…

Not long ago, I revisited an old idea with a friend at work: The Exquisite Corpse drawing game. We wanted to use it as a way to encourage some asynchronous play-activity among members of our busy and dispersed work group, to share or generate ideas, and to maybe generate some humour and surprise by chance.

The Exquisite Corpse game developed originally as a writing activity where participants contributed successive lines to a hidden story and revealed the full results later. The name “exquisite corpse” came from a sentence contributed during one particular game. It evolved into a drawing game where players would add sections to the end of each other’s drawings, without seeing the previous contributor’s work.

Surrealism evolved out of the Dadaist movement after World War I. It was originally literary, expressed in poetry, prose, and sometimes through an experimental activity called automatic writing (and as I recall, another term for this may have been “psychic automatism”). The Surrealist movement was driven by poets and writers like André Breton, painters like Frida Kahlo and Salvador Dalí, and photography and film artists like Man Ray.

Surrealists were deep explorers of internal landscapes and of the meanings that emerged from the juxtaposition of seemingly unrelated symbols. They were interested in exploring subconscious imagery; the themes and symbols that lay beneath the conscious mind, such as dreams, non-verbal desires, or primal urges.

These ideas were inspired by the development of psychotherapy (Freud, Jung and others) and the popularization of ideas like the collective unconscious (Jung). Later, in the 1950s, Beat-generation writers like Burroughs and Kerouac used similar techniques for their own purposes.

Personally, I’ve found a lot of satisfaction in using collage of magazine and newspaper imagery to create unexpected images. Whereas in the exquisite corpse game each participant hides their contribution from the next player, a solitary collage doesn’t involve other people, but it still provides unknown directions or unexpected ideas from moment to moment, based on a somewhat-random selection of visual elements, and any haphazardness or chances taken in how images get cut up, torn, and recontextualized.

I usually start with a large plastic bin full of magazines and scraps from coverless comics or newspapers. I just reach in and pull out as much as I think will cover the sheet of paper in front of me. Sometimes a few scraps will be so visually strong that they’ll drive an idea to be formed around them. Other times, a theme only emerges after a few pieces have been placed to set some scene (like sky, ground, trees, or buildings). I may use tape to tack things down, and use glue or rubber cement once a piece has remained in place for a while, as I build and develop things around it.

My themes almost always involve the human figure or some almost iconic form, and emerge from my internal themes of power, helplessness, mother/father, joy, ego and pride, fear of the unknown, virtuous ideals, sexuality, or pain.

For me, it works like a kind of visual self-talk therapy; a way to build a personal mirror, and to explore what stares back at you.

Have mobile web devices un-widowed the “Computer Widow”? #edcmooc

Back in the late 80s and early 90s, there was a term called “The Computer Widow”. This referred to the wives who hardly ever saw their computer-obsessed husbands, except from the back.

It’s a morbid metaphor, but served a purpose: obsession with computer-based work or distractions took time away from relationships, leaving wives feeling bitter, abandoned and effectively “widowed”. (This also speaks to the predominantly male-oriented computer and web culture that has more and more opened up to gender equity as the years have passed.)

I’m sure there were Ham Radio widows in previous generations, or partners of inventors or hobbyists whose work was obsessive in nature and revolved around a stationary set of tools.

Now that large desktop computers and wired network connections have been largely replaced by ubiquitous wireless handheld devices, our behaviour and expectations are different.

When I got my first fully web-enabled smartphone in 2009 (a Palm Pre), I began breaking a ten-year habit: instead of checking my email and surfing the web at my desktop PC each night, I began reading online news and managing my email on my smartphone multiple times per day. This has sometimes caused me to be one of those distracted people, reading my emails in the car or in bed at night, but generally, I think it’s been a huge improvement in terms of convenience and access. Now I only sit at my PC once or twice per week, and when I do, I’m amazed at how few messages come through to my desktop Inbox. I’ve been doing all my email reading, managing and deleting from my phone or sometimes my tablet. Those mobile devices have become my access points, and come with me to bed, the bathroom, or the car, and most of the time, this is an absolute convenience. I do think that my wife is feeling much less widowed in 2013 than she might have felt back in 2000. Now, we both compute and communicate wirelessly, and we can do it together at a coffee shop, chatting and commenting (or at least acknowledging each other) while we tap away at our respective work or hobby projects.

For me, mobility has definitely improved and alleviated the technology “widow” factor. Is this the same for others? Does being preoccupied in other locations or on the road make the preoccupation less of a problem? Does it allow busy people to get on with their lives, moving from task to task, or to different social situations, while staying connected or productive online?

Or, does it just allow us to be distracted by cyberspace while risking social dysfunction in real-space?

Fragmented and Unrecognizable Contexts

In the pre-mobile days, the context for an activity was largely recognizable by physical location, or unambiguous use of a particular device. In the analog world, you used a radio to listen to airborne audio, and you used a telephone for person-to-person voice communications. Other people could see that you were on the phone, or hear and see that you were listening to the radio.

Ubiquitous mobile (and soon wearable) computing and wireless communications make this third-party recognition much more difficult: you may have to deal with lunch mates who are repeatedly distracted by their phones, or who send text messages or tweets while they’re supposed to be paying attention to that fascinating story you’re relating about your dog. It’s hard to tell if someone who’s talking to themselves as they walk down the street is schizophrenic, or having a phone conversation on a Bluetooth earpiece.

As ubiquitous computing and communications evolve and the boundaries between man and machine become less distinguishable, it’s going to get weirder and more difficult to recognize when you’re being spoken to or interrupted by another person.

Reporting Life: Creating blog musings, scribbles and other artifacts…

This is like an inventory of things I do to express myself. I don’t know why I need to do a catalogue, but it feels right – like emptying a closet before you reorganize it.

Writing

  • I post musings and observations to my blog. These often are like a journal of reflections, or some passing whim or temporary interest.
    • I tend to return to the same themes in the course of 12 months: comic book and graphic artists, like Will Eisner, E.C. Segar, Jack Kirby, or Alan Moore, and iconic characters like Popeye and Superman.
    • I recall emotional patterns from my youth, particularly regarding my Mother and Father, or themes of loss, responsibility, persistence or hope.
    • I try to connect cool ideas or inspirational movements across eras, or across media or disciplines. Sometimes expressionist films like Metropolis will lead me to the Bauhaus, which will lead me to the new wave band DEVO, which leads me to underground cartoonist Robert Crumb, or the Church of the SubGenius and concepts of devolution, or to the movie “Idiocracy”. I find it interesting that some of the same ideas seem to “infect” both high art and low art in similar ways.

Visual Art

  • Occasionally, I’ll do a drawing, sketch, or collage, to document a state of mind.
    • Sometimes, it’s a sketchy portrait of the back of a stranger’s head, just to see if I still have enough eye-hand to render someone representationally, or to see if my Playbook tablet can be used as a sketching tool with as much effectiveness as a brush-pen.
    • Sometimes, it will be a little diagram or design scribble, to help me sort out a design idea.
    • Sometimes, it’s a crazy, colourful collage, using a plastic bin full of scraps of images culled from hundreds of magazines over the past dozen years. This is the most fun of all – like putting together a strange Freudian puzzle out of irregular pieces, and with no box cover to show you the final product.

It’s all about some kind of creative output.

Thought Precedes Action

But inspiration for a creative act or artifact most often comes after I’ve internalized some cool information, or someone else’s cool art. More often than not, some kind of stimulating input will have inspired me to synthesize something for myself: it’s important to listen to music or to look at art by artists whom you admire, or whose vision or message resonates with you.

It comes and goes. I need to hear or see something that makes me laugh or makes me go “wow”.

It will trigger something inside me – a response, a dredged-up memory, or a forgotten sense of self. I will ask myself who I am now, or how I want to feel. I will create an artifact. I will need to make a mark.

Everything in that last paragraph can happen very rapidly, like a sensory-response, or at the level of muscle memory – subconscious, and not even clearly or consciously articulated.

Garbage in, garbage out. Garbage in, Gold out. Sometimes copper. Most often, pixels or paper.

It is what it is: a response-loop that simply has to happen. Without it, I think I’d get ill or be too nervous.

E-learning and Digital Cultures, Week 4: Is Google making us stupid? #edcmooc

In Week 4 of the MOOC E-Learning + Digital Cultures, one of the “Perspectives on Education” articles asks the question “Is Google Making Us Stupid?”.

I must admit that I’ve sometimes asked myself a question very similar to this. I’ve asked myself “Am I losing my short-term memory?” or “Am I losing my ability to concentrate for long periods of time, or to read long passages of text in one sitting?”

I do believe that over the years of web surfing (which I’ve been doing since 1994 or 1995, when the first browsers became widely available), my ability to concentrate or my pattern of reading – the way in which I consume words – has been modified by the activity of surfing online. I do feel as if the hypertext, hunt ‘n click web has modified my behaviour. I can feel that I’ve become more of a browser than a reader.

Has Using the Web Trained me to Click Instead of Read?

It’s a fair question. The online world of information is like an endless, all-you-can-eat buffet. I may be in line to put together a meal from beginning to end, but the act of gathering what I need comes in little chunks, with possibilities for distraction at each new connection point. I’ll take a little bit of one site, a little bit of the next, etc. etc. Skip, skip, skip. Click, click, click. It’s more like an endless stream of consciousness, and it’s easy for me to get drawn off-course from an original train of thought onto something completely different. I think it must be the combination of my own curiosity and the seemingly endless array of links to other destinations.

But There’s a Physical Difference to Reading Online too…

I have always read, and I still love to read – novels, magazines, comics and graphic novels, and now more than ever, news and current events. But, I find reading from an LCD display to be much more difficult than reading from paper. Consistently more difficult.

I read a lot of text online, but it doesn’t mean that I’m no longer capable of reading a novel on paper. I love reading paper books and magazines (and even the occasional newspaper) – I’m just not quite as used to it as I was before.

So, I think the physicality of reading off a back-lit display of pixels (i.e. teeny little glowing dots), combined with the click ‘n browse nature of hypertext, brings me to a McLuhan-esque “Medium is the Message” realization:

I’m not getting dumber because of the Web, but I do think that the Web itself makes me read in a shallow way.

You know. The web made me do it.

#edcmooc

E-learning and Digital Cultures, Week 4: Redefining the Human #edcmooc

Week 4 of the MOOC E-Learning + Digital Cultures explores the theme of “Redefining the Human”.

I think the over-arching message this week is that our concept of humanity has become a relative and subjective thing. These videos explore that idea in different ways and different genres.

Robbie – A Short Film By Neil Harvey from Neil Harvey on Vimeo.

Robbie tells the story of a space-faring android who is the last occupant of a space station orbiting the earth. I could easily tell that this film was composed entirely of stock footage, but then again, how easy would it be to shoot your movie on the space station (or a realistic, earth-bound mockup)? Nonetheless, the repetitive, stock-footage appearance of it put me off a bit. Aside from that, Robbie is an engaging tale about survival, loneliness and angst from the perspective of an artificial intelligence.

I don’t know if 4000 or 6000 years of feeding its neural net with information would result in an android that would have dreams – literally flights of fantasy – and not for one moment did I buy the premise that Robbie wanted to be Catholic.

I’ll say that again: Catholic. I’m not anti-Catholic or anything, but such a specific choice of religion seems out-of-place. Is the author of this piece likening Robbie the Robot to Jesus, by virtue of his symbolic impending death (and, do we presume, rebirth)?

My expectation of an autonomous, artificial intelligence would be that it would be somehow more neutral, probably atheist or maybe humanist. It either wouldn’t believe in any religion, or perhaps it would believe in the species which created it. Okay – so, I’m an atheist and I have a hard time with that aspect. I’ll leave that point alone, and get on with it.

No – I just cannot leave the religion aspect alone on this one…

The idea that a robot with what we consider to be A.I. would care about one religion over another probably says more about the film maker’s attempt to imbue his protagonist with some kind of “soul”, so that the viewer will empathize with him. “If the robot wants to believe in God, then he must be more like me than I thought. If he could consider accepting God as his creator, then he must have a higher level of enlightenment, just like a human.”

If, however, Robbie were to possess the actual mental engrams of a former human being – if a human being’s actual thoughts and personality could be transferred into Robbie’s memory and mechanical frame – then THAT would convince me to feel sympathy for Robbie’s plight (his curse of immortality).

But so long as I believe that Robbie possesses a 21st century version of artificial rationale, I can never consider him conscious, and so I will never accept him for much more than a glorified electric screwdriver left behind by a space workman. How cold-hearted am I? I just didn’t buy into this movie’s attempt to tug my heart strings.

Gumdrop was a sweet little comedy, and a gentle visual sleight-of-hand. By substituting an android for a young human actor auditioning for an acting job, we end up thinking about the values and hopes of the young actress, mechanical or not. Gumdrop was a light-hearted examination of the casting call too: do we treat each other like commodities or machines? Does the audition process demean the female actor? Should human actors be worried, now that we live in a world where lots of supporting and lead characters only exist in an animation database, but never in the physical sense?

Gumdrop’s vacuum cleaner gag was very funny. But, does that mean she’s really just a glorified Rosy the Robot? What happens when the acting career is finished, or when she outlives her warranty? Will she get literally dumped on the scrap heap?

For some reason, I care about Gumdrop more than Robbie. Maybe it’s the human motion and voice. She’s much more likeable than Robbie. Like they said in Pulp Fiction, personality goes a long way.

Maybe one day soon, Honda’s Asimo walking robot will be able to audition for Survivor or something.

Hm. Robot Survivor – I’d probably watch that…

TRUE SKIN from H1 on Vimeo.

True Skin is an extremely well-made, and convincing film. Very Blade Runner-esque. Great Raymond-Chandler-inspired dialogue. “Their eyes burned holes in my pockets” was a brilliant line.

So, the one thing all these films have in common is that they live or die by the quality of the plot and the dialogue. Yay, human writers!

In terms of the humanity proposition of this week, I think this film does the best job of articulating some major issues:

  • If there comes a time when we can no longer define or recognize humanity by its fleshiness, will it still be considered human? Is a cyborg who is less than 50% flesh and bone still a human being? Maybe the more metallic and less meaty we become, the less human we will be perceived to be. Ben Kenobi said of Darth Vader: “He’s more machine than man now, twisted and evil.”
  • On a personal level, if a friend of mine had their thoughts transferred into a little computer, and I could interact with them (either by text, or maybe Max Headroom style on a display screen), would I still consider them human? Probably not, if I could put them into Standby Mode, or turn them off, like any other device. So, maybe autonomy and self-preservation are other key aspects of being a sentient being?

I loved Avatar Days. The simple concept of transplanting a fantasy persona into the owner’s real-world life and society is an extremely powerful thing. It’s done so matter-of-factly and carefully that it becomes a real artistic social statement. Coolest of all, it’s contemporary. You can get immersed in World of Warcraft or Second Life and become a sword-swinging, spell-packing nerd of Azeroth today.

I’ve played around in Second Life a bit in the past (reporting as “Earnest Oh”), so I can appreciate the appeal of being able to put on that second skin and walk around (or remove it and assume the position, in a lot of people’s cases… yeesh, people). It makes you wonder about the boundary between fantasy and reality for one thing. I read somewhere that, internally, your brain does not distinguish between a memory of a real event and a memory of a dream. They’re both equally valid as memories, even if one of them didn’t occur in the physical world. So, if our brains are already wired to accept dream-memories as valid, why wouldn’t we send coma victims to Azeroth to kick some goblin ass as part of some cognitive stimulation therapy? At least they’d have something interesting to do.

What about The Matrix as Long Term Care Facility? Let me extend that interesting idea into my personal life experience…

My Mother was a long-term care resident at our provincial mental health hospital for many years. I’m willing to bet that if my poor Mum were able to choose between (A) staying in a semi-vegetative state with little physical activity and not much on TV, or (B) being Dorothy in The Wizard of Oz (her favourite movie), she’d have gone for Option B and never looked back. And if I could have visited her on the yellow brick road instead of in the awkward, cold silence of a hospital visiting room, I know which choice I’d have made too.

Dorothy and her friends had much more fun…

E-learning and Digital Cultures, Week 3: Reasserting the Human #edcmooc

Week 3 of the MOOC E-Learning + Digital Cultures explores the theme of “Reasserting the Human”.

In the videos I’ve seen so far in Week 3, the idea of humanity is brought to the foreground primarily by the absurd or hyper-extended context in which each story is framed.

As a metaphor for what I mean, imagine you place a small area of light grey colour on top of a large black background. On black, the light grey will look much lighter than it actually is. In fact, people might interpret it as white.

That’s what these videos appear to be doing: creating a non-human, artificial or alien (spoiler alert!) tone or context, which brings out our internal concept of humanity in sharp relief. Unfortunately, they also bring out my cynicism in even sharper relief.

This somewhat shallow Toyota ad riffs on the idea of what today’s viewer would consider “CG” – a 3D representation that approaches the level of an interactive 3D video game, such as “L.A. Noire”. The message is insultingly simplistic: “Toyota is the real deal” [*snore*]

What I find more interesting is the fact that most younger viewers will readily agree that the 3D graphics in this commercial look “unrealistic”. They’ve grown up in the era of HD and awesome frame rates.

I was born in 1966, and I suspect that my generation will be less likely to find as much fault with the quality of the “unrealistic” renderings. Maybe my generation would pass on the real deal Toyota, and drive our chunky, pixely KIAs or Yugos around in 3D land and still have a great time. I guess Toyota is pandering to the 25 year-old driver in this case, and I’m somewhat irrelevant.

This British Telecom ad makes the point about human contact by showing a family that interacts exclusively via text and social media. They don’t even seem to know how disconnected they truly are from each other, until BT points it out of course. Poor buggers.

This ad is basically “reach out and touch someone” all over again. (Does anyone remember that ad campaign from the 1970s?) Poor consumers. At least this commercial has a message promoting some kind of “more human” connection to it. The idea is that real-time voice communication – the good ol’ phone system – is more human than texting or social media. I tend to agree with this sentiment, although ironically, I’d be using my social media as much as anyone, for the sheer convenience.

Most of the telecom commercials I’ve seen portray families that seem to need infinite minutes, massive data plans and constant texting. They always show family members enjoying their digital lives, away from each other in separate rooms, not conversing or connecting or even acknowledging each other.

World Builder is a bittersweet fantasy. My initial response during the first few minutes was “Self satisfied 3D modeler plays God creating his perfect little 3D world = adolescent male power fantasy = So what?”

But as the story unfolded to its final conclusion, it revealed a sweet moral of self-sacrifice, a dream-wish of happiness and freedom given by someone who has freedom to someone who has none.

The idea here is that technology can be a tool to humanize and liberate, and in this video, liberation and freedom are placed in the service of love and compassion, instead of in the selfish pursuit of pleasure or power.

The Long Hello – Meditating on #edcmooc, Gardner Campbell, and eLearning

The MOOC I’m taking, E-Learning + Digital Cultures, continues to unfold in front of me, gradually showing me new perspectives and more detail. But it’s not for the impatient…

For me, being in a MOOC has felt like being seated inside a vast, unlit stadium where you can hear other attendees whispering and you can see their messages on the walls, but otherwise, they remain invisible. Getting acclimatized – even feeling welcome – does not come right away.

A few weeks later, this is still more or less my experience, but my eyes seem to have adjusted to the darkness now – I feel like I can see better and interpret more than before.


In the Week 2 resources, under “Perspectives on Education”, the video of Gardner Campbell’s Open Ed 2012 keynote address hit me like a bolt to the brain: his passionate advocacy for truly open learning, his challenging definitions of what he felt it should be, and his support and appreciation for the interdisciplinary responses of his students – all of these factors made me feel inspired and energized to explore my own spaces between art, technology and learning. I think I may have found a new inspiration – someone to study more closely.

When I was in the Emily Carr College of Art + Design in the eighties, I learned about media theory (e.g. McLuhan), multimedia and hypertext (e.g. Ted Nelson), and visual literacy and visual perception (e.g. Tom Hudson, Rudolf Arnheim, Johannes Itten). Some things I learned from reading books or watching videos, but a lot of information I got first-hand, from seminars, workshops and special research projects. The people I learned from in-person were all artist-educators who were actively exploring ideas through their own art practice or educational research, often using consumer tech on shoestring budgets.

Back in my days as a multidisciplinary art student and research assistant, my greatest personal challenge was to interpret and synthesize all the raw information, and later, decide how to express my experiences. Many of my extracurricular readings covered topics in AI, cybernetics, user interaction, and theories of learning and education. I was all over the place conceptually, and loved it. Science educators like Seymour Papert and Alan Kay caught my interest for their explorations with interfaces and user (student) interaction. I read about the MIT Media Lab, and all its explorations into media, technology, art and science. I read articles from the ISAST journal “Leonardo”, and learned about PhD-level multidisciplinary art and science research projects. A good deal of the theory and terminology was just over my head, but I had found an interesting, fertile territory to consider, in the intersections of art, education and technology. Convergence was just starting to happen, and it was a fascinating thing.

My multimedia instructor, artist Gary Lee-Nova, helped me understand the relationships between modern analog and digital media, perception and society. Gary talked about author William Gibson and the idea of cyberpunk way before it was popular. Research, exploration and personal development were fun back then.

My mentor back in art college, Dr. Tom Hudson, opened my mind to modernist Bauhaus art education patterns, and under his guidance, we updated and reinterpreted them by using desktop computer graphics programs to research visual literacy and drawing systems.

After graduating from Emily Carr’s four year diploma program in 1989, I opted to pursue computer graphics, animation or commercial design as my career path, instead of art education. Tom had, at some level, hoped I would continue pursuing art education as a career. I did teach computer graphics in night school for a few years, tutored art privately, and was an Artist-in-Residence in the Vancouver School Board, but I never went into education in a more formalized way, like by pursuing a degree.

After 20 years working in the commercial sector, bringing visual design services to software/hardware developers and business people, the exciting theoretical, creative aspects of my thinking felt as if they had atrophied and needed some dusting off. My modus operandi had become one of speed and economy: skimming the surface of the pond of ideas to get from questions to answers, and from initial request to practical deliverable, as quickly as possible. Any education I took from my graphics career was of a short-term, tactical nature. I learned what I needed in order to fulfill a particular short-term goal. In that kind of mode, there wasn’t much time or interest in theory.

Now, I’m employed in Vancouver’s largest vocational college, helping teachers to adapt their experience and materials into online courses. In a higher education institution, my perceptions and reactions have had to adjust to a more deliberate, thoughtful form of delivery: integrity over speed, and quality over quantity.

Now, it feels like I’m rediscovering the joy of the interconnectedness of ideas – a multidisciplinary approach to things. I’m fascinated to see some of the topical connections between Seymour Papert, Alan Kay and Gardner Campbell.

I can, and should, now enjoy taking a deep dive into topics, instead of just skimming the surface.

It’s a long hello, but worth the wait…

#edcmooc