Not long ago, I revisited an old idea with a friend at work: the Exquisite Corpse drawing game. We wanted to use it to encourage some asynchronous play among members of our busy, dispersed work group, to share ideas, and maybe to spark some humour and surprise by chance.
The Exquisite Corpse game originally developed as a writing activity in which participants contributed successive lines to a hidden story and revealed the full result later. The name came from a sentence produced in one early game: “The exquisite corpse shall drink the new wine.” It later evolved into a drawing game where players add sections onto each other’s drawings without seeing the previous contributor’s work.
Surrealism evolved out of the Dadaist movement after World War I. It was originally literary, expressed in poetry, prose, and sometimes through an experimental activity called automatic writing (and as I recall, another term for this was “psychic automatism”). The Surrealist movement was driven by poets and writers like André Breton, painters like Frida Kahlo and Salvador Dalí, and photography and film artists like Man Ray.
Surrealists were deep explorers of internal landscapes and of the meanings that emerged from the juxtaposition of seemingly unrelated symbols. They were interested in exploring subconscious imagery: the themes and symbols that lie beneath the conscious mind, such as dreams, non-verbal desires, or primal urges.
Personally, I’ve found a lot of satisfaction in using collage of magazine and newspaper imagery to create unexpected images. Whereas the exquisite corpse game has each participant hide their contribution from the next player, a solitary collage involves no other people, yet it still provides unknown directions and unexpected ideas from moment to moment, based on a somewhat random selection of visual elements and any haphazardness or chances taken in how images get cut up, torn, and recontextualized.
I usually start with a large plastic bin full of magazines and scraps from coverless comics or newspapers. I just reach in and pull out as much as I think will cover the sheet of paper in front of me. Sometimes a few scraps will be so visually strong that they’ll drive an idea to be formed around them. Other times, a theme only emerges after a few pieces have been placed to set some scene (like sky, ground, trees, or buildings). I may use tape to tack things down, and use glue or rubber cement once a piece has remained in place for a while, as I build and develop things around it.
My themes almost always involve the human figure or some almost iconic form, and emerge from my internal themes of power, helplessness, mother/father, joy, ego and pride, fear of the unknown, virtuous ideals, sexuality, or pain.
For me, it works like a kind of visual self-talk therapy; a way to build a personal mirror, and to explore what stares back at you.
This little video is my response to the Creative Task for Chapter 4 (“Inspirational Week”).
The proposition for the task was this:
Take a camera, be it your mobile phone, a webcam… Introduce yourself to the other StoryMOOCers, telling us who you are, where you are from, and most importantly: which works inspired your interest in storytelling most up to now. Pick out 1-3 works of art, literature, film, TV, game, a website or else and tell us what’s so special about it that you think it might help inspire somebody else anywhere on this planet.
This Massive Open Online Course provides a foundation in the principles of the formats and methods of fictional storytelling.
The reasons this online course attracted me are:
The topic interests me: I’m beginning to write again, and I want to learn more…
The method of access interests me: I work in eLearning, and using a new Learning Management System is fun and educational in itself.
It’s largely self-paced, and absolutely free.
The course is organized into Chapters, each containing a number of Units of instruction. Every Unit follows the same format: one brief video presentation (usually 10-12 minutes) in which the host introduces the topic and provides examples, animation, or brief explanations drawn from famous works of fiction or from professional writers and storytellers.
Adjacent to each video is a tiny, one or two question quiz (often multiple-choice) which you must answer correctly to “pass” the Unit.
Below the video and quiz are links to optional further readings, references to articles or books, or other supporting videos. It doesn’t get much easier than that. I think this course is a bit too easy so far, but it is also very well-designed, nice to look at, and easy to use. The videos are extremely professionally made and fun to watch. So far, the course has been a very enjoyable experience.
Apparently, this course has over 65,000 enrollees from all over the world, and (with the exception of a technical problem in Unit 2 of Chapter 1) seems to be well-liked by its users.
My only concern is the “apparent” level of interactions online in the course’s discussion forums. I say “apparent” because in my opinion, the discussion forums in the iVersity MOOC platform don’t really seem to adequately show the amount of interaction between students, and I don’t get an obvious sense that the Instructors are online and available.
This may be unfair of me, as I admit that I haven’t spent much time in the forums for this course, but in my memory of taking a different MOOC hosted in Coursera (“eLearning and Digital Cultures”), the Instructors seemed to have a more obvious presence online in the course’s discussion boards, and in monthly Google Hangout sessions.
Yeah – I’m going through a DEVO phase again. I listen to their music all the time. Their voices and sounds are familiar, like visiting an old neighbourhood.
I get emails from Club Devo, and see snippets of mutated art from Mark M., photos from their irreverent, young new wave days, and so many artifacts of their gleeful, tongue-in-cheek self-promotion. Echoes of the back-of-the-comic-ad junk culture that they enjoy.
Every 6-12 months, something bigger than my playlist brings the Devoids to mind in a more significant way. Something new bubbles up in the media. This time, perhaps it was the unfortunate death of their friend and long-time drummer, Alan Myers. A very sad loss, indeed. Their own “human metronome”, the driver of their complicated, syncopated rhythms, was no more.
Gerry Casale started following my Twitter feed the other day, and it made me feel a little closer to the source. The more DEVO videos or interviews I watch, the more I read, the more they’re like citizens of some weird hometown – the guys who struck out a few years before my generation, and who did all the cool art that I wish I’d done.
I love this passage from the book “We Are DEVO!” by Jade Dellinger and David Giffels:
In his book Fargo Rock City, rock critic Chuck Klosterman wrote that “Listening to (Eric) Clapton was like getting a sensual massage from a woman you’ve loved for the past ten years; listening to Van Halen was like having the best sex of your life with three foxy nursing students you met at a Tastee Freeze.” To extend that metaphor, Devo would be the equivalent of auto-erotic asphyxiation, the sexual technique of partly hanging oneself during masturbation to achieve a more intense orgasm.
(Having been to one uninspiring Clapton concert, I think that Klosterman likes Clapton a bit too much.)
Yeah, so DEVO is an acquired taste – not the flavour (or party favour) of the week.
But, yeah spuds, challenge me please. Make me think, or make me argue. If you can get me to write or think about what you’re saying, well, you’ve found that devolved nerve ending and twanged it nicely. And I thank you.
Clapton and all the 2nd-wave Brit rock gods, as incredibly talented as they were musically, never made me think about a damned thing. But the DEVO experiment got my attention, and they’re still doing it.
Robbie tells the story of a space-faring android who is the last occupant of a space station orbiting the Earth. I could easily tell that this film was composed entirely of stock footage, but then again, how easy would it be to shoot your movie on the space station (or a realistic, earth-bound mockup)? Nonetheless, the repetitive, stock-footage look of it put me off a bit. Aside from that, Robbie is an engaging tale about survival, loneliness and angst from the perspective of an artificial intelligence.
I don’t know if 4000 or 6000 years of feeding its neural net with information would result in an android that would have dreams – literally flights of fantasy – and not for one moment did I buy the premise that Robbie wanted to be Catholic.
I’ll say that again: Catholic. I’m not anti-Catholic or anything, but such a specific choice of religion seems out of place. Is the author of this piece likening Robbie the Robot to Jesus, by virtue of his symbolic impending death (and, do we presume, rebirth)?
My expectation of an autonomous, artificial intelligence would be that it would be somehow more neutral, probably atheist or maybe humanist. It either wouldn’t believe a religion or perhaps it would believe in the species which created it. Okay – so, I’m an atheist and I have a hard time with that aspect. I’ll leave that point alone, and get on with it.
No – I just cannot leave the religion aspect alone on this one…
The idea that a robot with what we consider to be A.I. would care about one religion over another probably says more about the filmmaker’s attempt to imbue his protagonist with some kind of “soul”, so that the viewer will empathize with him. “If the robot wants to believe in God, then he must be more like me than I thought. If he could consider accepting God as his creator, then he must have a higher level of enlightenment, just like a human.”
If, however, Robbie were to possess the actual mental engrams of a former human being – if a human being’s actual thoughts and personality could be transferred into Robbie’s memory and mechanical frame – then THAT would convince me to feel sympathy for Robbie’s plight (his curse of immortality).
But so long as I believe that Robbie possesses a 21st century version of artificial rationale, I can never consider him conscious, and so I will never accept him for much more than a glorified electric screwdriver left behind by a space workman. How cold-hearted am I? I just didn’t buy into this movie’s attempt to tug my heart strings.
Gumdrop was a sweet little comedy, and a gentle visual sleight-of-hand. By substituting an android for a young human actor auditioning for a part, it gets us thinking about the values and hopes of the young actress, mechanical or not. Gumdrop was a light-hearted examination of the casting call, too: do we treat each other like commodities or machines? Does the audition process demean the female actor? Should human actors be worried, now that we live in a world where lots of supporting and lead characters exist only in an animation database, and never in the physical sense?
Gumdrop’s vacuum cleaner gag was very funny. But does that mean she’s really just a glorified Rosie the Robot? What happens when the acting career is finished, or when she outlives her warranty? Will she literally get dumped on the scrap heap?
For some reason, I care about Gumdrop more than Robbie. Maybe it’s the human motion and voice. She’s much more likeable than Robbie. Like they said in Pulp Fiction, personality goes a long way.
True Skin is an extremely well-made and convincing film. Very Blade Runner-esque. Great Raymond Chandler-inspired dialogue. “Their eyes burned holes in my pockets” was a brilliant line.
So, the one thing all these films have in common is that they live or die by the quality of the plot and the dialogue. Yay, human writers!
In terms of the humanity proposition of this week, I think this film does the best job of articulating some major issues:
If there comes a time when we can no longer define or recognize humanity by its fleshiness, will it still be considered human? Is a cyborg who is less than 50% flesh and bone still a human being? Maybe the more metallic and less meaty we become, the less human we will be perceived to be. Ben Kenobi said of Darth Vader: “He’s more machine now than man, twisted and evil.”
On a personal level, if a friend of mine had their thoughts transferred into a little computer, and I could interact with them (either by text, or maybe Max Headroom-style on a display screen), would I still consider them human? Probably not, if I could put them into Standby Mode, or turn them off, like any other device. So maybe autonomy and self-preservation are other key aspects of being a sentient being?
I loved Avatar Days. The simple concept of transplanting a fantasy persona into the owner’s real-world life and society is an extremely powerful thing. It’s done so matter-of-factly and carefully that it becomes a real artistic social statement. Coolest of all, it’s contemporary. You can get immersed in World of Warcraft or Second Life and become a sword-swinging, spell-packing nerd of Azeroth today.
I’ve played around in Second Life a bit in the past (reporting as “Earnest Oh”), so I can appreciate the appeal of being able to put on that second skin and walk around (or remove it and assume the position, in a lot of people’s cases… yeesh, people). For one thing, it makes you wonder about the boundary between fantasy and reality. I read somewhere that, internally, your brain does not distinguish between the memory of a real event and the memory of a dream. They’re both equally valid as memories, even if one of them didn’t occur in the physical world. So, if our brains are already wired to accept dream-memories as valid, why wouldn’t we send coma victims to Azeroth to kick some goblin ass as part of some cognitive stimulation therapy? At least they’d have something interesting to do.
What about The Matrix as Long Term Care Facility? Let me extend that interesting idea into my personal life experience…
My Mother was a long-term care resident at our provincial mental health hospital for many years. I’m willing to bet that if my poor Mum had been able to choose between (A) staying in a semi-vegetative state with little physical activity and not much on TV, or (B) being Dorothy in The Wizard of Oz (her favourite movie), she’d have gone for Option B and never looked back. And if I could have visited her on the yellow brick road instead of in the awkward, cold silence of a hospital visiting room, I know which choice I’d have made too.
The MOOC I’m taking, E-Learning + Digital Cultures, continues to unfold in front of me, gradually showing me new perspectives and more detail. But it’s not for the impatient…
For me, being in a MOOC has felt like being seated inside a vast, unlit stadium where you can hear other attendees whispering and you can see their messages on the walls, but otherwise, they remain invisible. Getting acclimatized – even feeling welcome – does not come right away.
A few weeks later, this is still more or less my experience, but my eyes seem to have adjusted to the darkness now – I feel like I can see better and interpret more than before.
In the Week 2 resources, under “Perspectives on Education”, the video of Gardner Campbell’s Open Ed 2012 keynote address hit me like a bolt to the brain: his passionate advocacy for truly open learning, his challenging definitions of what he felt it should be, and his support and appreciation for the interdisciplinary responses of his students – all of these factors made me feel inspired and energized to explore my own spaces between art, technology and learning. I think I may have found a new inspiration – someone to study more closely.
When I was at the Emily Carr College of Art + Design in the eighties, I learned about media theory (e.g. McLuhan), multimedia and hypertext (e.g. Ted Nelson), and visual literacy and visual perception (e.g. Tom Hudson, Rudolf Arnheim, Johannes Itten). Some things I learned from reading books or watching videos, but a lot of information I got first-hand, from seminars, workshops and special research projects. The people I learned from in person were all artist-educators who were actively exploring ideas through their own art practice or educational research, often using consumer tech on shoestring budgets.
Back in my days as a multidisciplinary art student and research assistant, my greatest personal challenge was to interpret and synthesize all the raw information, and later, decide how to express my experiences. Many of my extracurricular readings covered topics in AI, cybernetics, user interaction, and theories of learning and education. I was all over the place conceptually, and loved it. Science educators like Seymour Papert and Alan Kay caught my interest for their explorations with interfaces and user (student) interaction. I read about the MIT Media Lab, and all its explorations into media, technology, art and science. I read articles from the ISAST journal Leonardo, and learned about PhD-level multidisciplinary art and science research projects. A good deal of the theory and terminology was just over my head, but I had found an interesting, fertile territory to consider in the intersections of art, education and technology. Convergence was just starting to happen, and it was a fascinating thing.
My multimedia instructor, artist Gary Lee-Nova, helped me understand the relationships between modern analog and digital media, perception and society. Gary talked about author William Gibson and the idea of cyberpunk way before it was popular. Research, exploration and personal development were fun back then.
My mentor back in art college, Dr. Tom Hudson, opened my mind to modernist Bauhaus art education patterns, and under his guidance, we updated and reinterpreted them by using desktop computer graphics programs to research visual literacy and drawing systems.
After graduating from Emily Carr’s four year diploma program in 1989, I opted to pursue computer graphics, animation or commercial design as my career path, instead of art education. Tom had, at some level, hoped I would continue pursuing art education as a career. I did teach computer graphics in night school for a few years, tutored art privately, and was an Artist-in-Residence in the Vancouver School Board, but I never went into education in a more formalized way, like by pursuing a degree.
After 20 years working in the commercial sector, bringing visual design services to software/hardware developers and business people, the exciting theoretical, creative aspects of my thinking felt as if they had atrophied and needed some dusting off. My modus operandi had become one of speed and economy: skimming the surface of the pond of ideas to get from questions to answers, and from initial request to practical deliverable, as quickly as possible. Any education I took from my graphics career was of a short-term, tactical nature. I learned what I needed in order to fulfill a particular short-term goal. In that kind of mode, there wasn’t much time or interest in theory.
Now, I’m employed in Vancouver’s largest vocational college, helping teachers to adapt their experience and materials into online courses. In a higher education institution, my perceptions and reactions have had to adjust to a more deliberate, thoughtful form of delivery: integrity over speed, and quality over quantity.
Now, it feels like I’m rediscovering the joy of the interconnectedness of ideas – a multidisciplinary approach to things. I’m fascinated to see some of the topical connections between Seymour Papert, Alan Kay and Gardner Campbell.
I can, and should, now enjoy taking a deep dive into topics, instead of just skimming the surface.
Last year, I came across an astute maxim: “If you didn’t pay to use a service, then you are the product being sold.” I feel like that kind of “buyer beware” warning could be applied to ease-of-use in information technologies too. Here’s what I mean…
If a technology tool or platform is popular, it’s often because it’s easier to use than the competition; the usability aspect of its design was likely a core business strategy. Hardware designers might talk of “build quality” and ergonomics, but it’s all about usability.
Today, usability is deeply integrated into product design and marketing. Take, for example, the rise of tablet computing platforms, most popularly the Apple iPad. Users who are new, technologically intimidated, very young, or very old will likely have an easier time using a touch tablet like the iPad than a desktop computer. Compared to manipulating a mouse and keyboard on a desk to manipulate objects on a screen, touching your finger directly to a tablet’s screen (at least on an OS designed for touch) is much easier for a new or unfamiliar user. You don’t have to “get used” to a mouse, i.e. training yourself that a wrist movement of a few inches from left to right across your desk translates into a one-foot left-to-right motion of a pointer on the screen in front of your face. This basic aspect of the windows-icons-mouse-pointer interface is actually a barrier to use: a new user must practice a little before they can easily manipulate graphical objects with a mouse.
In this regard, smartphone and tablet-based computing have been absolute game-changer technologies for many people. Apple and many other manufacturers knew this, and were waiting for touch-screen technology to become sophisticated and inexpensive enough to bring to the mass market.
These devices are used to access many free and for-pay information and media services. People don’t really think about this dynamic; they just want to be able to use these new gadgets to get at the news, music, movies, or games that they want. Corporations seem to have taken a cue from the original “information on the Internet should be free” ethos that evolved through the 70s, 80s and 90s, and subverted it by making books, apps and games available on tablets for only a few dollars, or even for free. Buying an iPad game that will give you dozens of hours of fun costs about the same as a pack of bubble gum. That’s one barrier gone. After you download it, you can use it right away; installation is usually fast and minimal. That’s another barrier gone.
From a business perspective, making a platform easier to use (usability), and making the purchase process easier to complete (one-click fulfillment) and easier to justify (cheap or free) will easily result in more purchases. Amazon’s “One-click” purchase button was the first place I saw this kind of supermarket checkout “impulse purchase” tactic at work. I had disposable income, and Jeff Bezos and Amazon made it extremely easy for me to dispose of it on a whim. I could “impulse buy” a thirty dollar hardcover book with even less effort than it would take to grab a candy bar at the checkout aisle at Safeway. Tablets with apps and books that can be bought for under a dollar, while you’re laying in bed at night, are about as convenient and impulsive as it gets.
It means that the end-user consumer must exercise some discretion and willpower to avoid nickel-and-diming themselves down to a negative balance in their bank account. A high degree of usability in the device itself makes for a pleasing and satisfying user experience, and ubiquitous cheap online products in a “one-click” marketplace make it deceptively easy to please the vendors.
So, if it’s too easy to use, be careful. You might use it too often.
This site is an experiment. It’s my attempt to document the wide array of personal interests, curiosities, and self-directed learning efforts which continually seem to occupy my off-hours.
My interests vary; I tend to hop around a lot conceptually, in terms of what motivates or excites me.
I go through phases; minor obsessions with very different topics or areas of interest. Like the avant-garde pop of Devo, or the social commentary of Popeye and Groucho Marx, or the design philosophies of the Bauhaus, or Einstein’s Relativity. I have always tended to hop laterally from subject to subject, and then try to integrate and assimilate that new information into what I already know.
I’m mostly a visual learner. I need to see and make images to help me understand. So, aside from the chronological, bloggy aspect of this site, I thought it would be good to have an image portfolio to show any research that I do, or to show illustrations that helped me get to where I wanted to go.
Somewhere there are connections – common threads – between all these various areas of interest. Finding those threads and tugging on them is part of the joy of discovery.
I am a life-long learner, and probably, a perpetual student.
All through my post-secondary education (four frantic, sleep-deprived, incredible years at art college), I seldom knew exactly what I wanted to do in art and design. I just knew what ideas excited me.
In the summer of 1985, once I learned that I was accepted to the Emily Carr College of Art and Design (after I peeled myself off the ceiling), I started to do a few things.
First I panicked, thinking “Gawd – can I do this?” I got over that phase.
Next, I began to imagine what it would be like to be an art student. Unfortunately, nothing but stereotypical images of painting and drawing came to my mind.
Finally, I realized that I needed to prepare myself in a few ways. I needed to assemble my portfolio and to develop a little confidence, so I took a life drawing course at a small studio on Granville Island. I blushed self-consciously while trying to avoid the eyes of the nude model. I scribbled, muttered to myself, and produced a bunch of weak, tentative drawings that I probably threw out later. As I was packing up to leave, I looked toward the model as she was reaching for her robe, and she shot me a smile and a knowing look that both reassured me and told me she knew just how green I was. I laughed on the inside, and walked home feeling some pride in having tried my first life drawing class. I proudly announced to my Dad that I had done my first life drawing class. Once Dad realized that “life drawing” involved a nude model, he became very angry, growling “Why can’t you just draw fruit?!” Screw him, I thought. I was proud of myself. It wouldn’t be long before Dad felt proud too. That was pretty cool.
Fortunately, I passed my portfolio interview (and I still don’t know how I got through), and began Foundation (first year) studies at Emily Carr.
One of the first places where things really clicked for me was in Foundation Computer class. Even though it was 1985, and we were using Commodore 64s (and in one class, I swear to god I had a VIC-20 with a Datasette), I became fascinated by those little machines that were capable of turning key-presses into little glowing blocks of colour and shape. I remember trying to memorize Microsoft BASIC character string functions like “CHR$(32)”, and trying to understand how BASIC worked. A year later, the college bought dozens of Macs, Amigas and Atari STs, and we all began using mice and creating real computer-based graphics and animation.
I also began to consider the schism within myself: artistic and instinctual on the one side (my Mother), and structural and technical on the other side (my Father). Early on, I did not know how to reconcile these two aspects of my personality, but I knew that they would co-exist, and eventually, I developed the idea that they would interact or influence each other in some way.
In the following years, I developed a keen interest in multimedia, animation and video, and began to learn how these technologies were gradually converging (read Stewart Brand’s book “The Media Lab”). I absorbed as much media theory as my instructor Gary Lee-Nova provided, got technical help designing simple electronic circuits from Dennis Vance, and studied a lot on my own (the relationships between art, science and technology, and cybernetics).
More than any other teacher I’ve had, Dr. Tom Hudson was a massive influence on me throughout my art student years. Under Tom’s tutelage and inspiration, I learned about visual literacy, and undertook experiments in colour and drawing in the Bauhaus and British post-war traditions. The main difference was that all my “vis-lit” research for Tom was executed on a microcomputer, using a commercial paint program. We were actually exploring and developing work in computer-based visual literacy. This extracurricular research work was used in Tom’s educational television series “Mark and Image”, and also published in two of his academic articles for the British Journal of Art and Design Education. These events remain my academic high-water marks, and form the springboard of my interest and development as a digital designer.
Within a couple of years of graduation, I was developing icons, layouts and animations for the user interface of what was to become North America’s first home-based banking system. From there, my interest in GUI design and web design was born. Since that time, I’ve enjoyed working with software designers on GUI design projects for TV, game consoles, PC and web-based applications. The essentials of visual literacy, colour, design, perception, and user expectations have all been developed and refined through those practical, real-world design projects.
Now, 21 years after graduating from the ECCAD four-year program and receiving my diploma in fine arts, I look at the preponderance of digital media and information systems in the world around me, and I’m amazed at how much culture and technology have converged, even to the point of becoming practically inseparable.
I think that good digital design is more important than ever, and being able to work in multiple media, multiple formats and multiple modes of thought (artistic, technical, exploratory, practical) seems to me to be more important than ever.
Explorations in learning, ideas, and design by E. John Love