This little video is my response to the Creative Task for Chapter 4 (“Inspirational Week”)
The proposition for the task was this:
Take a camera, be it your mobile phone, a webcam… Introduce yourself to the other StoryMOOCers, telling us who you are, where you are from, and most importantly: which works inspired your interest in storytelling most up to now. Pick out 1-3 works of art, literature, film, TV, games, a website, or something else, and tell us what’s so special about it that you think it might help inspire somebody else anywhere on this planet.
The Eastside Culture Crawl is East Vancouver’s own open studio tour. I don’t go every year, but have gone for many years. It feels good to wander through studio space, smelling paint, sawdust, and sometimes coffee, tea, and cookies.
Wandering through a painting studio always gives me a sense of wonder, like I’m exploring a mysterious territory. It’s so refreshing to not know what you’ll see next.
Back in the 80s, as an art student, I studied some drawing and a lot of computer graphics, which was just starting to evolve into a useful medium through relatively inexpensive home computer systems. With the exception of a little ink or graphite, my hands stayed relatively clean while I drew using a mouse.
So occasionally, my curiosity would lead me to the painting studios at my art college, where I could experience colour as it was embedded in thick pigment. I could see the physicality of its application, smell the oil and acrylics, and see the splatters and splashes of physical action. Computer graphics had – and have – none of that physicality or real-space depth and reality.
Yeah – I’m going through a DEVO phase again. I listen to their music all the time. Their voices and sounds are familiar, like visiting an old neighbourhood.
I get emails from Club Devo, and see snippets of mutated art from Mark M., photos from their irreverent, young new wave days, and so many artifacts of their gleeful, tongue-in-cheek self-promotion. Echoes of the back-of-the-comic-book ads and junk culture that they enjoy.
Every 6 – 12 months, something bigger than my playlist brings the Devoids to my mind in a more significant way. Something new bubbles up in the media. This time, perhaps it was the unfortunate death of their friend and long-time drummer, Alan. A very sad loss, indeed. Their own “human metronome”, the driver of their complicated, syncopated rhythms, was no more.
Jerry Casale started following my Twitter feed the other day, and it made me feel a little closer to the source. The more DEVO videos or interviews I watch, and the more I read, the more they’re like citizens of some weird hometown – the guys who struck out a few years before my generation, and who did all the cool art that I wish I’d done.
I love this passage from the book “We are DEVO!” by Jade Dellinger and David Giffels:
In his book Fargo Rock City, rock critic Chuck Klosterman wrote that “Listening to (Eric) Clapton was like getting a sensual massage from a woman you’ve loved for the past ten years; listening to Van Halen was like having the best sex of your life with three foxy nursing students you met at a Tastee Freeze.” To extend that metaphor, Devo would be the equivalent of auto-erotic asphyxiation, the sexual technique of partly hanging oneself during masturbation to achieve a more intense orgasm.
(Having been to one uninspiring Clapton concert, I think that Klosterman likes Clapton a bit too much.)
Yeah, so DEVO is an acquired taste – not the flavour (or party favour) of the week.
But yeah, spuds, challenge me please. Make me think, or make me argue. If you can get me to write or think about what you’re saying, well, you’ve found that devolved nerve ending and twanged it nicely. And I thank you.
Clapton and all the 2nd-wave Brit rock gods, as incredibly talented as they were musically, never made me think about a damned thing. But the DEVO experiment got my attention, and they’re still doing it.
Robbie tells the story of a space-faring android who is the last occupant of a space station orbiting the Earth. I could easily tell that this film was composed entirely of stock footage, but then again, how easy would it be to shoot your movie on the space station (or a realistic, earth-bound mockup)? Nonetheless, the repetitive, stock-footage look of it put me off a bit. Aside from that, Robbie is an engaging tale about survival, loneliness and angst from the perspective of an artificial intelligence.
I don’t know if 4000 or 6000 years of feeding its neural net with information would result in an android that would have dreams – literally flights of fantasy – and not for one moment did I buy the premise that Robbie wanted to be Catholic.
I’ll say that again: Catholic. I’m not anti-Catholic or anything, but such a specific choice of religion seems out of place. Is the author of this piece likening Robbie the Robot to Jesus, by virtue of his symbolic impending death (and, do we presume, rebirth)?
My expectation of an autonomous artificial intelligence would be that it would be somehow more neutral – probably atheist, or maybe humanist. It either wouldn’t believe in a religion at all, or perhaps it would believe in the species which created it. Okay – so, I’m an atheist and I have a hard time with that aspect. I’ll leave that point alone, and get on with it.
No – I just cannot leave the religion aspect alone on this one…
The idea that a robot with what we consider to be A.I. would care about one religion over another probably says more about the filmmaker’s attempt to imbue his protagonist with some kind of “soul”, so that the viewer will empathize with him. “If the robot wants to believe in God, then he must be more like me than I thought. If he could consider accepting God as his creator, then he must have a higher level of enlightenment, just like a human.”
If, however, Robbie were to possess the actual mental engrams of a former human being – if a human being’s actual thoughts and personality could be transferred into Robbie’s memory and mechanical frame – then THAT would convince me to feel sympathy for Robbie’s plight (his curse of immortality).
But so long as I believe that Robbie possesses a 21st century version of artificial rationale, I can never consider him conscious, and so I will never accept him for much more than a glorified electric screwdriver left behind by a space workman. How cold-hearted am I? I just didn’t buy into this movie’s attempt to tug my heart strings.
Gumdrop was a sweet little comedy, and a gentle visual sleight-of-hand. By substituting an android for a young human actor auditioning for an acting job, the film gets us thinking about the values and hopes of the young actress, mechanical or not. Gumdrop was a light-hearted examination of the casting call, too: do we treat each other like commodities or machines? Does the audition process demean the female actor? Should human actors be worried, now that we live in a world where lots of supporting and lead characters only exist in an animation database, and never in the physical sense?
Gumdrop’s vacuum cleaner gag was very funny. But, does that mean she’s really just a glorified Rosy the Robot? What happens when the acting career is finished, or when she outlives her warranty? Will she get literally dumped on the scrap heap?
For some reason, I care about Gumdrop more than Robbie. Maybe it’s the human motion and voice. She’s much more likeable than Robbie. Like they said in Pulp Fiction, personality goes a long way.
True Skin is an extremely well-made and convincing film. Very Blade Runner-esque, with great Raymond Chandler-inspired dialogue. “Their eyes burned holes in my pockets” was a brilliant line.
So, the one thing all these films have in common is that they live or die by the quality of the plot and the dialogue. Yay, human writers!
In terms of the humanity proposition of this week, I think this film does the best job of articulating some major issues:
If there comes a time when we can no longer define or recognize humanity by its fleshiness, will it still be considered human? Is a cyborg who is less than 50% flesh and bone still a human being? Maybe the more metallic and less meaty we become, the less human we will be perceived to be. Ben Kenobi said of Darth Vader: “He’s more machine now than man, twisted and evil.”
On a personal level, if a friend of mine had their thoughts transferred into a little computer, and I could interact with them (either by text, or maybe Max Headroom-style on a display screen), would I still consider them human? Probably not, if I could put them into Standby Mode, or turn them off, like any other device. So maybe autonomy and self-preservation are other key aspects of being a sentient being?
I loved Avatar Days. The simple concept of transplanting a fantasy persona into the owner’s real-world life and society is an extremely powerful thing. It’s done so matter-of-factly and carefully that it becomes a real artistic social statement. Coolest of all, it’s contemporary. You can get immersed in World of Warcraft or Second Life and become a sword-swinging, spell-packing nerd of Azeroth today.
I’ve played around in Second Life a bit in the past (reporting as “Earnest Oh”), so I can appreciate the appeal of being able to put on that second skin and walk around (or remove it and assume the position, in a lot of people’s cases… yeesh, people). It makes you wonder about the boundary between fantasy and reality, for one thing. I read somewhere that, internally, your brain does not distinguish between a memory of a real event and a memory of a dream. They’re both equally valid as memories, even if one of them didn’t occur in the physical world. So, if our brains are already wired to accept dream-memories as valid, why wouldn’t we send coma victims to Azeroth to kick some goblin ass as part of some cognitive stimulation therapy? At least they’d have something interesting to do.
What about The Matrix as Long Term Care Facility? Let me extend that interesting idea into my personal life experience…
My Mother was a long-term care resident at our provincial mental health hospital for many years. I’m willing to bet that if my poor Mum were able to choose between (A) staying in a semi-vegetative state with little physical activity and not much on TV, or (B) being Dorothy in The Wizard of Oz (her favourite movie), she’d have gone for Option B and never looked back. And if I could have visited her on the yellow brick road instead of in the awkward, cold silence of a hospital visiting room, I know which choice I’d have made too.
This blog post and the embedded video form my Digital Artifact – my personal response – to the MOOC “eLearning and Digital Cultures”. In this post, I’ll try to respond to the propositions it has put before me, and to the methods and patterns I’ve observed in it and in myself.
About the Video…
I didn’t set out to emulate “The Machine is Us” or any of those first-person, typing-on-your-screen responses to modern tech, but in retrospect, my video kind of looks like one of them.
But the way it looks came about for purely practical reasons:
I wanted to use my voice. Maybe this was because the vastness of the MOOC classroom made me feel like it was difficult to be heard.
The MOOC is a heavily visual experience (all those videos, and scrolling of screens to read things), so my response had to be full of images and motion.
I knew it would be made up of some kind of collage of images, but I didn’t know I’d be sampling my own web surfing so directly. This was like a riff on the act of doing web-based research.
I wanted the video piece to look and feel a bit obscure, rough or hand-rolled, not perfectly trim and clean. Plus, time would be an issue, so I had to figure ways to do things live, and to move things around on the screen in real-time. Time was my enemy. I’d probably need to work fast.
I had a rough script, but was ready to improvise if need be.
How the video was produced:
The video came into being through a combination of digital and online resources, and coincidental, guerrilla production methods.
I’d originally thought about doing a Prezi or a slideshow as the format for my final piece, but after thinking about it for a while, I decided that those formats would either be too restrictive, or too over-used. I would definitely record something off my computer screen though – maybe using Jing…
My next concept was to create many little graphical clips – little cutouts – in Photoshop, and move them around on Photoshop’s artboard, like little 2D puppets on a digital “stage”. (Maybe the “Bendito Machine” video had influenced me subconsciously?)
As the deadline approached, the prospect of capturing and clipping dozens of graphics – maybe even one hundred – seemed hugely impractical. I needed a more immediate, more rapid way to get my idea across. I decided to try to stay with the “stage” idea, but move bigger and fewer pieces of art around.
I built a simple Photoshop project that used a soft-edged rectangle, like a soft viewport or blurry camera iris. I decided that the first few moments of my story could represent a frame of my expectations – the fuzzy edges might stand as a visual metaphor for the uncertain boundaries of my expectations, or the blurry boundaries that I perceived to be the student parameters of the MOOC itself.
Beyond that, I had a number of concepts that I’d thumbed into my smartphone during a coffee break. I knew the story would trace a line through the content that I’d experienced thus far, and through my reactions to being a MOOCer, in general.
I set up a small 640 x 480 rectangular area on my screen to record, and I abandoned Jing in favour of its “big brother” app, Camtasia Studio.
This became as much of a temporal collage as it was a spatial collage.
As soon as I got to record the first web page in the video (in this case the front of edcmooc), I decided to abandon the Photoshop artboard “stage” altogether, and just grab whatever I could online to tell the narrative I had sketched out in Notepad. I would just capture whatever I could in my browser (making elements bigger so they better filled the screen and the user’s field of view), and use whatever images I could find on the fly from the web.
I began recording, and would pause from shot to shot, to change what content would appear in the little 640 x 480 capture area. This allowed me to create the whole sequence in chunks of one minute or so, or sometimes as brief as a few seconds. This gave me the freedom to work rapidly and change things on the fly, spending 10 or 15 minutes between “takes” to select and compose what would go in the next little sequence, or consult my little script (which you see me doing in the video), and practice or re-do my audio narration.
The music track was from a Creative Commons source, and any coincidence of image and sound (like when an image appears right in time with a strong drum cue or something) is purely and wonderfully accidental.
So, there was some predetermined design, and there was some random chance, and some on-the-spot improv, which felt very liberating. There was a logistical framework in some of the preparation, and most especially there was a definite mental framework in all the concepts which had been interconnecting in my mind over the past few weeks.
But it was truly recorded as a sequence of brief little live performances. Recording and editing the initial 12-minute “draft” version of the video probably took me five or six hours. The next day, I emailed and tweeted the YouTube URL around to get some feedback, and then spent another hour later that night tightening up the editing, adding graphics, and refining the music volume.
Then, I spent another few hours working on this blog post, in order to try to explain (and rationalize) it all…
What my Digital Artifact probably says about my experience…
…is that after the first few weeks, I think I responded more to the process of MOOCing, of being a student in a MOOC, than I did to the actual propositions put to me by the course facilitators and the course content. I have always been a bit more interested in process than product. I think that working in relative isolation, with only a vague feeling of online “connectedness” to instructors or colleagues, tended to make me turn inward more and more. Instead of reaching outward to collaborate with my online classmates or facilitators, I turned inward and did a more personal analysis of the internal learning and thought processes which had been triggered – some of them from twenty-five years earlier! I think that’s what my artifact communicates: my reactions to the process in which I was immersed.
I enjoyed creating something that moved and contained more than one mode of apprehension (i.e. voice + video + music). I think that I ended up responding to those same qualities in the MOOC content…
The little animated chunks of video, which delivered little windows into someone else’s world.
The relentless reading and scrolling and clicking to get from idea to idea (an animated experience in itself).
What does my experience reflect? Is it useful to the MOOC itself?
A friend and fellow classmate in this MOOC told me that being in it felt a bit like being in art college all over again. I must totally agree with that statement: that is very much how it felt for me as well. And for me, that’s a good thing.
But, is it useful information to the facilitators of this MOOC or to the developers of the versions of it that will come after it? Just what kind of teaching and learning have we been undertaking here in MOOCland, and what are those Masters students in the U. of Edinburgh getting from studying this massive online learning experiment? And what does Coursera get out of it?
What is a MOOC, after all?
Is it just Edutainment, as some people fear?
Is it a new excuse for more web surfing and social media?
Is it actually some yet-to-be-validated form of social learning?
Those questions will take me much longer to answer.
Also, yes, I’m tooting my own horn in this post: one of my illustrations was actually used in this Prezi. It had been my entry into the MOOC’s “make an interesting image for Week 2” competition. I never won enough “likes” or whatever on Flickr to win the prize, but seeing my illustration used as a slide in this Prezi is prize enough for me.
For an assignment for the MOOC, eLearning and Digital Cultures, I created my first Prezi…
It’s my little abstract reaction to the bewilderment of feeling lost inside a 40,000 member Massive Open Online Course.
Themes explored this week included technological utopianism and dystopianism, and the idea of technological determinism.
I watched these videos:
Video: “Day Made of Glass 2” (Corning)
The “Glass as lifestyle” approach is somewhat corporate wishful thinking, IMHO, and relies too much on groovy futuristic sci-fi touch interfaces to make the glass medium look exciting. Tinting windows? Sure. Use my bedroom window to help me decide what to pull out of my closet that is only a few feet away? Fat chance.
A massive sheet of glass in the middle of a demonstration forest would never be that clean and perfect.
I’m sure it would also be dangerous for the wildlife (birds crashing into it all the time would make for scary discoveries by young girls).
In the classroom, students are just well-behaved, passive recipients of the Teacher’s initial presentation, with nobody raising their hand to ask a question or to go to the bathroom. In classrooms today that use interactive whiteboards, students are often encouraged to come to the front and move images around as part of the lesson. Why do presentation and participation (at the beautiful touch-table) need to be framed as a group activity? In the Corning classroom, students are depicted and treated mainly as one group/collective. Is this a (subconscious) corporate wish for collective harmony? It’s okay for the kids to pick their clothes or to colour Dad’s dashboard full of hearts – that’s harmless kid stuff – but beyond that, personal expression or individuality seem muted in Corningland.
The glass-based solar array on the school roof was a nice image, but they could have done more to humanize their mission, and embrace corporate social responsibility. Like, why not show a kick-ass interactive graffiti wall donated by Corning to some local Community Centre?
Also, why are the young girls private school students? Is that a value judgement about an educational utopia? Does that mean that Corning’s utopian vision would only be available to the upper class and rich medical specialists like the Dad? That would leave something of a dystopian “plexiglass” reality for the lower classes, I guess… 😉 Definite technological determinism there, not to mention classism.
Video: “Productivity Future Vision” (Microsoft)
In Microsoft’s vision, paper seems to have disappeared, replaced by flexible touch-sensitive surfaces. That’s hard for me to accept. Paper will remain cheaper than plastic for at least the next 10 years, and more ecologically friendly forever. I noticed that keyboards are still around in Microsoft’s future vision, at least in the office when one is preparing the annual report (or whatever that dude was doing).
Apparently, nobody at home or work is concerned about repetitive stress injuries from all those large arm motions needed to swoosh images around on those massive interactive surfaces. How many overweight CEOs are going to throw their backs out trying to clear all the virtual files off their ginormous desk-walls?
This idea that all surfaces will be interactive and high-res is completely fantastic – a utopian vision and obvious excuse to demo Microsoft’s Surface technology. It is technologically skewed towards the vendor-manufacturer’s wet dream of an ideal consumer family.
Last year, I read an astute saying: “If you didn’t pay to use a service, then you are the product being sold.” I feel like that kind of “buyer beware” maxim could be applied to ease of use in information technologies too. Here’s what I mean…
If a technology tool or platform is popular partly because it’s easier to use than the competition, then the usability of its design was likely a core business strategy. Hardware designers might talk of “build quality” and ergonomics – it’s all about usability.
Today, usability is deeply integrated into product design and marketing. Take, for example, the rise of tablet computing platforms – most popularly, the Apple iPad. Many users who are new, technologically intimidated, or very young or old will likely have an easier time using a touch tablet like the iPad than they would a desktop computer. Compared to manipulating a mouse and keyboard on a desk to move objects on a screen, touching your finger directly to a tablet’s screen (particularly one whose OS is designed for touch) is much easier for a new or unfamiliar user. You don’t have to “get used” to a mouse (i.e. training yourself that a wrist movement of a few inches from left to right across your desk will translate into a one-foot left-to-right motion of a pointer on the screen in front of your face). This basic aspect of the windows-icons-mouse-pointer interface is actually a barrier to use: a new user must practice a little before they can easily manipulate graphical objects with a mouse.
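That indirect hand-to-pointer mapping can be sketched with some back-of-the-envelope arithmetic. The numbers below (mouse resolution, OS sensitivity multiplier, screen pixel density) are illustrative assumptions of my own, not measurements from any particular mouse or system:

```python
# A rough sketch of why mouse use takes practice: hand motion on the desk
# maps indirectly to pointer motion on the screen, scaled by the mouse's
# resolution and the operating system's sensitivity setting.

def pointer_travel_inches(hand_inches, mouse_dpi=400, os_gain=1.0, screen_ppi=96):
    """Approximate on-screen pointer travel for a given hand movement.

    hand_inches: how far the hand moves the mouse across the desk
    mouse_dpi:   movement "counts" the mouse reports per inch (assumed)
    os_gain:     multiplier the OS applies to those counts (assumed)
    screen_ppi:  pixels per inch of the display (assumed)
    """
    counts = hand_inches * mouse_dpi   # raw movement reports from the mouse
    pixels = counts * os_gain          # pixels the pointer actually moves
    return pixels / screen_ppi         # physical distance travelled on screen

# A few inches of wrist movement sweeps the pointer about a foot across the screen:
print(pointer_travel_inches(3))  # → 12.5
```

With these made-up but plausible numbers, three inches of hand motion becomes roughly a foot of pointer motion – exactly the kind of indirect scaling a new user has to internalize before a mouse feels natural. (Real systems complicate this further with non-linear “pointer acceleration” curves.)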
In this regard, smartphone and tablet-based computing have been absolute game-changer technologies for many people. Apple and many other manufacturers knew this, and were waiting for touch-screen technology to become sophisticated and inexpensive enough to bring to the mass market.
These devices are used to access many free and for-pay information and media services. People don’t really think about how this came to be – they just want to be able to use these devices – these new gadgets – to get at the news, music, movies, or games that they want. Corporations seem to have taken a cue from the original “information on the Internet should be free” ethos that evolved through the 70s, 80s and 90s, and subverted it by making books, apps and games available on tablets for only a few dollars, or even for free. Buying an iPad game that will give you dozens of hours of fun will cost you about the same as a pack of bubble gum. That’s one barrier gone. After you download it, you can use it right away – installation is usually fast and minimal. That’s another barrier gone.
From a business perspective, making a platform easier to use (usability), making the purchase process easier to complete (one-click fulfillment), and making it easier to justify (cheap or free) will easily result in more purchases. Amazon’s “1-Click” purchase button was the first place I saw this kind of supermarket-checkout “impulse purchase” tactic at work. I had disposable income, and Jeff Bezos and Amazon made it extremely easy for me to dispose of it on a whim. I could impulse-buy a thirty-dollar hardcover book with even less effort than it would take to grab a candy bar in the checkout aisle at Safeway. Tablets with apps and books that can be bought for under a dollar, while you’re lying in bed at night, are about as convenient and impulsive as it gets.
It means that the end-user consumer must exercise some discretion and willpower to avoid nickel-and-diming themselves down to a negative balance in their bank account. A high degree of usability in the device itself makes for a pleasing and satisfying user experience, and ubiquitous cheap online products in a “one-click” marketplace make it deceptively easy to please the vendors.
So, if it’s too easy to use, be careful. You might use it too often.
Explorations in learning, ideas, and design by E. John Love