Back in the late 80s and early 90s, there was a term called “The Computer Widow”. This referred to the wives who hardly ever saw their computer-obsessed husbands, except from the back.
It’s a morbid metaphor, but it served a purpose: obsession with computer-based work or distractions took time away from relationships, leaving wives feeling bitter, abandoned and effectively “widowed”. (This also speaks to the predominantly male-oriented computer and web culture, which has opened up more and more to gender equity as the years have passed.)
I’m sure there were Ham Radio widows in previous generations, or partners of inventors or hobbyists whose work was obsessive in nature and revolved around a stationary set of tools.
Now that large desktop computers and wired network connections have been largely replaced by ubiquitous wireless handheld devices, our behaviour and expectations are different.
When I got my first fully web-enabled smartphone in 2009 (a Palm Pre), I began breaking a 10-year habit: instead of checking my email and surfing the web at my desktop PC each night, I began reading online news and managing my email on my smartphone multiple times per day. This has sometimes made me one of those distracted people, reading emails in the car or in bed at night, but generally, I think it’s been a huge improvement in terms of convenience and access. Now I only sit at my PC once or twice per week, and when I do, I’m amazed at how few messages come through to my desktop Inbox. I’ve been doing all my email reading, managing and deleting from my phone or sometimes my tablet. Those mobile devices have become my access points, and they come with me to bed, the bathroom or the car, and most of the time, this is an absolute convenience. I do think that my wife feels much less widowed in 2013 than she might have felt back in 2000. Now, we both compute and communicate wirelessly, and we can do it together at a coffee shop, chatting and commenting (or at least acknowledging each other) while we tap away at our respective work or hobby projects.
For me, mobility has definitely alleviated the technology “widow” factor. Is this the same for others? Does being preoccupied in other locations or on the road make the preoccupation less of a problem? Does it allow busy people to get on with their lives, moving from task to task, or between social situations, while staying connected or productive online?
Or, does it just allow us to be distracted by cyberspace while risking social dysfunction in real-space?
Fragmented and Unrecognizable Contexts
In the pre-mobile days, the context for an activity was largely recognizable by physical location, or unambiguous use of a particular device. In the analog world, you used a radio to listen to airborne audio, and you used a telephone for person-to-person voice communications. Other people could see that you were on the phone, or hear and see that you were listening to the radio.
Ubiquitous mobile (and soon wearable) computing and wireless communications make this third-party recognition much more difficult: you may have to deal with lunch mates who are repeatedly distracted by their phones, or who send text messages or tweets while they’re supposed to be paying attention to that fascinating story you’re relating about your dog. It’s hard to tell whether someone who’s talking to themselves as they walk down the street is schizophrenic, or having a phone conversation on a Bluetooth earpiece.
As ubiquitous computing and communications evolve and the boundaries between man and machine become less distinguishable, it’s going to get weirder and more difficult to recognize when you are being conversed with or interrupted by another person.
I must admit that I’ve sometimes asked myself a very similar question. I’ve asked myself “Am I losing my short-term memory?” or “Am I losing my ability to concentrate for long periods of time, or to read long passages of text in one sitting?”
I do believe that over my years of web surfing (which I’ve been doing since 1994 or 1995, when the first browsers became widely available), my ability to concentrate and my pattern of reading – the way in which I consume words – have been modified by the activity of surfing online. I do feel as if the hypertext, hunt ’n click web has modified my behaviour. I can feel that I’ve become more of a browser than a reader.
Has Using the Web Trained me to Click Instead of Read?
It’s a fair question, as if the online world of information were an endless, all-you-can-eat buffet. I may be in line to put together a meal from beginning to end, but the act of gathering what I need comes in little chunks, with possibilities for distraction at each new connection point. I’ll take a little bit of one site, a little bit of the next, and so on. Skip, skip, skip. Click, click, click. It’s more like an endless stream of consciousness, and it’s easy for me to get drawn off-course from an original train of thought onto something completely different. I think it must be the combination of my own curiosity and the seemingly endless array of links to other destinations.
But There’s a Physical Difference to Reading Online too…
I have always read, and I still love to read – novels, magazines, comics and graphic novels, and now more than ever, news and current events. But, I find reading from an LCD display to be much more difficult than reading from paper. Consistently more difficult.
I read a lot of text online, but that doesn’t mean I’m no longer capable of reading a novel on paper. I love reading paper books and magazines (and even the occasional newspaper) – I’m just not quite as used to it as I was before.
So, I think the physicality of reading off a back-lit display of pixels (lit by teeny little Light Emitting Diodes), combined with the click ’n browse nature of hypertext, brings me to a McLuhan-esque “Medium is the Message” realization:
I’m not getting dumber because of the Web, but I do think that the Web itself makes me read in a shallow way.
I think the over-arching message this week is that our concept of humanity has become a relative and subjective thing. These videos explore that idea in different ways and different genres.
Robbie tells the story of a space-faring android who is the last occupant of a space station orbiting the Earth. I could easily tell that this film was composed entirely of stock footage, but then again, how easy would it be to shoot your movie on a real space station (or a realistic, earth-bound mockup)? Nonetheless, the repetitive, stock-footage appearance of it put me off a bit. Aside from that, Robbie is an engaging tale about survival, loneliness and angst from the perspective of an artificial intelligence.
I don’t know if 4000 or 6000 years of feeding its neural net with information would result in an android that would have dreams – literally flights of fantasy – and not for one moment did I buy the premise that Robbie wanted to be Catholic.
I’ll say that again: Catholic. I’m not anti-Catholic or anything, but such a specific choice of religion seems out of place. Is the author of this piece likening Robbie the Robot to Jesus, by virtue of his symbolic impending death (and, do we presume, rebirth)?
My expectation of an autonomous artificial intelligence would be that it would be somehow more neutral – probably atheist, or maybe humanist. Either it wouldn’t believe in any religion, or perhaps it would believe in the species which created it. Okay – so, I’m an atheist and I have a hard time with that aspect. I’ll leave that point alone, and get on with it.
No – I just cannot leave the religion aspect alone on this one…
The idea that a robot with what we consider to be A.I. would care about one religion over another probably says more about the filmmaker’s attempt to imbue his protagonist with some kind of “soul”, so that the viewer will empathize with him. “If the robot wants to believe in God, then he must be more like me than I thought. If he could consider accepting God as his creator, then he must have a higher level of enlightenment, just like a human.”
If, however, Robbie were to possess the actual mental engrams of a former human being – if a human being’s actual thoughts and personality could be transferred into Robbie’s memory and mechanical frame – then THAT would convince me to feel sympathy for Robbie’s plight (his curse of immortality).
But so long as I believe that Robbie possesses a 21st-century version of artificial reasoning, I can never consider him conscious, and so I will never accept him as much more than a glorified electric screwdriver left behind by a space workman. How cold-hearted am I? I just didn’t buy into this movie’s attempt to tug at my heart strings.
Gumdrop was a sweet little comedy, and a gentle visual sleight-of-hand. By substituting an android for a young human actor auditioning for an acting job, the film gets us thinking about the values and hopes of the young actress, mechanical or not. Gumdrop was a light-hearted examination of the casting call too: do we treat each other like commodities or machines? Does the audition process demean the female actor? Should human actors be worried, now that we live in a world where lots of supporting and lead characters exist only in an animation database, and never in the physical sense?
Gumdrop’s vacuum cleaner gag was very funny. But does that mean she’s really just a glorified Rosie the Robot? What happens when her acting career is finished, or when she outlives her warranty? Will she literally get dumped on the scrap heap?
For some reason, I care about Gumdrop more than Robbie. Maybe it’s the human motion and voice. She’s much more likeable than Robbie. Like they said in Pulp Fiction, personality goes a long way.
True Skin is an extremely well-made and convincing film. Very Blade Runner-esque. Great Raymond-Chandler-inspired dialogue. “Their eyes burned holes in my pockets” was a brilliant line.
So, the one thing all these films have in common is that they live or die by the quality of the plot and the dialogue. Yay, human writers!
In terms of the humanity proposition of this week, I think this film does the best job of articulating some major issues:
If there comes a time when we can no longer define or recognize humanity by its fleshiness, will it still be considered human? Is a cyborg who is less than 50% flesh and bone still a human being? Maybe the more metallic and less meaty we become, the less human we will be perceived to be. Ben Kenobi said of Darth Vader: “He’s more machine now than man. Twisted and evil.”
On a personal level, if a friend of mine had their thoughts transferred into a little computer, and I could interact with them (either via text, or maybe Max Headroom-style on a display screen), would I still consider them human? Probably not, if I could put them into Standby Mode, or turn them off like any other device. So, maybe autonomy and self-preservation are other key aspects of being a sentient being?
I loved Avatar Days. The simple concept of transplanting a fantasy persona into the owner’s real-world life and society is an extremely powerful thing. It’s done so matter-of-factly and carefully that it becomes a real artistic social statement. Coolest of all, it’s contemporary: you can get immersed in World of Warcraft or Second Life and become a sword-swinging, spell-packing nerd of Azeroth today.
I’ve played around in Second Life a bit in the past (reporting as “Earnest Oh”), so I can appreciate the appeal of being able to put on that second skin and walk around (or remove it and assume the position, in a lot of people’s cases… yeesh, people). It makes you wonder about the boundary between fantasy and reality, for one thing. I read somewhere that, internally, your brain does not distinguish between a memory of a real event and a memory of a dream. They’re both equally valid as memories, even if one of them didn’t occur in the physical world. So, if our brains are already wired to accept dream-memories as valid, why wouldn’t we send coma victims to Azeroth to kick some goblin ass as part of some cognitive stimulation therapy? At least they’d have something interesting to do.
What about The Matrix as Long Term Care Facility? Let me extend that interesting idea into my personal life experience…
My Mother was a long-term care resident at our provincial mental health hospital for many years. I’m willing to bet that if my poor Mum had been able to choose between (A) staying in a semi-vegetative state with little physical activity and not much on TV, or (B) being Dorothy in The Wizard of Oz (her favourite movie), she’d have gone for Option B and never looked back. And if I could have visited her on the yellow brick road instead of in the awkward, cold silence of a hospital visiting room, I know which choice I’d have made too.
This blog post and the embedded video form my Digital Artifact – my personal response to the MOOC “eLearning and Digital Cultures”. In this post, I’ll try to respond to the propositions it has put before me, and to the methods and patterns I’ve observed in it and in myself.
About the Video…
I didn’t set out to emulate “The Machine is Us/ing Us” or any of those first-person, typing-on-your-screen responses to modern tech, but in retrospect, my video kind of looks like one of them.
But the way it looks came about for purely practical reasons:
I wanted to use my voice. Maybe this was because the vastness of the MOOC classroom made me feel like it was difficult to be heard.
The MOOC is a heavily visual experience (all those videos, and scrolling of screens to read things), so my response had to be full of images and motion.
I knew it would be made up of some kind of collage of images, but I didn’t know I’d be sampling my own web surfing so directly. This was like a riff on the act of doing web-based research.
I wanted the video piece to look and feel a bit obscure, rough or hand-rolled, not perfectly trim and clean. Plus, time was my enemy, so I had to figure out ways to do things live and move things around on the screen in real-time. I’d need to work fast.
I had a rough script, but was ready to improvise if need be.
How the video was produced:
The video came into being through a combination of digital and online resources, and coincidental, guerrilla production methods.
I’d originally thought about doing a Prezi or a slideshow as the format for my final piece, but after thinking about it for a while, I decided that those formats would either be too restrictive, or too over-used. I would definitely record something off my computer screen though – maybe using Jing…
My next concept was to create many little graphical clips – little cutouts – in Photoshop, and move them around on Photoshop’s artboard, like little 2D puppets on a digital “stage”. (Maybe the “Bendito Machine” video had influenced me subconsciously?)
As the deadline approached, the prospect of capturing and clipping dozens of graphics – maybe even one hundred – seemed hugely impractical. I needed a more immediate, more rapid way to get my idea across. I decided to try to stay with the “stage” idea, but move bigger and fewer pieces of art around.
I built a simple Photoshop project that used a soft-edged rectangle, like a soft viewport or blurry camera iris. I decided that the first few moments of my story could represent a frame of my expectations – the fuzzy edges might stand as a visual metaphor for the uncertain boundaries of my expectations, or the blurry boundaries that I perceived to be the student parameters of the MOOC itself.
Beyond that, I had a number of concepts that I’d thumbed into my smartphone during a coffee break. I knew the story would trace a line through the content that I’d experienced thus far, and through my reactions to being a MOOCer, in general.
I set up a small 640 x 480 rectangular area on my screen to record, and I abandoned Jing in favour of its “big brother” app, Camtasia Studio.
This became as much of a temporal collage as it was a spatial collage.
As soon as I recorded the first web page in the video (in this case the front page of edcmooc), I decided to abandon the Photoshop artboard “stage” altogether, and just grab whatever I could online to tell the narrative I had sketched out in Notepad. I would capture whatever I could in my browser (making elements bigger so they better filled the screen and the viewer’s field of view), and use whatever images I could find on the fly from the web.
I began recording, and would pause from shot to shot, to change what content would appear in the little 640 x 480 capture area. This allowed me to create the whole sequence in chunks of one minute or so, or sometimes as brief as a few seconds. This gave me the freedom to work rapidly and change things on the fly, spending 10 or 15 minutes between “takes” to select and compose what would go in the next little sequence, or consult my little script (which you see me doing in the video), and practice or re-do my audio narration.
The music track came from a Creative Commons source, and any coincidence of image and sound (like when an image appears right in time with a strong drum cue or something) is purely and wonderfully accidental.
So, there was some predetermined design, and there was some random chance, and some on-the-spot improv, which felt very liberating. There was a logistical framework in some of the preparation, and most especially there was a definite mental framework in all the concepts which had been interconnecting in my mind over the past few weeks.
But it was truly recorded as a sequence of brief little live performances. Recording and editing the initial 12 minute “draft” version of the video probably took me five or six hours. The next day, I emailed and tweeted the YouTube URL around to get some feedback, and then spent another hour later that night tightening up the editing, adding graphics, and refining the music volume.
Then, I spent another few hours working on this blog post, in order to try to explain (and rationalize) it all…
What my Digital Artifact probably says about my experience…
…is that after the first few weeks, I think I responded more to the process of MOOCing – of being a student in a MOOC – than I did to the actual propositions put to me by the course facilitators and the course content. I have always been a bit more interested in process than in product. I think that working in relative isolation, with only a vague feeling of online “connectedness” to instructors or colleagues, tended to make me turn inward more and more. Instead of reaching outward to collaborate with my online classmates or facilitators, I turned inward and did a more personal analysis of the internal learning and thought processes which had been triggered – some of them dating from twenty-five years earlier! I think that’s what my artifact communicates: my reactions to the process in which I was immersed.
I enjoyed creating something that moved and contained more than one mode of apprehension (i.e. voice + video + music). I think that I ended up responding to those same qualities in the MOOC content…
The little animated chunks of video, which delivered little windows into someone else’s world.
The relentless reading and scrolling and clicking to get from idea to idea (an animated experience in itself).
What does my experience reflect? Is it useful to the MOOC itself?
A friend and fellow classmate in this MOOC told me that being in it felt a bit like being in art college all over again. I must totally agree with that statement: that is very much how it felt for me as well. And for me, that’s a good thing.
But is it useful information to the facilitators of this MOOC, or to the developers of the versions that will come after it? Just what kind of teaching and learning have we been undertaking here in MOOCland, and what are those Masters students at the U. of Edinburgh getting from studying this massive online learning experiment? And what does Coursera get out of it?
What is a MOOC, after all?
Is it just Edutainment, as some people fear?
Is it a new excuse for more web surfing and social media?
Is it actually some yet-to-be-validated form of social learning?
Those questions will take me much longer to answer.
These usage statistics were provided by faculty from Edinburgh University, who are running the E-Learning + Digital Cultures MOOC on Coursera:
Total Registered Participants: 42,874
Active Participants Over the Last 7 Days (“Active” is defined as any contact with the EDCMOOC Coursera course site): ~17%
EDC MOOC News (Blog Aggregator) Unique Visitors: ~10%
Visitors to the EDCMOOC News page come ~65% from the USA and ~8% from the UK.
Other stats about this MOOC:
For about 70% of the group, this is their first MOOC. About half are currently enrolled in only one MOOC.
About 24% of respondents come from the USA, ~9% from the UK, ~6% from Spain, and ~3% each from India and Greece.
About 60% of respondents come either from “teaching and education” or report themselves to be “students”. Just over 60% of the entire respondent group have postgraduate-level qualifications, and a further ~35% have a university or college degree. (A rough conversion of some of these percentages into head counts is sketched below.)
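To make those percentages a little more concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes the “active” and “unique visitor” figures are fractions of the total registered participants – the base isn’t stated explicitly in the stats – so treat the absolute numbers as ballpark estimates only:

```python
# Rough conversion of the reported percentages into approximate head counts.
# Assumption (not stated in the stats above): both percentages are fractions
# of the total registered participants.

total_registered = 42874

figures = {
    "Active over the last 7 days (~17%)": 0.17,
    "EDC MOOC News unique visitors (~10%)": 0.10,
}

for label, fraction in figures.items():
    print(f"{label}: roughly {round(total_registered * fraction):,} people")

# Prints:
# Active over the last 7 days (~17%): roughly 7,289 people
# EDC MOOC News unique visitors (~10%): roughly 4,287 people
```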
In the videos I’ve seen so far in Week 3, the idea of humanity is brought to the foreground primarily by the absurd or hyper-extended context in which each story is framed.
As a metaphor for what I mean, imagine you place a small area of light grey on top of a large black background. On black, the light grey will look much lighter than it actually is. In fact, people might interpret it as white.
That’s what these videos appear to be doing: creating a non-human, artificial or alien (spoiler alert!) tone or context, which brings out our internal concept of humanity in sharp relief. Unfortunately, they also bring out my cynicism in even sharper relief.
This somewhat shallow Toyota ad riffs on the idea of what today’s viewer would consider “CG” – a 3D representation that approaches the level of an interactive 3D video game, such as “L.A. Noire”. The message is insultingly simplistic: “Toyota is the real deal” [*snore*]
What I find more interesting is that most younger viewers will readily agree that the 3D graphics in this commercial look “unrealistic”. They’ve grown up in the era of HD and awesome frame rates.
I was born in 1966, and I suspect that my generation will be less likely to find as much fault with the quality of the “unrealistic” renderings. Maybe my generation would pass on the real-deal Toyota, drive our chunky, pixelly KIAs or Yugos around in 3D land, and still have a great time. I guess Toyota is pandering to the 25-year-old driver in this case, and I’m somewhat irrelevant.
This British Telecom ad makes the point about human contact by showing a family that interacts exclusively via text and social media. They don’t even seem to know how disconnected they truly are from each other, until BT points it out of course. Poor buggers.
This ad is basically “reach out and touch someone” all over again. (Does anyone remember that ad campaign from the 1970s?) Poor consumers. At least this commercial has a message promoting some kind of “more human” connection. The idea is that real-time voice communication – the good ol’ phone system – is more human than texting or social media. I tend to agree with this sentiment, although ironically, I’d be using my social media as much as anyone, for the sheer convenience.
Most of the telecom commercials I’ve seen portray families that seem to need infinite minutes, massive data plans and constant texting. They always show family members enjoying their digital lives away from each other, in separate rooms, not conversing or connecting or even acknowledging one another.
World Builder is a bittersweet fantasy. My initial response during the first few minutes was “Self-satisfied 3D modeler plays God creating his perfect little 3D world = adolescent male power fantasy = so what?”
But as the story unfolded to its final conclusion, it revealed a sweet moral of self-sacrifice: a dream-wish of happiness and freedom, given by someone who has freedom to someone who hasn’t any.
The idea here is that technology can be a tool to humanize and liberate, and in this video, liberation and freedom are placed in the service of love and compassion, instead of in the selfish pursuit of pleasure or power.
For an assignment for the MOOC, eLearning and Digital Cultures, I created my first Prezi…
It’s my little abstract reaction to the bewilderment of feeling lost inside a 40,000 member Massive Open Online Course.
The MOOC I’m taking, E-Learning + Digital Cultures, continues to unfold in front of me, gradually showing me new perspectives and more detail. But it’s not for the impatient…
For me, being in a MOOC has felt like being seated inside a vast, unlit stadium where you can hear other attendees whispering and you can see their messages on the walls, but otherwise, they remain invisible. Getting acclimatized – even feeling welcome – does not come right away.
A few weeks later, this is still more or less my experience, but my eyes seem to have adjusted to the darkness now – I feel like I can see better and interpret more than before.
Gardner Campbell’s Open Ed 2012 keynote address hit me like a bolt to the brain… [It] made me feel inspired and energized to explore my own spaces between art, technology and learning.
In the Week 2 resources, under “Perspectives on Education”, the video of Gardner Campbell’s Open Ed 2012 keynote address hit me like a bolt to the brain: his passionate advocacy for truly open learning, his challenging definitions of what he felt it should be, and his support and appreciation for the interdisciplinary responses of his students – all of these factors made me feel inspired and energized to explore my own spaces between art, technology and learning. I think I may have found a new inspiration – someone to study more closely.
When I was at the Emily Carr College of Art + Design in the eighties, I learned about media theory (e.g. McLuhan), multimedia and hypertext (e.g. Ted Nelson), and visual literacy and visual perception (e.g. Tom Hudson, Rudolf Arnheim, Johannes Itten). Some things I learned from reading books or watching videos, but a lot of information I got first-hand, from seminars, workshops and special research projects. The people I learned from in person were all artist-educators who were actively exploring ideas through their own art practice or educational research, often using consumer tech on shoestring budgets.
Back in my days as a multidisciplinary art student and research assistant, my greatest personal challenge was to interpret and synthesize all the raw information, and later, to decide how to express my experiences. Many of my extracurricular readings covered topics in AI, cybernetics, user interaction, and theories of learning and education. I was all over the place conceptually, and loved it. Science educators like Seymour Papert and Alan Kay caught my interest for their explorations with interfaces and user (student) interaction. I read about the MIT Media Lab and all its explorations into media, technology, art and science. I read articles from the ISAST journal “Leonardo”, and learned about PhD-level multidisciplinary art and science research projects. A good deal of the theory and terminology was just over my head, but I had found an interesting, fertile territory to consider, in the intersections of art, education and technology. Convergence was just starting to happen, and it was a fascinating thing.
My multimedia instructor, artist Gary Lee-Nova, helped me understand the relationships between modern analog and digital media, perception and society. Gary talked about author William Gibson and the idea of cyberpunk way before it was popular. Research, exploration and personal development were fun back then.
My mentor back in art college, Dr. Tom Hudson, opened my mind to modernist Bauhaus art education patterns, and under his guidance, we updated and reinterpreted them by using desktop computer graphics programs to research visual literacy and drawing systems.
After graduating from Emily Carr’s four-year diploma program in 1989, I opted to pursue computer graphics, animation or commercial design as my career path, instead of art education. Tom had, at some level, hoped I would continue pursuing art education as a career. I did teach computer graphics in night school for a few years, tutored art privately, and was an Artist-in-Residence with the Vancouver School Board, but I never went into education in a more formalized way, such as by pursuing a degree.
After 20 years working in the commercial sector, bringing visual design services to software/hardware developers and business people, the exciting theoretical, creative aspects of my thinking felt as if they had atrophied and needed some dusting off. My modus operandi had become one of speed and economy: skimming the surface of the pond of ideas to get from questions to answers, and from initial request to practical deliverable, as quickly as possible. Any education I took from my graphics career was of a short-term, tactical nature. I learned what I needed in order to fulfill a particular short-term goal. In that kind of mode, there wasn’t much time or interest in theory.
Now, I’m employed in Vancouver’s largest vocational college, helping teachers to adapt their experience and materials into online courses. In a higher education institution, my perceptions and reactions have had to adjust to a more deliberate, thoughtful form of delivery: integrity over speed, and quality over quantity.
Now, it feels like I’m rediscovering the joy of the interconnectedness of ideas – a multidisciplinary approach to things. I’m fascinated to see some of the topical connections between Seymour Papert, Alan Kay and Gardner Campbell.
I can, and should, now enjoy taking a deep dive into topics, instead of just skimming the surface.
I admit to not always being the most successful critical thinker – I tend to want to believe the things I read, especially if they sound optimistic.
Having said that (and having read other articles that tout elearning and MOOCs as the next big thing to open up and democratize higher education), I admit that some of Mr. Shirky’s opinions in this piece did arouse my suspicion. I am wary of the for-profit world, and fairly cynical about why for-profit companies would offer any service for free. I believe that there’s a for-pay business model underneath a fairly thin veneer of “open access” and “free content”. Nothing is ever truly free.
Themes explored this week included technological utopianism and dystopianism, and the idea of technological determinism.
I watched these videos:
Video: “Day Made of Glass 2” (Corning)
The “glass as lifestyle” approach is somewhat corporate wishful thinking, IMHO, and relies too much on groovy, futuristic sci-fi touch interfaces to make the glass medium look exciting. Tinting windows? Sure. Using my bedroom window to help me decide what to pull out of a closet that is only a few feet away? Fat chance.
A massive sheet of glass in the middle of a demonstration forest would never be that clean and perfect.
I’m sure it would also be dangerous for the wildlife (birds crashing into it all the time = scary discoveries for young girls).
In the classroom, the students are just well-behaved, passive recipients of the teacher’s initial presentation, with nobody raising their hand to ask a question or to go to the bathroom. In classrooms today that use interactive whiteboards, students are often encouraged to come to the front and move images around as part of the lesson. Why do presentation and participation (at the beautiful touch-table) need to be framed as a group activity? In the Corning classroom, students are depicted and treated mainly as one group/collective. Is this a (subconscious) corporate wish for collective harmony? It’s okay for the kids to pick their clothes or to colour Dad’s dashboard full of hearts – that’s harmless kid stuff – but beyond that, personal expression and individuality seem muted in Corningland.
The glass-based solar array on the school roof was a nice image, but they could have done more to humanize their mission, and embrace corporate social responsibility. Like, why not show a kick-ass interactive graffiti wall donated by Corning to some local Community Centre?
Also, why are the young girls private-school students? Is that a value judgement about an educational utopia? Does that mean Corning’s utopian vision would only be available to the upper class and to rich medical specialists like the Dad? That would leave something of a dystopian “plexiglass” reality for the lower classes, I guess… 😉 Definite technological determinism there, not to mention classism.
Video: “Productivity Future Vision” (Microsoft)
In Microsoft’s vision, paper seems to have disappeared, replaced by flexible touch-sensitive surfaces. That’s hard for me to accept: paper will remain cheaper than plastic for at least the next 10 years, and more ecologically friendly forever. I noticed that keyboards are still around in Microsoft’s future vision, at least in the office when one is preparing the annual report (or whatever that dude was doing).
Apparently, nobody at home or work is concerned about repetitive stress injuries from all those large arm motions required to swoosh images around on those massive interactive surfaces. How many overweight CEOs are going to throw their backs out trying to clear all the virtual files off their ginormous desk-walls?
This idea that all surfaces will be interactive and high-res is pure fantasy – a utopian vision and an obvious excuse to demo Microsoft’s Surface technology. It is technologically skewed towards the vendor-manufacturer’s wet dream of an ideal consumer family.
#edcmooc