Colin's Sandbox

Differentiating the Process at ASTE 2013

Posted on Feb. 27, 2013, under #diffimooc, Digital Storytelling

ASTE 2013
I _just_ finished up with the ASTE conference and, although I gained a lot from it, I'm a bit fried. I focused mainly on tools, products, and methodologies that deal with Augmented Reality, gamification of education, Minecraft (and its custom offshoot, MinecraftEdu), and sharing stories with all stripes of educators. It was the latter that I think had the most lasting impact: conversations lend value to those feebly formed ideas running around upstairs and suggest future project ideas.

Although the concept of “differentiation of process” showed up throughout my experiences at ASTE, I’ll limit myself to a few items that really hit home with me. I’m very excited about potentially using MinecraftEdu as a portion of our group’s simulation project, but I won’t get into it much in this post because I know I’ll need more space and time to do it justice. I will say that the two Minecraft sessions I attended were packed. If you were a presenter and needed to fill some seats, you could have just slapped “Minecraft” into your presentation’s title and you’d be fighting for air.

Spanish Language Video Projects in Haines
I had a brief lunchtime conversation with a teacher out of Haines, AK, and the talk steered toward using cameras in their Spanish language classes. I searched briefly around YouTube for the videos, but either my GoogleFu is weak or they’re unlisted, so I apologize; the suspense is killing me. I’ll update this post if I can dig up a URL. My understanding is that, as part of their curriculum, students are tasked with creating short video projects that act out some scene. The example I remember hearing about was a traveler at a restaurant who was ostensibly ordering food but really just ended up flirting with the mesero (¿o mesera? No sé). The point is that this design brings some neat elements into play that grab me:

  • Some sort of authentic story. Simply sitting down and ordering food isn’t that compelling. I don’t remember many details about ordering that pizza 6 hours ago. I do, however, remember a very similar scene from 7 years or so ago, when I was traveling through San Miguel de Allende in Mexico.
  • Real world transferability. If you travel south of the border, sooner or later you’re going to have to eat.
  • Potential for publishing via a video sharing service such as YouTube. To me this is a step in the right direction. I think the easiest form of your ePortfolio is essentially a collection of published work that you link to in some curated fashion as you go along. Since the work is for public consumption, the hope is that there is a greater incentive for spit and polish.
  • Similarity to workflows that students already understand. Many students already publish to YouTube. I would imagine most uploads are short, unrehearsed clips, but some students in your classroom may have gone through a more involved process of writing a script, laying out the shot sequence, shooting, editing, revising, and publishing.
  • Opportunities for collaboration with others.
  • Recorded audio that can be played back so students can get feedback on pronunciation, grammar, the whole bit.
  • I don’t know whether the finished videos used subtitles / captions, but the opportunity is ripe for a subtitle overlay that can be turned on or off, and for popping out to other resources (see the caption sketch just after this list).
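
For what it’s worth, captions don’t have to be burned into the footage: YouTube accepts a separate subtitle file (SRT, for example) that viewers can toggle on or off. Here’s a minimal sketch of a script that writes one; the file name and the restaurant dialogue are made up for illustration:

```python
# caption_sketch.py -- write an SRT caption file that can be uploaded to
# YouTube alongside the finished video, so the subtitle overlay stays
# optional instead of being baked into the picture.
# (Hypothetical file name and dialogue; timestamps are hh:mm:ss,mmm.)

cues = [
    ("00:00:01,000", "00:00:04,000", "¿Qué le puedo traer?"),
    ("00:00:04,500", "00:00:08,000", "Una limonada, por favor... ¿y qué me recomienda usted?"),
]

with open("restaurante.srt", "w", encoding="utf-8") as f:
    for i, (start, end, text) in enumerate(cues, start=1):
        f.write(f"{i}\n{start} --> {end}\n{text}\n\n")
```

Because the captions live in their own file, swapping in an English track (or a cleaned-up Spanish one after feedback) doesn’t touch the video itself.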

Knowing the exact set of tools used to create the videos isn’t that important to me; in my opinion it’s a matter left up to the reader, depending on what they’ve got in house. What I’m learning from this and from Dr. Ohler’s Digital Storytelling class is that it really doesn’t take much in terms of hardware to get the job done. Much more important is to focus on story and engage those senses.

Augmented Reality (AR)
Augmented Reality, in a nutshell, is using technology (off the shelf: iOS / Android mobile devices; coming up soon: Google Glass) to alter the signal coming into your senses (typically audio / visual). Two main forms are in common use today and were demonstrated in sessions throughout ASTE: positional and target-based. The positional method involves using your GPS-enabled device to change its display (or, perhaps somewhat annoyingly, to play a sound) based on your location and orientation. Check out the AR Planet Walk for an example of this. The target-based variety was demonstrated in a few different ways as well: two different galleries on display in the Hotel Captain Cook let the viewer see different content by pointing their device’s camera at an image. Photographic stills, links to websites, and videos were all demonstrated in this fashion. Each station had a link to a Google Form that the viewer could use to answer a quiz. The whole show was better viewed on a larger tablet device, in my opinion. Here’s an example video from the Captain Cook Augmented Reality Experience:
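
Setting the galleries aside for a second, the positional flavor is pretty easy to sketch out: at its core the app just measures the distance from the device’s GPS fix to each point of interest and decides which ones to surface. Here’s a rough Python sketch of that check; the station coordinates and the 75 m trigger radius are invented, not taken from the Planet Walk:

```python
# positional_ar_sketch.py -- the core test a positional AR app runs:
# given the device's GPS fix, which points of interest (POIs) are close
# enough to draw on screen (or to trigger a sound)?
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two lat/lon points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

# Made-up stations, for illustration only.
pois = [
    {"title": "Sun station", "lat": 61.2176, "lon": -149.8997},
    {"title": "Neptune station", "lat": 61.2031, "lon": -149.8866},
]

device_lat, device_lon = 61.2175, -149.9000  # pretend GPS fix

for poi in pois:
    d = distance_m(device_lat, device_lon, poi["lat"], poi["lon"])
    if d < 75:  # arbitrary "you're close enough" radius
        print(f"Show overlay for {poi['title']} ({d:.0f} m away)")
```

The real apps layer orientation (which way the camera is pointing) on top of this, but proximity is where it starts.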

One of the sessions, presented by Christen Bouffard, demonstrated a process that used WordPress combined with a free plugin as the publication platform, feeding points of interest (POIs) into a platform called Layar to actually *do* the AR. In that vein, I’ve been kicking around the idea of doing some sort of positional Augmented Reality project built around a nature walk for a while, but I haven’t put in the legwork to make it happen. In my vision you’d crowdsource the image curation and POI submission from your students, armed with some sort of handheld device. At one point during the presentation a big light bulb went off and I asked something to the effect of: “So wait – students could have accounts through WordPress and submit their own POIs, and those submissions can be moderated?”, to which I now think: “Well, duh – the title of the session has ‘collaborate’ in it!” To her credit, she was incredibly patient during the entire session as we all fumbled through in various ways. I left the room feeling like the way was clear to start on a demonstration sometime soon. I also appreciated the extra helping hands around the room from the University of Alaska Fairbanks.
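
I haven’t built any of this myself, so take what follows as a guess at the shape of the data rather than Layar’s actual API: whatever the WordPress plugin does, it ultimately has to expose the moderated, student-submitted POIs (title, description, latitude, longitude) as a small web feed the AR browser can query. A bare-bones stand-in might look like this; the field names, port, and sample POI are all made up:

```python
# poi_feed_sketch.py -- a stand-in for the kind of endpoint a WordPress
# plugin would provide: serve approved, student-submitted POIs as JSON.
# Field names here are illustrative, not Layar's real schema.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Pretend these came out of the WordPress database after moderation.
APPROVED_POIS = [
    {"id": 1, "title": "Beaver lodge", "description": "Submitted by period 3",
     "lat": 64.8561, "lon": -147.8333},
]

class POIFeed(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"hotspots": APPROVED_POIS}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), POIFeed).serve_forever()
```

The moderation piece is the part that excites me: students submit, the teacher approves, and only then does a POI show up in the feed.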

So that’s great! But how can these examples be used to differentiate the process?
My take on differentiating the process is providing alternate media types, pitched at potentially different levels, so that learners can choose the style and starting point that best suit their needs and skill level. Put another way: videos, textual information, and audio all work together to meet the learner where they’re at.

The video example I gave was more along the lines of differentiating the output, but you can imagine how YouTube videos, with easy-to-add, linkable captions, can effectively supplement more traditional text-based content. It’s also a technology whose concepts most people understand pretty well, even if they don’t know the specific mechanics of shooting, editing, and publishing. Mobile devices with cameras are streamlining the process and can make quick work of creating and publishing short videos.

Augmented Reality is more difficult to wrap your head around at first, but the target-based CCARE exhibit did a good job of incorporating textual, video, and audio elements into the various stations, essentially coming at you from all sides. It’s a little odd at first, and sometimes the content was slow to come up, but that could be because of the load on the hotel’s Internet connection.

