Archive for August, 2013

Observations on (final) Week 7 of Usability II

August 18, 2013

This week’s assignment for Usability II was a culmination of things we’ve been working on for the last two weeks or more.

Our final eyetracking report was due, based on viewing five separate eyetracking sessions as well as heatmaps and gaze plots. This part of the assignment truly brought home to me the almost Jedi-level competence required to thoroughly and accurately decode eyetracking, heatmap, and gaze plot data. It reminds me very much of what I’ve seen of satellite photo image analysis. They call it analysis, but it seems much more like interpretation to me, with the possibility of huge margins of error (is it a hospital or is it a biological weapons manufacturing plant?).

I felt the same way looking at eyetracking, heatmaps, and especially gaze plots. It’s so voodoo to me that I feel I have no business even trying to comment on what I see; I think it takes years of training to get even close to being accurate at this type of interpretation/analysis.

But, despite all this, I’ve reaffirmed, again and again, that this is the field for me. Even when it’s hard and I don’t feel competent to make a good call, I still enjoy the effort, the readings, and what I’m learning.

I have high standards of accomplishment. I’ve gone through law school and passed the bar exam, and I feel the same sense of intellectual and academic challenge in this class that I felt in law school, with the added (finally!) sense of enjoyment in what I do. It doesn’t get much better when it comes to education.

All that glitters

August 11, 2013

I just re-watched Rain Man and realized that a lot of that movie can be viewed as a study in eyetracking. Eyetracking in scenes intended to be real life as opposed to a website, but eyetracking nonetheless. Many of the scenes are montages of what Dustin Hoffman’s character’s eye is attracted to. Many of the images are fascinating for their inhuman, mechanical repetition, speed, and play of light. But this is not the main story, and it takes director Barry Levinson’s focus to include the eyetracking while also building it coherently into the story.

I like what I’ve learned about eyetracking so far. First, it seems it has to be viewed as a real art form to be done well. Ever since I read Daniel Pink’s A Whole New Mind, I’ve been on the lookout for things that can’t be learned by rote, are difficult to automate, and require qualitative rather than quantitative analysis. Quantitative analysis still might be a part of eyetracking, but I like how Aaron reminded us in the intro video for this week that statistically valid quantitative results aren’t necessary to learn a lot about a site when using eyetracking.

The biggest problem I see with eyetracking, heatmaps, gaze plots, etc. is the cost. It’s expensive to get the equipment, and even more expensive, in terms of additional salary, to find people who know enough of the “art form” side of the studies to make good use of the technology (assuming you’ve even got someone doing UX hiring who knows what questions to ask in order to find, in Pink’s terms, a whole new mind, or at least a right-brained UX person).

Otherwise, I think you end up with a lot of participants who, while in an eyetracking study (I know I’m like this), act like Rain Man in the Las Vegas casino scene. Everything is very sparkly, definitely very twinkly. And without the skills to know whether the sparkly and the shiny are important to the overall story or not, you may end up with results that go nowhere, constantly trying to figure out who’s on first.

Gestural interfaces for mobile

August 4, 2013

I like the exposure we’re getting to the usability of mobile devices. I mentioned in a previous post that I didn’t know much about designing for mobile, and now I feel that, although I’m by no means an expert, I have a better handle on the topic, some of the problems mobile can cause, and some design methods to deal with them.

It surprised me that, with the current popularity of using mobile devices to access the web, there isn’t already good usability and gesture-capture software. Someone will make a lot of money if they solve that problem well.

For now, it does seem that gestures are deeply embedded in the use of mobile devices. Still, it seems like the same problem I mentioned in last week’s post: the devices are almost all very limited in how much they allow an (average) 10mm fingertip to do.

We had a speaker a while back at Intuit whose team had started building an app that was literally based on the one from Minority Report. The funny thing was that, while trying to use paper prototypes on a whiteboard to simulate Tom Cruise’s gestural movements, everyone’s shoulders got way too tired, way too quickly, and they had to scrap the idea, luckily before they had put too much time and money into it. I guess even with “simple” gestures, you sometimes have to go too far before you know where the limits are.