A Few Scenarios for Practicing UX

I’ve had some friends ask how they might go about getting a UX design job. The short answer is: make things and develop a portfolio. The longer answer involves cultivating a mindset and mastering process. But sometimes it’s hard just to think of things to make. Design is not art – it can’t just exist for the sake of existing. It must solve a problem. Let’s dream up some problems to solve.

1. A library wants to engage with kids as they first walk into the children’s book department. They plan to install a series of large touch screens. What would the goals of the engagement be? How can the library experience be enhanced for kids ages 5-10 through technology? Make assumptions, develop personas, interview questions, user story map, wireframes, comps, prototype.

2. An insurance startup is developing a mobile app that allows users to quickly log incident details, take photos, and capture all relevant information. Assume ‘green pastures’ for all technology needs and fabricate incidents to illustrate usage. What factors will be considered in establishing a user flow and interface? How can visual identity not only make a brand impression but also facilitate user actions? Develop personas, interview questions, user story map, wireframes, comps, prototype.

3. A pet store / museum / aquarium is developing a tablet experience for kids / adults / customers to buy / learn about the live creatures on display. Choose a set of parameters and make reasonable assumptions. What are the viewer’s goals as they are using the tablet? What are they trying to achieve, and what does the store / museum want them to care about? How does the content (live animals) change the way you present information? Personas, wireframes, you know the drill.

4. Happy John’s Burgers is a revolutionary, human-less food experience. Each table is a 6-10 person food preparation center, and users wield a custom joystick to select their preferences. HJB needs a menu experience that incorporates gestures and facial / voice recognition, and brings custom precision to the ordering and paying experience. How does the software inform users what options are available? How does the human-less environment change the way the software is designed? What frustrations would users be likely to experience, and how might you provide solutions in advance? How can look and feel impact the customer experience?

5. You are in the year 2350 in a crowded, heavily polluted city, and most people sleep in sleek glass climate-controlled bed-pods. When lying down, a screen appears above you. How can these futuristic bed-pod screens provide a useful, comforting experience that users will spend many hours at a time in? Using voice and gesture controls, users can control their environments. How else can these pods provide utility to users? Perhaps they also serve as an entertainment center, or as an external screen for the user’s personal devices. Wireframe this experience, low fidelity. Find billionaire, pitch, repeat.

Other useful activities for developing a UX mindset:

  • Organize the hell out of your grocery list. How are products dispersed through the store, what items are grouped together, and why?
  • Walk into the entrances of 10 stores in a mall, pause, and inspect. What user actions are visible from your standpoint? How much of the user journey is evident from that point – is at least the next step indicated, or some directionality given? How is this achieved? Apply this to supermarkets, amusement parks, hospitals.
  • Screenshot and tag examples of design precedent everywhere you see it. Icon styles, dropdown/flyout styles, interesting bits of layout or typography, compelling uses of color. While UI design often requires a good amount of originality, it’s unwise to reinvent wheels needlessly – rather, it’s critical to use existing norms and conventions enough so that users can immediately recognize elements and understand what’s going on.
  • For all of the scenarios presented above, revisit them through the lens of accommodating users with disabilities. Decent article about that here.
  • Each of these projects should take you two weeks or more. Rushing a process won’t make you learn faster. If you need more work, work on two projects. Separate your work into phases, and keep yourself constrained. Get a Gantt chart going if it helps. If you run out of work to do in a phase, start a new sketch document and try to wireframe Wikipedia’s home page without looking at it. What do you think the designers at Wikipedia chose for the home page to present? What do they want people to use it for, and what would users want to use it for? What assets does Wikipedia have to display? What does Wikipedia want from its users, and how do they obtain it?

Hope this helps someone!

SCI-FI FUTURETECH & DESIGN

I love science fiction. Literature, film, imagery, etc. For me, sci-fi is an amazing playground where the concerns of today can be extrapolated and imagined into fantastical realities, technologies, and moralities. It excels at the parable. Lessons that could be divined by dissecting complicated webs of existing real-world circumstances can be neatly reduced and made entertaining through the creation of alternative realities. As Sophia Brueckner of MIT writes, “Reading science fiction is like an ethics class for inventors, and engineers and designers should be trying to think like science fiction authors when they approach their own work.” Instead of being bound by business models and the constraints of modern day science, sci-fi writers simply imagine: What’s missing? What would be really awesome and useful if it were to exist in this situation? And then, sci-fi designers take that idea and make it look like something. As a designer and entrepreneur, I am always fascinated to see how exactly creative teams decide to portray future technologies.

I plan to return to this subject somewhat regularly going forward, so I won’t try to cover the whole sci-fi continuum in one post (ha). Today I’d like to talk about one example: the CBS TV show Person of Interest, 2011-present, starring Jim Caviezel and Michael Emerson.

The premise here is that after 9/11, the government commissioned a nerd genius to build “the Machine”, a machine intelligence that watches and listens to the world from every camera, everywhere, 24/7. The government’s stated goal for the operation was to identify terrorist threats before they materialize, but the Machine can’t just identify the terrorists; it surveils everyone, and then creates a list of “relevant” (terrorist) threats, and another list of “irrelevant” (everything else) threats. Based on phone calls, voice tones, behavior patterns, and a sprinkle of sci-fi magic, the Machine spits out the social security number of the person or people who will somehow be involved in a violent crime. It’s very much along the lines of Tom Cruise’s Minority Report (2002) (TV adaptation coming in 2015!) except there are 90 episodes (and more to come), which allows for a pretty deep investigation of the ramifications of the existence of such a technology.

The main reason this show’s representation of technology interests me comes straight from Wikipedia: “The series is from the point of view of The Machine, with flashbacks framed as The Machine reviews past tapes in real time. Over the course of the series, the internal workings of The Machine are shown, including the prediction models and probability trees it uses. In the Machine-generated perspective, individuals are marked by dashed boxes with different colors indicating, for example, what the person’s status is in relation to The Machine and whether they pose a threat.”

I am no surveillance expert, nor do I have particularly radical views regarding the surveillance state. Part of me believes that it’s pretty much inevitable, so why fight the tide – but that’s just negative Nancy blowing air in the back of the room. Instead of tackling head-on all of the interesting commentary on the status and direction of surveillance in the USA today, I want to focus on how it looks in the TV show. What is the film editor / animator / creative team trying to tell us? What do the visual representations of this “Machine” communicate to the audience? How is design used to complement the drama on screen, and how does it contribute to the messages the show is trying to get across?

Let’s start off by giving you a taste of what we’re talking about. Here’s the show’s introduction explainer. Here’s a scene featuring Bear, a gorgeous trained Belgian Malinois that co-stars alongside the humans. And here are a bunch of screenshots of movietech from various episodes of Person of Interest:

Seeing through the eyes of the machine, everything becomes possible. All can be seen, all can be known. One character regularly refers to the machine as “God” and uses female pronouns as if it’s a live entity with thoughts and emotions: “The truth is, God is 11 years old, that she was born on New Year’s Day, 2002, in Manhattan”. This anthropomorphization may or may not be an accurate representation of how a real-life surveilling AI will behave, but this is TV; the goal here is not accuracy, but rather to thrill and entertain.

However, despite being told from the Machine’s POV, the show still very much abides by the classic Hollywood storyline structure. In the terms of Walter Ong, “Cyberfilms are about electronic thinking but are couched in exclusively literary forms” in order to be close enough to the human “lifeworld” that we can understand and relate to it. Put another way, if an AI decided to make a TV show without considering its audience, I doubt it would cater to the slow and linear mind of the average human. But this is human-made TV; the goal here is not accuracy, but rather to thrill and entertain a human audience.

Let’s take one of those “zoomhance” scenes, where the protagonist takes a tiny bit of investigative data and manages to discern a deeper truth through techy gizmo magic. From a writer for the popular TV show CSI, posting on Reddit: “We write those scenes to be inaccurate and ridiculous on purpose. I’m a young writer in his mid-30’s, computer and game savvy. Lots of us are. I guess you could call it a competition of one-upping other shows to see who can get the best/worst “zoomhance” sequence on the air. Sometimes the exec producers and directors are in on it, and other times we just try to get bits and lines into scripts. 90% of our TV viewing audience will never know the difference.”

In Person of Interest, almost every scene involving a screen – computer, phone, surveillance footage – has one thing in common: one big, bold, flashing red-and-white announcement of what is happening in the script. *VIRUS UPLOADED* *HACKING BANK NETWORK* *CREDIT CARD INFORMATION ATTAINED* *VIOLENCE IMMINENT* *FORCE PAIR COMPLETE* etc. With utter disregard for the user experience of actually using the software in question, movietech designs for the big screen, which is hilarious but generally necessary. Rarely do we see interfaces that are familiar to the average tech user. Instead of the classic white-and-colorful Google landing page, we get grey-toned terminals and scary-looking level monitors like those found in Hydra technology from the Marvel Universe. As if to say: kids, don’t try this at home, because you couldn’t if you tried.

The point I’m trying to make is simply that technology in films, even when the “narrator” is a machine, serves primarily to reinforce and decorate whatever drama is happening on screen. The animations don’t need to be accurate representations of how an artificial intelligence would “think”; they just need to evoke awe and technological omnipotence. The shock and alarm the viewer feels when the Machine effortlessly invades every home, every Bluetooth coffeemaker, and every smartphone to listen in on your late-night pillow talk — it’s all part of the show’s hook, and it’s reinforced by the graphics that all scream “I AM EVERYWHERE”.

Basically, all the editing and scrambled voice recordings and silly typewriter fonts used throughout the show are meant to never let the viewer forget exactly how ubiquitous the Machine — and today’s technology — is. It’s reinforcing an alarmist message about big data, who owns it, and how they will use it. In a world where Google knows more than the NSA and we’re OK with that, Person of Interest asks, What if small, private organizations take this power into their own hands? What if it’s not a non-interventionist AI machine that is watching our every move, but rather a small group of fallible humans?

The goal of the production design here is just to shove it all in our face, over and over again. It’s relentless and thrilling and looks cool and complicated. I’d love to hear the production designer break down how they achieve each effect. How much is proprietary? How much is the real deal, and how much is post-production CGI? They do such a good job that frankly it’s hard to tell.

Someday I’ll write about Mr. Robot and Black Mirror, both of which are fantastic shows that push the sci-fi envelope in many directions.

Until then, I, for one, may or may not welcome our machine overlords.