Fluffy or Fierce?

This past week I attended a presentation by Bryan Orlander about Organizational Assessment, hosted by the Indiana Evaluation Association. Bryan had lots of practical wisdom to impart during his talk, but one point he raised really hit home for me: some organizations view evaluation as a friendly and helpful activity to be fully embraced, whereas others view it with great fear and are therefore more reserved toward, or even resistant to, evaluative initiatives.

One of the challenges we sometimes face as evaluators is working with clients who don’t yet embrace evaluation as a means to foster improvement, seeing it instead as a mere proclamation of a program’s successes and failures. While it is a pleasure to work with clients who fully understand and value evaluation from the start, it has also been one of the highlights of my career as an evaluator to help others come to see the benefits of evaluation…and to subsequently help their organizations move from evaluation-loathing to evaluation-loving! I think all evaluators should ask themselves: Do your clients see evaluation as a fierce enemy to be avoided or a fluffy friend to be embraced?


Sew what?

[Image: auto-generated Pinterest cross stitch graphic]

Nothing like a little cross stitch to get the creative juices flowing! So first, a quick note about the image above. I accidentally hit the Pinterest button at the top of my browser window while I was reading my email this morning. Because there were apparently no other images in the window that could be pinned, I got a generic cross stitch image with my email address and company name. It was cute, so I thought I’d share it – but then, since I do quite a bit of cross stitching myself, I got to thinking that there are more than a few similarities between cross stitch and evaluation.

1. It helps to lay some groundwork for success upfront. You’ve got to count squares on your canvas before you start sewing to make sure you aren’t going to run out of space. Eyeballing it gets you into trouble. I’ve learned this the hard way on a few cross stitch projects where I was trying to use up scraps of fabric and was certain I’d have enough room to fit everything in with an even border on all sides – and the finished product ended up a bit off-center. The same goes for evaluation: eyeballing a project can get you into trouble. It’s worth taking the time to get to know a project – the key players, the goals, and all the intended activities. During the planning stages or early on in a project, a logic model is a great way to make sure all the pieces are going to fit together in a purposeful fashion.

2. There are many ways to stitch. Over the years I’ve tried a lot of different methods: individual x’s; completing a whole row of lower-left-to-upper-right stitches (////) before going back over the same row with stitches in the opposite direction (i.e., \\\\); adding long stitches across multiple rows and then going back to add the other side of each individual square’s x (though I think this is cheating). When all’s said and done, the finished product often looks the same from the front, but it’s a whole different story from the back. I think there are three evaluative lessons to be learned from this example.

A) Sometimes it matters how the back side looks. With some evaluation projects, clients are looking for quick answers to questions, and it doesn’t much matter to them how the questions get answered so long as they get accurate feedback. In other cases, clients want more transparency and may appreciate opportunities to chime in about the process the evaluation team takes in seeking answers to the questions set forth for the project. Sometimes the “how” of it matters just as much to a client as the finished product, and that’s okay.

B) Different methods may be better in different contexts. While I don’t have scientific proof of this claim, I would assert that some modes of stitching are quicker in certain contexts (e.g., big chunks of color vs. patterns that are more diffuse or complex). As evaluators, it’s important to consider the methods we are using and their fit with the specific data-collection context. Sometimes we need to think strategically about which evaluative method is most appropriate given a wide range of factors or constraints (e.g., time, cost, privacy issues, literacy or language barriers, cultural differences among participants, institutional barriers, and more). One of the important skills that we develop as evaluators is being able to identify and implement the methods that are best-suited to each individual situation.

C) Don’t get in a rut. I often change things up in the middle of a sewing project just to break up the monotony of stitching the same way over and over again. At the outset of an evaluation project, there is definitely value in thinking outside of the box (i.e., beyond the same set of evaluative methods that we rely on day in and day out), but there may also be value in adding some variety within an evaluation project to keep things fresh and to help ensure that we are seeing things from different viewpoints.

3. It’s okay not to be perfect. I’d be the first to admit that my cross stitch projects are rarely perfect. I miscount rows or misplace x’s all the time. Rarely is a mistake so major that I have to unstitch things to make a correction, so most of the time I just roll with it and make modifications to the pattern as necessary. Ultimately, I’ve come to think of those minor diversions from the pattern as my special fingerprint on a project. It’s a lot harder to admit that my evaluation projects are rarely perfect, but after a considerable amount of reflection, I think it’s fair to say that there is no such thing as a perfect project nor a perfect evaluation. Rarely does a five-year project adhere perfectly to every single detail of the plan set forth in the proposal – so the evaluation has to pivot and evolve. Even with great instruments, the resulting data can be messy. The true sign of a good evaluation, not unlike cross stitch, is the quality of the final product.

As evaluators, we are often charged with asking the question “so what?” – but next time (if you are so inclined), think of this post and ask “sew what?” You might just find some creative inspiration to make your evaluation effort even more awesome.

Below: a few of my recent cross stitch projects

Tips for Play-Testing

The following article by Anna Jordan-Douglass – one of the great folks at The Jim Henson Company, for whom I’ve had a chance to do play-testing over the past few years – shares a wealth of great tips for ensuring games are fun and functional by testing them throughout the development process.

The Test’s the Thing: Making Sure Your Product is Fun and Playable (in Games + Learning)

http://www.gamesandlearning.org/2014/04/28/the-tests-the-thing-making-sure-your-product-is-fun-and-playable/

Among Anna’s tips: Keep it simple – play-testing sessions don’t have to be too long, or too formal, or include too many participants; you can learn important things just by letting a few kids spend a few minutes playing a game somewhat informally. Sometimes it’s not what kids say, but rather what they do, that tells you what they really think about a game – if they don’t want to quit playing, it’s a good sign that you are on the right path to a fun game that other kids will enjoy as well.

As I’ve been cleaning out some files and reviewing reports for the “Where Fun and Learning Clicks!” project that I worked on in the earlier part of the past decade, I’ve been reflecting on some of the other things I’ve learned over the course of a decade’s worth of play-testing apps and websites with youth. Where Fun and Learning Clicks! was a program led by the Corporation for Public Broadcasting to fund the development and test the impact of five websites geared toward tweens. I hope to share a more formal recap of my efforts on that project at a later point, but in the meantime, here are a few play-testing tips that I would throw into the mix:

1) Not every kid will like every game – and that’s okay! There are many different types of successful game genres for adults, and there’s a reason for that – not all adults like playing the same kinds of games, and the same goes for children. Some games might have greater general appeal, but it is reasonable to assume that there will be some variation in how much any game appeals to different children.

2) Never underestimate a child’s ability to innovate. Kids are incredible innovators when it comes to games. Given time and free rein to do so, they can take an ordinary game to the next level by creating new ways of playing or competing with other players. Given two weeks of free play time, I once saw a group of kids transform a relatively simple flip-book-style game, in which different outfits could be created, into an awesome variation on a collection game in which they competitively compared the collections of outfits they’d amassed.

3) To get good data, it’s important to make sure that a child is at ease and feels comfortable in the environment where the testing is done. We’ve had great success doing testing in homes and, when home testing is not an option, typically prefer to run sessions in community centers or libraries that children are familiar with. When the logistics of testing in multiple sites, or setting up sessions in a remote site, are too tricky to work out, we do everything we can to make our play-testing room as kid-friendly as possible – a low table, bright posters, and a few strategically placed stuffed animals help to make the room more comfortable and inviting to young children.

4) What kids do is sometimes more important than what they say. Sometimes a kid will tell you that he loves a game, yet every other minute he’s looking or asking for other games he can play. Another child may not be able to say much about what she likes about a game, but the fact that she doesn’t want to stop playing (as Anna mentions in her article) actually says quite a bit in and of itself. A great deal can also be gleaned by watching for body-language cues during play-testing sessions – e.g., signs of determination, distraction, or joy.

5) Because there’s so much value in being able to see what kids are doing, recorded video can be a super-valuable tool for facilitating more extensive review and analysis of play-testing sessions and/or helping clients see the strengths and weaknesses of their games. When testing computer games, over-the-shoulder video (especially coupled with screen capture plus webcam video) is great – but things get a little trickier when kids are testing games or apps on handheld devices. When taping play-testing sessions on handheld devices, we’ve found that recording with a handheld iPod or digital video camera is a simple way to keep all the action in frame, even as a child moves around.

The Dino Does It! Advantages of Creative Survey Incentives

There is a certain art to figuring out the right incentive to drive participation for any kind of evaluation activity. Do you offer something small to everyone who participates, or a chance to win one (or a small number) of more valuable incentives? Offer too much and you might blow your budget; offer too little and you may not get enough participants. Further complicating matters is the fact that no pool of potential focus group participants or survey respondents is identical – what works well for a group of high-school students in California may not cut the mustard with a group of informal science educators in St. Louis. Not surprisingly, the true value of any incentive is in the eye of the beholder – and with that in mind, I wanted to share a story about a very successful survey campaign that incentivized adult respondents with merely a chance to win a toy dinosaur.
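
To make the everyone-gets-something vs. prize-drawing trade-off a bit more concrete, here’s a minimal back-of-the-envelope sketch in Python. Every number in it (the gift card value, the prize cost, the response counts) is a made-up assumption for illustration – not a figure from any actual study.

```python
# Back-of-the-envelope comparison of two common incentive schemes.
# Every number below is a hypothetical assumption for illustration.

# Scheme A: a small guaranteed incentive for every respondent.
gift_card_value = 5.00   # assumed cost of the per-person incentive
responses_a = 400        # assumed number of completed surveys
cost_a = gift_card_value * responses_a

# Scheme B: one higher-value prize awarded by random drawing.
prize_cost = 300.00      # assumed cost of the single prize
responses_b = 600        # assumed draw of a lottery-style incentive

print(f"Scheme A: ${cost_a:,.2f} total, ${cost_a / responses_a:.2f} per response")
print(f"Scheme B: ${prize_cost:,.2f} total, ${prize_cost / responses_b:.2f} per response")
```

Under these assumptions, the drawing costs pennies per response while the guaranteed incentive costs dollars – but as the story below suggests, the perceived value of the prize can matter at least as much as the math.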

As part of our evaluation effort for Radiolab <http://www.radiolab.org/> (the popular science podcast and radio show produced by WNYC), we created a survey to gather feedback from people who attended their live show, Radiolab Live: Apocalyptical, in venues throughout the United States. For lack of better and more scientific terms, the audience for Radiolab and its live show tends to be young, hip, and very creative – they have done everything from writing songs, making videos, and creating new logos, to helping name our mammalian ancestor. So, instead of offering a chance to win an iPod loaded with Radiolab podcasts – which, as a high-value prize, presented certain legal hurdles – we ultimately decided to scrap the idea of a prize with high monetary value and return to the drawing board with the added challenge of finding an original incentive that had no real monetary value, but would still be seen as valuable by potential respondents.

It was Molly Webster, a member of Radiolab’s production team, who stumbled upon the genius idea of offering a toy dinosaur, signed by Jad Abumrad and Robert Krulwich, the show’s two hosts, as the incentive. In her own words:

I have one sitting on the shelf to the right of my desk, staring at some fossils I collected in the field while reporting the story, so when I dropped my head to the side in a sign of frustrated desperation, trying to think of what we could give away, my eyes settled on it. And ahoy! It became our prize.

What made the dino such a great incentive? First off, dinosaurs figured prominently in the Apocalyptical program, so it was a prize that was likely to resonate with individuals who’d seen the show. Secondly, the dinosaur had a very minimal cash value, meeting the low-value preference. Thirdly, since it was small and unbreakable, it would be easy to ship to a winner anywhere in the US. Finally – and most importantly – I suspect it was the show hosts’ signatures that elevated it from a cool, albeit quirky, prize to something that Radiolab fans really wanted to win.

As an evaluator, I’m usually thrilled when I get a couple hundred survey responses for a program of this nature – so I was truly amazed when we got more than 6,000 survey responses! Did the dino make a difference? Obviously we can’t know whether the dino was a more popular incentive choice than an iPod would have been, but based on past experiences where we’ve offered a more conventional incentive and gotten far fewer responses, I feel that the creativity and uniqueness of the incentive may have been an advantage in this instance. It also didn’t hurt that Radiolab listeners – i.e., the folks who made up the majority of the Apocalyptical audience – are really passionate about the show and likely valued the opportunity to provide feedback to help make something they love even better. In the end, 81% of respondents opted in for a chance to win the dino – and one lucky winner found a plastic dinosaur signed by Jad and Robert in her mailbox.
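
For what it’s worth, 81% of 6,000-plus responses works out to roughly 4,900 entrants in the drawing. For anyone curious how such a drawing might actually be run, here is a minimal sketch in Python; the file name and the “email”/“opted_in” column names are hypothetical stand-ins, since the real export format will depend on your survey platform.

```python
import csv
import random

# Minimal sketch of a fair prize drawing among opted-in respondents.
# "survey_responses.csv" and its "email"/"opted_in" columns are
# hypothetical; adjust them to match your survey platform's export.
with open("survey_responses.csv", newline="") as f:
    entrants = [row for row in csv.DictReader(f) if row["opted_in"] == "yes"]

# SystemRandom draws from the OS entropy source, so the result
# can't be reproduced or predicted by seeding the generator.
winner = random.SystemRandom().choice(entrants)
print(f"Winner ({len(entrants)} entrants): {winner['email']}")
```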

My take-aways from this experience: 1) Don’t be afraid to think outside the box where incentives are concerned. 2) Keep your audience in mind and try to come up with incentives that they will perceive to be valuable – even if there’s little or no monetary value. 3) Involve clients in the incentive-brainstorming process – they know their programs and audiences better than anyone and might have a brilliant incentive idea that could drive thousands of responses for your next survey.