The Data Revolution


This year’s theme for the Visitor Studies Conference–playing on the historic revolutions that took place in Boston, where this year’s conference was held–was “The Data Revolution.”  There were an impressive number of presentations and workshops that focused on data and a variety of approaches to data analysis.  As always, I’m happy to share out my conference notes, as well as link to my presentation (along with co-presenters Claire Quimby and Elee Wood) “How to Keep from Drowning in Data.”  Lastly, I hosted a dining discussion on the topic of “Cool New Tools, Tips, and Tricks for Evaluation.”

Below is an example from my notes: a scattergram that plots data from several exhibits by Sweep Rate Index (SRI) and Percentage of Diligent Visitors (%DV).

[Image: scattergram of SRI vs. %DV]
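For anyone curious how a plot like this comes together, here's a minimal sketch in Python with matplotlib. The exhibit names and values below are invented for illustration; SRI is commonly computed as an exhibition's square footage divided by visitors' average time spent (in minutes), and %DV as the share of visitors who stop at more than half of the exhibit elements.

```python
import matplotlib.pyplot as plt

# Hypothetical tracking-and-timing results for a few exhibits.
# Each entry: (SRI, %DV). SRI = square footage / average minutes spent;
# %DV = percentage of visitors stopping at more than half the elements.
exhibits = {
    "Exhibit A": (300, 45),
    "Exhibit B": (550, 30),
    "Exhibit C": (180, 62),
    "Exhibit D": (420, 38),
}

names = list(exhibits)
sri = [exhibits[n][0] for n in names]
pct_dv = [exhibits[n][1] for n in names]

fig, ax = plt.subplots()
ax.scatter(sri, pct_dv)
for name, x, y in zip(names, sri, pct_dv):
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(5, 5))

ax.set_xlabel("Sweep Rate Index (sq. ft. per average minute)")
ax.set_ylabel("% Diligent Visitors")
ax.set_title("Exhibit use: SRI vs. %DV")
plt.show()
```

In a plot like this, the exhibits toward the low-SRI, high-%DV corner are the ones visitors are using most thoroughly.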


Less is the New “More”

Big is out – and tiny is in!


If you haven’t been hiding under a rock recently, there’s a good chance that you’ve heard of the tiny house movement. This trend toward small and ultra-small houses has been gaining momentum worldwide as more and more people have been seeking simple and affordable housing options.

One of the underlying reasons is that people have embraced the notion that having more stuff doesn't necessarily make them happier.  Perhaps the same can be said of reports.

Over the course of my career as an evaluator, I’ve written many reports that have exceeded 100 pages – but rather than being the useful resources that we hope they will be for our clients, they often become glorified bookends. Maybe less is more where reports are concerned as well.

Can we go from this:

[Image: a hefty printed report]

to this:

[Image: a tiny book]

???

Okay, not literally tiny reports.  But seriously, I've been sensing a growing interest in, and demand for, tiny reports: reports that total fewer than 20 pages and take less than half an hour to read; reports that give readers a good sense of the key findings without inundating them with every detail of an extensive study; reports that are far more likely to be read, shared, and referenced than their lengthier counterparts.

I'm not sure that all clients and evaluation stakeholders are ready to embrace the notion that less is more, and that might be the first challenge: getting the primary audience to see how a shorter report might be worth a great deal more than a much longer, more comprehensive one.

An equally daunting challenge, once you've gained buy-in for the idea of producing a tiny report, is that it's not always easier to write less.  Presenting information in the most succinct, elegant way possible can actually be more time-consuming and mentally taxing than mindlessly spewing out every single finding. Despite these challenges, I'd argue that the likelihood a tiny report will actually be used should be the driving factor in advocating for, and adopting, shorter reports over longer, more burdensome formats. After all, a great report is only truly great if it gets read and used – no matter its size.

 

Updated 5/31/17:

I loved this related post by Kylie Hutchinson on the AEA 365 blog – "The Demise of the Lengthy Report." I especially appreciated the hamburger analogy and image – it serves as a great reminder that it's always important to check stakeholders' appetites where reporting is concerned.

 

It depends…

"It depends" is a phrase that most evaluators know well.  In the work that we do, context and participant characteristics often matter a great deal.  In this recent EdSurge article, Patricia Gomes explores concepts and caveats related to efficacy, as discussed by Barbara Means and Jeremy Roschelle, co-directors of SRI's Center for Technology in Learning:

https://www.edsurge.com/news/2016-03-23-five-things-about-efficacy-that-should-be-intuitive-but-are-not

Big takeaways:

  1. Tools don’t exist in isolation.  How they are used by educators matters a great deal. A great tool being implemented half-heartedly or incorrectly may not produce desired results.  Conversely, a poor tool being implemented in the right context by a skilled educator could produce results that wouldn’t be seen in other contexts.
  2. Educators often seek a one-size-fits-all approach, but like cars, maybe the best product varies based on the needs and preferences of different users.
  3. Measures matter too.  You've got to know what you are looking for and pick measures that can effectively identify desired outcomes.  (I would also add that it's good to incorporate methodology that allows you to uncover unintended outcomes – if we are too focused on very specific outcomes, we may miss other unanticipated outcomes, both positive and negative, that could also be important to understand.)
  4. Some outcomes may take time to emerge. Big and fundamental changes in learners take time.
  5. For all these reasons: “It depends” is a valid and accurate response for people who wonder if certain ed tech tools work.

Afterschool Evaluation – ideas and inspiration from NAA2016

I recently attended the National Afterschool Association’s Convention in Orlando.  My notes from the convention can be downloaded here: http://cl.ly/050v3s0G333x

I learned a great deal from each of the presentations that I attended, but these were the biggest takeaways from an evaluative perspective:

  1. How we frame and phrase our findings is important. There's been fantastic work done by the FrameWorks Institute (frameworksinstitute.org) on how to more proactively frame messages about the need for, and outcomes of, STEM programming – so as not to trigger counter-productive assumptions or biases.
  2. What we look at matters. One of the presenters stated that children who come from homes where they are getting ample love come to school ready to learn, while children whose families are constantly under stress or facing challenges come to school needing to be loved. The comment implies that the latter group can't effectively learn until that need is met.  As evaluators, if we are only looking at learning outcomes, we may be missing the very important things that are happening to help lay the groundwork for subsequent learning.
  3. Stakeholders are more likely to embrace evaluation if they see the value in the data being collected. Being asked to take time to collect survey data can be frustrating if program staff don’t see the immediate value in how that data can be used to benefit or improve their program.


Evaluation on the go

While traveling to Orlando recently, I stumbled across these two feedback stations at the airport – one in the intra-terminal tram waiting area and the other in the women's restroom.  As an evaluator, the thought that instantly jumped into my head (aside from "quick, snap a photo!") was how they planned to make sense of any data they gathered.

The feedback station in the bathroom made sense, but it occurred to me that feedback about the tram was likely to skew negative, simply because the people with the greatest opportunity to provide feedback would be those left waiting the longest. Conversely, if a tram was already at the station or just pulling in, the odds that someone would see and stop to respond to the question would presumably be much lower.  This was a great reminder that the context in which we ask evaluation questions can have a great bearing on the outcomes.
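To make that waiting-time bias concrete, here's a quick, purely hypothetical simulation in Python. Every number in it is invented: riders' waits are spread over the tram's cycle, the chance of answering the kiosk grows with time spent waiting, and satisfaction falls as the wait grows – so the kiosk's estimate lands well below true satisfaction.

```python
import random

random.seed(0)

CYCLE_MINUTES = 6  # invented tram cycle time

def simulate(n_riders=100_000):
    """Return (everyone's satisfaction, satisfaction of kiosk respondents)."""
    everyone, respondents = [], []
    for _ in range(n_riders):
        wait = random.uniform(0, CYCLE_MINUTES)
        satisfied = wait < 3  # happy only if the wait felt short (invented cutoff)
        everyone.append(satisfied)
        # Longer waits -> more opportunity (and motivation) to press a button.
        if random.random() < wait / CYCLE_MINUTES:
            respondents.append(satisfied)
    return everyone, respondents

everyone, respondents = simulate()
print(f"True satisfaction:    {sum(everyone) / len(everyone):.0%}")        # ~50%
print(f"Kiosk-based estimate: {sum(respondents) / len(respondents):.0%}")  # ~25%
```

Under these made-up assumptions, half of all riders are satisfied, but only about a quarter of kiosk respondents are – exactly the negative skew described above.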

 

Celebrating Julia’s Birthday

We took advantage of Julia's newfound interest in turtles (in addition to her long-held affinity for penguins) to go with a turtle-themed decorative motif, but you'll see in the background that the penguin who joined her for her birthday celebration last year also got in the spirit with a party hat.  Happy Birthday, Julia – hope it was a great day and a great weekend (despite a rainy start)!