Shareable Measures in ISE

This past week, CAISE (the Center for Advancement in Informal Science Education) hosted an online conversation on “Shareable Measures in ISE Evaluation” – i.e., instruments and items used to measure outcomes in informal science education programs that can be shared from project to project and institution to institution – or as Kirsten Ellenbogen more eloquently stated: “Shareable measures is the concept of using the same evaluation or research measurements to make more effective connections across the many and diverse activities of informal STEM education (ISE).”

The week-long discussion yielded a great deal of information on the perceived need and desire for resources that can be shared and used more widely, as well as the unique challenges and potential obstacles. It also explored distinctions between evaluation and research, and between assessments and measures (or instruments, or methods).

In terms of the need – and the underlying reasons for promoting shareable measures – Joe Heimlich suggested that “sharing measures opens up tremendous opportunity for critically considering big questions around institutional value in the community, contributions across informal science institutions in an individual’s learningscape, impacts of these institutions and programs on communities over time, and other vitally important questions.” On a more practical note, Kevin Crowley pointed to the utilitarian value of being able to share measures (and its definite applicability to the field of evaluation): “We don’t need to start from scratch each time if we have been tinkering with and perfecting measures over multiple studies. Sharing these is efficient and strengthens each individual evaluation.” And Gil Noam advocated for a widespread “culture of sharing” which would more effectively allow everyone to learn from one another.

However, despite some consensus on the potential value of shareable measures, there were also concerns and caveats voiced by participants. Barbara Flagg addressed the inherent challenges in creating measures that can be used more broadly: “I totally recognize how difficult it is to develop short, unbiased, sensitive, reliable and valid measures, much less ones that can be used across informal settings.” Joe Heimlich pointed out a key challenge in incorporating shared measures, which tend to be somewhat general/generic (so as to facilitate their shareability) – but asserted that “attitudes are most meaningful (and have the greatest tie to outcomes and predictive behaviors) at the very specific level.” And Rena Dorph addressed an obstacle that might keep people from wanting to share measures in the first place: “Many are fearful that shared measures will be used inappropriately AND for ‘evil’ rather than ‘good’ – that ‘evil’ comparisons may be used to show inadequacies.”

Shared Resources

Ultimately, participants shared a variety of resources that are ready – or near ready – to be shared, and this post is designed to be a round-up of those items.

PEAR’s ATIS site (shared by Sue Allen) – A clearinghouse for many measures, including the Common Instrument on Engagement in STEM, as well as the DoS Observation Tool for STEM in OST (shared by Gil Noam)

Environmental Measures (Shared by Barbara Flagg)

Compendium of STEM student instruments (Shared by Barbara Flagg)

AAAS test bank for content assessment (Shared by Barbara Flagg)

Instruments for assessing interest in STEM content:

  1. STEM Semantics Survey (attitudes)
  2. Career Interest Questionnaire (attitudes toward STEM careers)

These items above were shared by Karen Peterman and are available in the following publication (see citations for specific instrument authors): Tyler-Wood, T., Knezek, G., & Christensen, R. (2010). Instruments for assessing interest in STEM content and careers. Journal of Technology and Teacher Education, 18(2), 345-368.

STEM Interest and Engagement – The Synergies Project (Shared by John Falk – pending publication in the International Journal of Science Education)

DEVISE (Developing, Validating, and Implementing Situated Evaluations) – A project that ultimately developed shared scales rather than measures (shared by Tina Phillips), along with professional development opportunities related to evaluation in general

Measures for Science Learning Activation from the Science Learning Activation Lab (Posted by Rena Dorph) – The Science Learning Activation Lab is a multi-institutional partnership including the Lawrence Hall of Science at UC Berkeley, the Learning Research and Development Center at the University of Pittsburgh, and SRI. For background, this measure includes four dimensions:

  1. Fascination with natural and physical phenomena (emotional and cognitive attachment/obsession with science topics and tasks);
  2. Values science (understands various interactions of self with science knowledge and skills and places value on those interactions within their social context);
  3. Competency beliefs about self in science (perceives one’s self as capable of successfully engaging in science activities and practices);
  4. Scientific sensemaking (engages with science-related content as a sensemaking activity using methods generally aligned with the practices of science).

And three potential types of success:

  1. Choosing to participate in science learning opportunities.
  2. Experiencing positive engagement (affective, behavioral, and cognitive) during science learning experiences.
  3. Meeting science learning goals during these experiences.

Timing and Tracking – Methodological protocols and resources for comparing findings across studies (shared by Beverly Serrell): Paying Attention: Visitors and Museum Exhibitions, American Association of Museums, 1998, by Beverly Serrell, and Paying More Attention to Paying Attention

Family Inquiry Behaviors (Coding Scheme) – From the Exploratorium’s GIVE Project, aka Juicy Question (info posted by Sue Allen)

COVES (Funded by IMLS to create shareable measures for Science Centers)

EvalFest (Item-by-item opt-in instrument for science festivals – created on QuickTap, which will store all data in one database – posted by Karen Peterman)

My take-aways thus far

There are plans to continue the conversation this week, but since the chatter seems to have died down a bit, I wanted to insert my big take-aways thus far:

It comes as no surprise to anyone who’s done research or evaluation in informal learning settings that context matters a considerable amount. Shareable measures have the added challenge of controlling for contextual impacts across multiple contexts.

There are many shareable resources that have been created for a variety of different uses within ISE – but there doesn’t seem to be one central location where they are accessible.

To effectively share measures, trust is necessary: trust on the part of the person sharing that the resource won’t be misused, and trust on the part of the person implementing it that it is methodologically sound.

There are benefits and challenges to sharing measures – but ultimately those benefits and challenges may be different on the giving and receiving ends of the equation. In other words, the needs and goals of the person sharing a measure may differ from those of the person seeking to use/implement that measure. To make shareable measures more widespread, perhaps we need to work toward establishing ways to ensure that the objectives of both parties are consistently being met.
