Musings on Digital Badging Programs (Part 2)

The following is a summary of my findings from three years of evaluating the Smithsonian Quests program, along with general musings from the past few days as I’ve read and reflected more extensively on the assessment of digital badging programs.

What I’ve learned about digital learning programs

CHARACTERISTICS OF SUCCESSFUL PROGRAMS

Provide specific and supportive feedback – While it’s not always possible for feedback to be immediate, it is important that it come fairly quickly so that learners don’t lose interest or momentum in their learning experience. Furthermore, in Smithsonian Quests, we’ve found that getting real feedback from real educators is inspiring for students.

Facilitate personal connections with the learning experience – In Smithsonian Quests, participants are encouraged to connect to examples from their own interests or experiences, their own family or culture, and/or their own community. This helps to make learning personal, and that, in turn, helps create greater connection to, and engagement with, the learning experience.

Offer flexible learning experiences – Successful programs are flexible:

…In terms of skill & ability levels – i.e., meeting learners where they are, in terms of their academic abilities, motivation levels, and access to various technologies

…In terms of when & where the learning takes place – Flexible learning requires a flexible platform that allows participants to work at their own pace, in their own space and time, and to save and return to their work later.

…In terms of tasks and choices – i.e., those that offer many roads to success, not just one set path. Programs that offer choices between and within quests/activities are likely to appeal to, and be completable by, the widest variety of learners.

 

WHAT MOTIVATES LEARNERS

  • Being connected to students and educators from all over the world (especially those associated with the Smithsonian or other well-known and highly respected institutions)!
  • Getting personalized feedback from real people – i.e., people paying attention to what they say and do – and appreciating their work and insights
  • Being able to showcase their work to their peers – when something is going to be seen by the public, students tend to bring their “A-Game.”
  • Engaging in friendly competition – program elements that encourage competition among participants can be motivational as well.

LEARNERS AREN’T ALL THE SAME 

Learners aren’t all the same, so their learning experiences shouldn’t be either. There is no “one-size-fits-all” digital learning program. Rather, there are several learner variables that successful digital learning programs can consider:

1. Levels of intrinsic motivation and curiosity

2. Personal interests

3. Skills and ability

4. Access and prior exposure to content/resources

5. Local supports (parents, teachers, others)

6. Access to tech and ability to use it

1 & 2 are more learner-centric or learner-specific variables. 3-6 are linked to learners, but they are also linked to the learning context – this can vary drastically for learners in underserved environments.  We can address 3-6 in formal educational settings – but as educators, we need to think more about how we can influence 1 & 2 in meaningful ways?

CHALLENGES FOR ASSESSMENT

Great learning experiences don’t happen in a vacuum. Learning context matters, and as evaluators, we have to consider things like fidelity of implementation and the degree of difference or consistency across learning experiences. We saw some evidence of this in last year’s evaluation of the badging program – there was a group of students who submitted incomplete work, and it was clear to the advisors that they were receiving less in-class/offline guidance, support, and feedback than other participants were. This is another good reminder that great online facilitation doesn’t eliminate the need for good on-site/in-class facilitation as well.

Identifying and reaching participants. Even though there is a wealth of data that we can access online, there seem to be more participants than we know about and therefore have the ability to reach and/or assess. In some cases, teachers are taking activities from Smithsonian Quests and doing them offline with their students – we don’t get to see the products of those efforts, or have any way of knowing how many people are participating in that way. In short, there are definite limitations to what we can see and therefore assess.

More on the facilitator experience

Effective training and time to explore help to prepare facilitators. The people facilitating and providing feedback to participants must understand the system/process of participating before they can effectively engage with participants and offer meaningful support – in short, to be a good evaluator/educator, it helps to have had experience participating as a learner.

Enhance assessment practices and resources. Advisors noted the value of having benchmarks and a library of exemplary submissions to serve as examples. Additionally, they find value in a wide range of examples that span grade and ability levels to help contextualize what constitutes exemplary, average, and/or sub-par work.

Make sure there are supports in place for facilitators. It helps to have someone (or a team of someones) who can pick up the slack when things get busy. A system or schedule for assigning review tasks helps to balance the workload equitably. Having real people provide feedback (in real time) takes real time – and that ultimately impacts scalability.

Find ways to encourage or implement quality-control checks at the classroom level – prior to quest submission. Advisors in the Smithsonian Quests program suggested that more preliminary oversight of submissions (e.g., from classroom teachers) be encouraged to help ensure quality and cut down on incomplete submissions or unaltered re-submissions. More oversight at the classroom level can reduce the workload and the need for online facilitator intervention and support – and will likely have a positive effect on the overall learner experience as well.

 

See More!

Interested in this topic? You can view an hour-long conversation hosted by the Smithsonian with Kate Haley Goldman (from the National Center for Interactive Learning), Robert Stein (from the Dallas Museum of Art), and me here:

Digital Directions in Learning: Assessing Progress–Badges as a New Tool

You can also learn about and watch other installments in the “Digital Directions in Learning” conference series here: http://smithsonianeducationconferences.org/digitaldirections/
