Update: Updating Evaluations Based on “Performance-Focused Smile Sheets”

My last post was about updating evaluations based on Will Thalheimer’s book, Performance-Focused Smile Sheets. In a word, we failed. But as learning professionals, failure = learning. We learned that we had a really hard time reporting on the results, we weren’t giving our stakeholders the information they wanted, and learners were still getting a bit of “evaluation fatigue” that resulted in a couple of instances of wonky results. That said, the exercise of remodeling the questions was extraordinarily valuable in shaping our new evaluation.

Don’t be Misled

Before I do a deep dive into the challenges we faced, I need to start by saying that you should absolutely read Dr. Thalheimer’s book, take it to heart, and try out the new evaluation questions. You will gain a new perspective on evaluations, and the resulting questions, even if they are not the ones in his book, will be more thoughtful. I found it was necessary to try out his questions and test them within our culture. It still led to a more thoughtful and purposeful evaluation in the end.

Challenge 1: Reporting the Results

My organization’s culture is big on data and reports, so the ability to report results efficiently and effectively was very important. We had difficulty doing so with the questions that did not resemble a Likert scale. For example, we had the following for instructors:

Which of the following were true about your course instructor? Select all that apply.

  • Was often unclear or disorganized.
  • Was often unprofessional or inappropriate.
  • Exhibited unacceptable lack of knowledge.
  • Generally performed competently as a trainer.
  • Showed subject-matter knowledge.
  • Motivated me to engage deeply in the learning.
  • Is a person I came to trust.

As a “select all that apply” question, we had difficulty reporting this out efficiently for two reasons:

  1. The survey tool’s reporting capabilities are relatively limited for that type of question. It required a more manual lift, and those are resources we do not have.
  2. The culture likes responses to add up to 100%; otherwise they feel misleading. For example, we can say that “80% of participants said the facilitator motivated them to engage deeply in the learning;” however, we cannot explicitly say “but 20% said the facilitator did not motivate them.” Yes, that is the logical jump, but we like explicit statements here, as sketched below.
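One way around both problems is to treat each statement in a “select all that apply” question as its own yes/no item, so every percentage has an explicit complement. Here is a minimal sketch in Python; the response data is invented for illustration, and this is one way we might automate the manual lift, not what our survey tool actually does:

```python
# A minimal sketch of tabulating "select all that apply" responses so each
# statement reports an explicit selected/not-selected split that sums to 100%.
# The response data below is hypothetical.

responses = [
    {"Generally performed competently as a trainer",
     "Motivated me to engage deeply in the learning"},
    {"Generally performed competently as a trainer"},
    {"Was often unclear or disorganized"},
]  # one set of selected statements per participant

statements = [
    "Was often unclear or disorganized",
    "Generally performed competently as a trainer",
    "Motivated me to engage deeply in the learning",
]

total = len(responses)
for statement in statements:
    selected = sum(statement in r for r in responses)
    pct = 100 * selected / total
    # Reporting both sides makes the implicit complement explicit:
    print(f"{statement}: {pct:.0f}% selected, {100 - pct:.0f}% did not select")
```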

Challenge 2: We Weren’t Giving our Stakeholders the Information They Wanted

We learned our stakeholders wanted two pieces of information:

  1. Can participants do their job?
  2. How well did the facilitator do?

We only had one or two questions that addressed the first item, though there are more in the book that ultimately inspired us. For the second, we had a question that was difficult to report on. The rest of the questions (e.g., engagement and motivation) are helpful to us as trainers, but not to stakeholders.

So we decided to remove any questions that did not map to one of the two stakeholder desires. If a learner responds negatively to either type of question (they are unable to do their job, and/or the facilitator did not do well), that will be a trigger to start more research into why. Ultimately, that research would uncover whether motivation and engagement are part of the problem and get us to the same solution.
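As a rough illustration of that trigger idea, the check could be as simple as a threshold on negative responses. The 20% cutoff and the function below are my own hypothetical choices, not anything from the book or our actual tooling:

```python
# A hypothetical sketch of the "trigger" idea: if enough learners respond
# negatively to either stakeholder question, flag the course for follow-up
# research. The threshold is an assumption for illustration only.

NEGATIVE_THRESHOLD = 0.2  # flag if more than 20% respond negatively

def needs_follow_up(negative_count: int, total: int) -> bool:
    """Return True when the share of negative responses exceeds the threshold."""
    return total > 0 and negative_count / total > NEGATIVE_THRESHOLD

# e.g., 3 of 10 learners said they cannot yet do their job:
if needs_follow_up(3, 10):
    print("Trigger: investigate why learners feel unable to do the job.")
```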

Challenge 3: Evaluation Fatigue

Sadly, though we reduced the survey to seven questions, we still discovered some evaluation fatigue. The answer choices required a good amount of reading, and a few misreads resulted in some learners giving contradictory answers to the “select all that apply” questions. We trimmed the wording in the new questions while keeping the number of questions about the same.

Iterative Solution 1: Give the Evaluation Questions at the Beginning of the Training

We intend to add the evaluation questions at the beginning of the participant guide so learners know what they will be asked at the end of the training and can take notes along the way. We hope this will reduce evaluation fatigue. Will report back!

Iterative Solution 2: New Questions

We have two “buckets” of questions:

  1. Training effectiveness (really – “can they do their job”)
  2. Facilitator effectiveness

Training Effectiveness

Please indicate your level of ability to execute the following job tasks covered in this course. Choose one of the following options for each task:

  • I am able to complete this task independently and competently with confidence.
  • I am able to complete this task most of the time, but will require some hands-on experience to build confidence.
  • I will need some assistance completing this task.
  • I am not able to complete this task at all at this time. Additional guidance is needed.

Each task/learning objective is laid out with the responses above. They are still being edited, but that’s the gist of them.
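For reporting, each task can then be summarized as a distribution across the four levels, which keeps every percentage explicit and summing to 100%. A hedged sketch, assuming shortened labels and made-up responses:

```python
from collections import Counter

# A sketch of rolling up the four-level ability scale per job task.
# The level labels are paraphrased, and task names and responses are
# invented for illustration.

LEVELS = [
    "Independent and confident",
    "Mostly able; needs hands-on experience",
    "Needs some assistance",
    "Not yet able; additional guidance needed",
]

responses = {
    "Process a refund": ["Independent and confident",
                         "Needs some assistance",
                         "Independent and confident"],
    "Escalate a ticket": ["Mostly able; needs hands-on experience",
                          "Independent and confident",
                          "Not yet able; additional guidance needed"],
}

for task, answers in responses.items():
    counts = Counter(answers)
    total = len(answers)
    print(task)
    for level in LEVELS:
        print(f"  {level}: {100 * counts[level] / total:.0f}%")
```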

Facilitator Effectiveness

The following facilitator questions will be in Likert form with the following possible responses:

  • Strongly agree
  • Mostly agree
  • Mostly disagree
  • Strongly disagree

The questions:

  1. The facilitator demonstrated knowledge of the course content and was able to answer participant questions.
  2. The facilitator was organized and prepared for class.
  3. The facilitator led successful group discussions and engaged participants throughout the course.

I’m not crazy about the Likert scale, but maybe I will tackle that another day.
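One upside of the Likert form is that each learner picks exactly one response, so the results roll up into an explicit agree/disagree split that sums to 100%, which fits our reporting culture. A small sketch with invented responses:

```python
# A small sketch of rolling Likert responses into an explicit agree/disagree
# split that sums to 100%. The responses below are invented for illustration.

AGREE = {"Strongly agree", "Mostly agree"}

responses = {
    "The facilitator was organized and prepared for class": [
        "Strongly agree", "Mostly agree", "Mostly disagree", "Strongly agree",
    ],
}

for question, answers in responses.items():
    agreed = sum(a in AGREE for a in answers)
    pct = 100 * agreed / len(answers)
    print(f"{question}: {pct:.0f}% agreed, {100 - pct:.0f}% disagreed")
```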

Next Step: Taking the Evaluations to the Stakeholders

Let’s see if they like them! Once approved, the evaluations will begin to circulate in December, and I will be able to report back on how they go.

Personal Learning

I learned a few things in this process at a personal level:

  • Collaboration is necessary. We had an hour-long meeting to address the stakeholder issues, and that collaboration shaped the new questions.
  • Finding a happy medium is sometimes the best solution all around. It may not be the most thorough or data-driven solution, but it will at least provide triggers to do more research as needed.
  • It is important to meet your stakeholders’ needs and your own needs in the evaluations without overtaxing the learner. The stakeholder needs were prioritized because they were still good indicators of the quality of the training, and the new evaluation questions can trigger the need to do our own research and observations.

Conclusion

The book is invaluable and you should absolutely get it. Changing your questions to his recommendations would work in many cases. It also sparked the change in our otherwise generic evaluations and encouraged us to be more thoughtful about what we provide. We have now hit a bunch of extremes in evaluations: in the past couple of years, we have gone from the basic smile sheet, to a multi-page questionnaire, to the questions in the previous post, and now to a new set of questions. The important thing is that we are learning and trying, again and again.
