Learning program managers often rely exclusively on learner surveys to gather data about the program’s success. Is this a good strategy?

The answer is clearly NO. While learners are important stakeholders, they are only one of several: program managers, the design and implementation team, instructors and tutors, funders, and interested observers (e.g., other departments within an organization that may be offering similar programs). Each group has its own needs, perspectives, and information to share, and a complete, balanced evaluation of a program requires participation from all of them.

As for the method of gathering learner feedback, "smile sheets"* – forms with check-off boxes and spaces for short answers – aren't enough. In many cases, learners who were otherwise intensely engaged in a program race through end-of-course questionnaires, checking Likert-scale boxes with no more than a second or two of thought and jotting single-word answers to open-ended questions (even when given ample time to write more), leaving the evaluator with next to nothing to work with. If feedback sheets must be used, they should be supplemented by other forms of feedback, such as group discussion, selective one-on-one interviews, or small focus groups.

Even when learner feedback is solicited using a wide range of techniques, relying ONLY on learners to evaluate a program or course has three main disadvantages:

- excessive focus on a single set of needs and perspectives (the learner's);
- no commentary on policy, resource, process, and other issues that are "invisible" to learners;
- reduced likelihood of buy-in from decision-makers and other stakeholders for conclusions and recommendations based on a single source of feedback.

*So called because learners tend to give high ratings and write comments like "Good", "Great", or "Loved it" – verbal "smiles".