Testing Strategically, Part Two


In the previous post about testing, we discussed the importance of testing the output rather than the process wherever possible, and of understanding the real performance you want to test rather than simply counting something that is easy to count or track. This post provides a model to guide thinking about where and how to test a complex performance, which is what most on-the-job tasks are.

Given that you want to test strategically, how can you think about capability structurally, so you can discuss testing with others and come to a reasonable agreement? I think of it as deciding where to put the thermometer. Whether you are cooking a turkey or smoking a brisket, temperature is important, but where you actually insert the thermometer makes a big difference in the reading you get. If you are testing a turkey and accidentally push the thermometer too far, into the cavity, you will get a reading that does not match the temperature of the meat. Think of performance testing the same way: if you could see the parts of the performance, you could determine where the thermometer needs to go to measure an appropriate range of things and which measurements would provide the best readings.

Below is a generic model of performance. Notice that a series of steps leads to the production of an output; these two components are the primary elements of the performance. Supporting the performance are the knowledge, skills, and attributes the performer needs in order to execute it. At the most fundamental level are prerequisite capabilities. These are supporting capabilities as well, but they are basic enough that you can choose to ignore them for the purposes of any training or testing. However, if the prerequisites are critical to performance, you may decide to assess people before they are selected to learn or execute the performance.

Hierarchy Diagram


Let’s look at an example.

Imagine that the performer is a call center agent. They receive a call and follow the steps in the call flow (for example: greet the customer, confirm the need, secure the account, determine a solution). Let’s say the output is an order for whatever the call center is selling.
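To make the layers of the model concrete, here is a minimal sketch of the call-center example as a data structure. The class and field names are my own illustration, not part of the model itself:

```python
from dataclasses import dataclass

@dataclass
class PerformanceModel:
    """Generic model of a performance: a series of steps produces an
    output, supported by knowledge/skills/attributes, which in turn
    rest on prerequisite capabilities."""
    steps: list[str]                    # the process leading to the output
    output: str                         # what the performance produces
    supporting_capabilities: list[str]  # knowledge, skills, attributes
    prerequisites: list[str]            # assumed before training or selection

# The call-center example, populated with the elements from the post.
call_handling = PerformanceModel(
    steps=["greet the customer", "confirm the need",
           "secure the account", "determine a solution"],
    output="a completed order",
    supporting_capabilities=["use the ordering system",
                             "explain services and benefits"],
    prerequisites=["converse while typing (keyboarding)"],
)
```

Laying the example out this way makes the later testing decisions explicit: each list is a different place you could put the thermometer.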

For the order, the output, you could measure one or many of a number of parameters, depending on what is most relevant to the business.

  • Dollar value of the order
  • Volume of orders
  • Potential for additional sales (e.g., a customer ordering an entirely new service vs. an existing customer adding only a minor feature)
  • Whether it was a strategic sale, that is, something important to the future of the business (e.g., a customer buying internet service instead of just telephone service, or buying something with a higher profit margin)
  • Customer satisfaction, that is, you might value a sale that stays sold over one in which the customer calls back the following day to cancel because he or she was “pushed” into buying something
  • Technical accuracy, that is, all the necessary information was entered into the system correctly and the order is compatible with what is allowed or available in the customer’s market
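As a sketch of how output-only measurement works, measures like these can be computed entirely from order records, with no observation of the call. The record fields below are hypothetical, not a real system’s schema:

```python
def score_order(order: dict) -> dict:
    """Compute output-only measures from a single order record.
    The fields ('value', 'is_new_service', 'cancelled_next_day',
    'entry_errors') are illustrative assumptions."""
    return {
        "dollar_value": order["value"],
        "strategic": order["is_new_service"],            # e.g., a new service line
        "stayed_sold": not order["cancelled_next_day"],  # proxy for satisfaction
        "technically_accurate": order["entry_errors"] == 0,
    }

# Two sample order records (made-up data for illustration).
orders = [
    {"value": 120.0, "is_new_service": True,  "cancelled_next_day": False, "entry_errors": 0},
    {"value": 15.0,  "is_new_service": False, "cancelled_next_day": True,  "entry_errors": 2},
]

volume = len(orders)                          # volume of orders
total_value = sum(o["value"] for o in orders) # dollar value across orders
```

The point is not the code but the boundary it draws: everything above can be measured from the output alone.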

Those measures can all be taken without observing the agent’s performance because they focus on the output. If we started from the output, our next decision would be whether the results of the output measures sufficiently tell us what we need to know. (Keep in mind that testing is done for a number of purposes, including performance management, compliance with rules or laws, or capability verification.) If you were testing only to determine who should receive how much sales incentive, you might not need all of these measures.

But what if you were testing performance to verify capability to perform? In that case, you may want to add testing on the process in order to evaluate the agent’s ability to interact effectively with the customer. For example, you may want to confirm that they obtained the right information from the customer before discussing the account, or that they probed for additional needs and promoted the company effectively. You cannot assess that from the output alone. On the other hand, if the agent performed all the steps but did not get the orders, their performance would still be unsatisfactory. So it appears you would need to test both output and process.

During training, additional testing may be needed as individual skills are learned: for example, using the ordering system to access customer accounts, or configuring customer orders to work with the customer’s equipment or existing service packages. Even more fundamentally, you may want to verify that the agent can properly explain the various services and their potential benefits and answer customer questions about them.

Finally, there are the prerequisites. Continuing with our example, call center agents need the ability to talk while typing, that is, to converse with the customer while entering information on the computer. Someone without reasonably good keyboarding skills would not be able to keep up with the pace of a customer call. But it would be expensive to hire employees and then train them to type until they are fast enough to do the job. A better strategy is to hire people who already have the keyboarding skills and, to avoid wasting time and money, to include an assessment of keyboarding skills in the selection process prior to investing in training.



This structure provides guidance on how and where to perform testing. To test “above the line” (i.e., observing steps or evaluating an output), you have the option of on-the-job testing. By defining the criteria and creating a simple evaluation tool, you can leverage supervisors and top performers to do the testing while also getting business done, and the same instrument can be used for both testing and coaching. Of course, in a training setting, you can also test “above the line” performance using simulations and exercises.
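A “simple evaluation tool” for above-the-line observation might be nothing more than a checklist a supervisor scores while listening to a call. This is a minimal sketch; the criteria list and the 80% pass threshold are my illustrative assumptions, not a recommended standard:

```python
# An observation checklist a supervisor or top performer might score live.
CRITERIA = [
    "greeted the customer",
    "confirmed the need",
    "secured the account",
    "probed for additional needs",
    "promoted the company effectively",
]

def evaluate(observed: dict[str, bool], threshold: float = 0.8) -> bool:
    """Return True if the agent demonstrated enough of the criteria.
    'observed' maps each criterion to whether it was seen on the call."""
    hits = sum(1 for c in CRITERIA if observed.get(c, False))
    return hits / len(CRITERIA) >= threshold

# One observed call: the agent missed a single step (4 of 5 = 80%).
observation = {c: True for c in CRITERIA}
observation["probed for additional needs"] = False
result = evaluate(observation)
```

Because the same checklist names the behaviors you want to see, it doubles as a coaching guide: a failed criterion tells the supervisor exactly what to work on.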

This same structure fits all kinds of jobs: technicians, salespeople, managers, and others. All jobs produce some type of output by performing some series of steps. (Actually, most jobs are responsible for several outputs grouped in various areas of performance.) Those steps require supporting knowledge and skills. We’ve observed that “higher level” jobs tend to require more supporting capabilities, and their process (and sometimes even their output) is not defined as clearly or consistently as in a “lower level” job. Lower-level jobs tend to be procedure-oriented, so the performer has a longer series of defined steps to perform but fewer supporting capabilities, thanks to the availability of additional tools and reference materials. Either way, the structure fits and can be used to determine a testing strategy (as well as a development or training approach) for any role or process.

One important part of managing capability is measuring the (performance and supporting) capabilities needed for key roles/jobs and then building a “supply chain” to deliver those capabilities to the workplace through a combination of selection, training, and testing/verification. Using the model shown above can help improve consistency in approach and results across an organization.

For more information on testing or capabilities, explore PRH Consulting resources or see ISPI’s Handbook of Improving Performance in the Workplace: Volumes 1–3 (ISBN 978-0-470-19067-8), particularly Volume 3, Chapter 12, “Testing Strategies: Verifying Capability to Perform.”
