Forget About the Learning Curve, Learn About the Forgetting Curve

The Learning Curve

It is (or should be) well understood in the training business that trying a new technique results in an initial drop in performance. Coaches often need to encourage learners to keep using the new technique until they get up the learning curve. In other words, keep chugging and eventually your performance will exceed your previous levels. The learning curve can be unpleasant, but it is a necessary evil for anyone trying something new.

The implication, though, is that once you make the trip up the learning curve, you stay at the top — that once you know something, you know it permanently. But practically speaking, we know better…we know that it is really more “use it or lose it.” Researchers who have studied learning retention confirm that the shelf life of new learning isn’t very long.

As far back as 1885, a German psychologist named Hermann Ebbinghaus conducted research on memory. His focus was strictly recall — he had subjects learn lists of nonsense syllables, worked with them until they knew the list, and then retested them periodically. He also measured how long it took people to relearn what they had forgotten. Ebbinghaus’ research identified things we take for granted today, such as the “primacy and recency effects” (you tend to remember the first and last items in a list and forget the ones in the middle). His research led him to believe that there is no such thing as permanent memory — everything can be forgotten if not used. Below is a diagram showing an approximate “forgetting curve.” It isn’t pretty if you are a professional trainer or a business manager.
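Ebbinghaus’ data is often summarized with a simple exponential-decay approximation, R = e^(−t/S), where S is a “stability” constant reflecting memory strength. The sketch below is illustrative only — the 24-hour stability value is an assumption for demonstration, not one of Ebbinghaus’ original figures:

```python
import math

def retention(t_hours, stability_hours):
    """Fraction of new learning retained after t_hours, using the
    exponential-decay form often fitted to forgetting-curve data.
    stability_hours is an assumed memory-strength constant."""
    return math.exp(-t_hours / stability_hours)

# With an assumed stability of 24 hours, most of the loss happens early:
for t in (1, 24, 72, 168):
    print(f"after {t:3d} hours: {retention(t, 24):.0%} retained")
```

Whatever the exact constants, the shape is the point: the curve drops steeply right after learning and flattens later, which is why the first day or two after training matters so much.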

The forgetting curve helps explain:

  • Why people pass tests in a training class but are not able to execute the performance on the job. (This assumes the training wasn’t off-the-mark and that local practice doesn’t differ from what was taught.) 
  • Why spending lots of effort refining lectures and presentations is a game of diminishing returns. As a rule, people aren’t going to remember something you said for very long.

If you think about this from the standpoint of a learner, this should be obvious. Think about the last meeting or presentation you attended. Can you remember even one slide? Yet, you know that presenter thought carefully about each bullet and graphic on every slide. But, did they spend enough time thinking about how to get you to internalize their message?

For results, we need to focus on the desired performance, how to develop it effectively, and how to measure whether it has actually been achieved.

Importance of Repetition

Most researchers focus on strategies for improving recall. Though the specifics vary, in general they all involve repetition (or review). There are different formulae for how frequently and for how long the repetition needs to happen, but they all agree on the need for it.
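Those formulae generally share one shape: review intervals that start short and grow. Here is a minimal sketch of an expanding review schedule, assuming a simple doubling interval — an illustrative choice, not any specific published algorithm:

```python
from datetime import date, timedelta

def review_schedule(training_day, first_interval_days=1, reviews=5):
    """Expanding ("spaced") review schedule: after each review, the gap
    to the next one doubles. The doubling rule is an illustrative
    assumption, not a research-backed constant."""
    next_day, interval = training_day, first_interval_days
    schedule = []
    for _ in range(reviews):
        next_day = next_day + timedelta(days=interval)
        schedule.append(next_day)
        interval *= 2
    return schedule

# Reviews land 1, 3, 7, 15, and 31 days after the training day.
for d in review_schedule(date(2020, 1, 2)):
    print(d.isoformat())
```

The front-loaded reviews fight the steep early part of the forgetting curve; the later, sparser ones maintain what has stabilized.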

Unfortunately, repetition is a hard sell in the business training world. For one thing, it requires time. For another, it seems like something people should be held responsible for “doing on their own” rather than using training time, which could instead be used for learning new information.

Maybe the biggest problem with relying on repetition is that it doesn’t feel like a strategy to use with adults. It feels demeaning—sort of a “brute force” approach to learning. Learning “by rote” has a negative connotation.

In fact, repetition may be the right strategy in some cases, such as salespeople learning product features. But before you take the plunge, it is important to be certain that straight recall of facts or information is really the desired performance. Often information that is conveyed through lecture or presentation could really have been distilled into a tool (e.g., reference document) which would negate the need for the high effort, high cost work of memorization.

Importance of Reinforcement on The Job

The forgetting curve should also make us think twice about investing in training without at least considering the post-training environment. Beyond recall is transfer — using the new learning in the job setting.

There are a number of factors that impact how well transfer happens but immediate use and reinforcement are key. In the learning model shown below, you should be spending the majority of your energy on the boxes outlined in blue.

This is why it is important that trainers do not create content in a vacuum — we need master performers to ensure that we teach what the learners will be doing on the job. Still, there are times when not all field settings are using the “best practice.” Embedding best practices in process information, tools (e.g., software), metrics, etc. helps build a stronger web of reinforcement, which increases the likelihood that performers will do the work the way you trained them to.

The Best Solutions are Always Systems

Ultimately, all performance solutions reside in an environment of training, information, process, coaching, tools, incentives, and culture. It is always more effective to address more than a single element. The iPod is a great innovation…but without digital music (and the iTunes online store) it wouldn’t have been successful. In the same way, great training won’t make any difference without the rest of the performance environment. To go even further, training addresses individual performers but the performance environment affects all performers. Interventions that improve the performance environment are likely to deliver a broader and more lasting impact than even excellent training can.

Pivoting to Remote Training

Since about March, maybe a little later, we’ve been busy helping clients convert their in-person training to either remote instructor-led training or self-contained web-based training. It’s a challenge: it seems simple on the surface, and parts of it are simple…but based on our observations, it is easy to get wrong.

First of all, if you are thinking about this now, you are probably late and probably underestimating the level of effort. Nobody allows enough time, even when it is part of a larger strategy. If you are reacting to the COVID-19 situation, the easy part is deciding to “go virtual.” After that decision, there are a ton of things to figure out — instructional design issues, infrastructure issues, and capability issues, to name three. There is a lot of planning, preparation, and practice to do before you start delivering.

There are basically three scenarios to consider when switching from traditional instructor-led, in-person delivery to virtual training: 

  • Instructor-led remote delivery 
  • Self-contained web-based training 
  • A shift away from training altogether to performance support 

There is a fourth option — a “blended” solution. Though blended is often a better solution, blended solutions are really just a combination of the challenges of every component.

Let’s start with the first scenario — shifting from traditional instructor-led to remote instructor-led training, using either a dedicated tool like GoToTraining or a web-meeting utility such as Zoom or Microsoft Teams. We’ll address the others in later posts. 

The first consideration is, or should be, instructional design. This will drive the requirements for infrastructure, materials, application/transfer, and assessment. 

Time. One of the biggest changes in going virtual is the time available for instruction. Going from classroom to remote/virtual, that time is very likely to be reduced. Things take more time via remote meeting software than in an in-person setting. Interaction is more structured — if you ask the group a question, you typically have to warn them. You have to watch a separate part of the screen to see if anyone is responding. You might have to remind them to turn their mics on (and wait for them to do it). These delays compound, because every interaction takes a little bit longer.

Learner Attention. Also, keep in mind that learners in a remote learning setting often aren’t really off the job. Or at the least, they are not 100% focused on the training. They may be sitting in a home office trying to fit the learning in along with job and personal tasks (read: emails, baby-sitting, contractor/household management, errands, etc.).

To maintain attention in a virtual setting, you need to rely on activities and exercises to engage the learners. And those need to be strategically designed to create the intended learning: a solid purpose, clear instructions, debriefs to ensure/clarify the learning, and some kind of verification to make sure everyone actually did them.

Content Delivery. Remote, online delivery lends itself to a “flipped classroom” approach, where content acquisition activities (instead of lectures, think readings, videos, interviewing experienced personnel, independent research) take place outside class time. Of course, you have to spend the time to  find and verify that those resources exist and fit your intent…unless you have the resources to create them. 

“Cut to the Chase.” Often instructor-led training is based on a single instructor’s view of what learners need to know. In a classroom situation, there is a great deal of flexibility in how the time is used (and often very little oversight). When shifting to remote learning delivery, things need to be more structured and prepared in advance. Quite often, learners will drive accountability — they will not hesitate to suggest more efficient uses of time or more effective ways to reach the course goals. In person, the learner has to sit there anyway, but in a remote delivery setting, the learner could easily switch to doing something else…and often that something is nagging at their attention, so learners have increased motivation to “cut to the chase.” 

Infrastructure. The next consideration is infrastructure. For many corporate training programs, infrastructure can be taken for granted. But not everybody has a fast network at home. Or a printer that can crank out a two-sided course manual without using up a small hillock of toner.  

And, when you are doing the training, you need to figure out how to manage learner activities. When people need to do a breakout activity, where will they “go”? Maybe they can log into a separate team meeting, but then, how will you communicate with them to keep them on track? How will they hand in any assignments? All things that need to be figured out.  

Supporting Materials and Equipment. Even something as simple as a manual can present some challenges. Is it fair to expect people to print out a manual? Is it risky to distribute easily duplicated PDFs for learners to use? Will the advantages of using an electronic document (search, portability, highlighting and comments) be lost on less technically adept participants? 

For some technical training, the investment in lab equipment and simulators may present another significant obstacle. One client created simulator kits housed in a suitcase-style case sent to remote offices. (This ensures standardization but also makes it a little more difficult for the audience’s workplace to cannibalize them for parts.) Individuals can check them out to complete training. In another case, a client company had learners log into a remote set of equipment simulators (housed in an unstaffed training facility) from their remote locations to complete exercises. Or it may be possible, in some situations, to create software simulators. 

There are also practical parameters that you really can’t overlook or minimize. You will need to decide on some organizational standards. For example, what is the maximum duration of a session that will be tolerated/accepted? Generally, chunks longer than two hours are difficult to pull off. But if you can do a couple of sessions per day with homework in between, before, and/or after, you can get a fair amount of learning time. Keep time zone differences in mind, though.

Capability. Finally, let’s look at instructor capability. There are knowledge/skills that need to be gained by instructors (and others) involved in remote training delivery. Some issues include: 

  • Planning is a must — it won’t work to wing it 
  • The instructor needs to be able to use the remote training tool — it may even be necessary to add a new role, the producer, who can focus on the mechanics while the instructor focuses on content and learning. The producer can watch for participants raising their hands, make sure the mics are muted (or not), confirm that the display/sharing is correct, monitor chat messages, and so on. 
  • Preparation — you have to think about how you are going to explain concepts, ask and address questions, debrief exercises when using the medium. You may not be able to draw. It may be a challenge to ask for and “flipchart” responses from the group. You may have to target questions to specific participants to ensure a response. 
  • Providing individual feedback — if the goal is to get each participant to a level of competence, the instructor will need to observe and provide feedback at an individual level to some degree. 
  • Relying on or supplementing external content delivery, e.g., reading assignments or “YouTube-style” videos viewed outside the class. 
  • Changing your perspective to learner-centered (vs instructor-centered) instruction. (Well…this is a good idea for in-person training as well.) 
  • Developing materials for remote delivery — for instructors who build their own materials, a single computer screen is likely the entire real estate available. 

The key takeaways are obvious. Applying many of the above ideas will improve in-person training once things go back to normal. (Of course, things probably won’t go all the way back — remote delivery is likely to continue as audiences learn to rely on it.)

Even if the takeaways are obvious, it still requires leadership to set the direction and provide the resources and support needed to be successful. Your team can almost certainly make this change, but they need to believe it is important (not just for the short term) and have the backing to get ready before being expected to risk their reputations trying something new. Make a plan, test in small increments, and be ready to learn quickly as you go. 

Testing Strategically, Part Two


In the previous post about testing we discussed the importance of testing the output rather than the process wherever possible. And the importance of understanding the real performance you want to test, rather than simply counting something that is easy to count or track. This post will provide a model that can be used to guide thinking about where and how to test a complex performance, which is what most on-the-job tasks are.

Given that you want to test strategically, how can we think about capability structurally so we can discuss testing with others and come to a reasonable agreement? I think of it as a question of where to put the thermometer. Whether you are cooking a turkey or smoking a brisket, temperature is important. But where you actually insert the thermometer will make a big difference in the reading that you get. If you are testing a turkey and accidentally push the thermometer too far, into the cavity, you will get a reading that does not match the temperature of the meat. Think of performance testing that way. If you could see the parts of the performance, you could determine where the thermometer needs to go to measure an appropriate range of things and which measurements would provide the best readings.

Below is a generic model of performance. Notice that there are a series of steps leading to the production of an output. These two components are the primary elements of the Performance. Supporting the performance are the knowledge, skills, and attributes the performer needs in order to execute the performance. At the farthest/lowest/most fundamental level are prerequisite capabilities. These are actually supporting capabilities as well but they are basic enough that you can decide to ignore them for the purposes of any training or testing. However, if the prerequisites are critical to performance, you may decide to assess people before they are selected to learn or execute the performance.

Hierarchy Diagram


Let’s look at an example.

Imagine that the performer is a call center agent. The agent receives a call and follows the steps in the call flow (for example, greet the customer, confirm the need, secure the account, determine a solution, etc.). Let’s say the output is an order for whatever the call center is selling.

For the order, the output, you could measure one or many of a number of parameters, depending on what is most relevant to the business.

  • Dollar value of the order
  • Volume of orders
  • Potential for additional sales (e.g., a customer ordering an entirely new service vs. an existing customer adding only a minor feature)
  • Perhaps whether it was a strategic sale, that is, something that is important to the future of the business (e.g., a customer buying internet service instead of just telephone service, or buying something with a higher profit margin)
  • Customer satisfaction, that is, you might value a sale that stays sold rather than one in which the customer calls back the following day to cancel because he or she was “pushed” into buying something
  • Technical accuracy, that is, all the necessary information was entered into the system correctly or that the order is compatible with what is allowed or available in the customer’s market

Those measures can all be taken without observing the agent’s performance because they focus on the output. If we started from the output, our next decision would be whether the results of the output measures sufficiently tell us what we need to know. (Keep in mind that testing is done for a number of purposes, including performance management, compliance with rules/laws, and verifying capability.) If you were only testing to evaluate who should get paid how much sales incentive, you may not need all these measures.

But what if you were testing performance to verify capability to perform? In that case, you may want to consider additional testing on the process in order to evaluate the agent’s ability to interact effectively with the customer. For example, you may want to confirm they obtained the right information from the customer before discussing the account. Or you may need to confirm that they probed for additional needs or promoted the company effectively. You wouldn’t be able to assess that from the output. On the other hand, if the agent did all the steps and still didn’t get the orders, their performance would be unsatisfactory. So, it appears you would need to test both output and process.

During training, additional testing may be needed as individual skills are learned — for example, using the ordering system to access customer accounts, or configuring customer orders to work with the customer’s equipment or existing service packages. Even more fundamentally, you may want to verify that the agent can properly explain the various services and potential benefits, and answer customer questions about them.

Finally, there are the prerequisites. Continuing with our example, call center agents need the ability to talk while typing — that is, to converse with the customer while entering information on the computer. Someone without pretty good keyboarding skills would not be able to keep up with the pace of a customer call. But it would be expensive to hire employees and then train them to type until they are fast enough to do the job. A better strategy is to hire people who already have the keyboarding skills and, to make sure time and money aren’t wasted, to include an assessment of keyboarding skills in the selection process prior to investing in training.



This structure provides guidance on decisions about how and where to perform testing. To test “above the line” (i.e., observing steps or evaluating an output) you have the option of performing on-the-job testing. In this situation, by defining the criteria and creating a simple evaluation tool, you can leverage supervisors and top performers to perform testing while also getting business done. And, the testing instrument can be used for both testing and coaching. Of course, in a training setting, you can also test “above the line” performance using simulations and exercises.
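A simple evaluation tool of the kind described above can be little more than a weighted checklist. Here is a sketch — the criteria and weights are illustrative assumptions, not taken from any real instrument — showing how the same artifact can serve both testing and coaching:

```python
# Illustrative "above the line" checklist an observer could score on
# the job. Criteria names and weights are assumptions for this sketch.
CRITERIA = {
    "greeted_customer": 1,
    "secured_account": 2,            # weight riskier steps more heavily
    "probed_for_additional_needs": 2,
    "order_entered_correctly": 3,
}

def score(observed):
    """observed maps criterion name -> True/False from the evaluator."""
    earned = sum(w for name, w in CRITERIA.items() if observed.get(name))
    return earned / sum(CRITERIA.values())

result = score({
    "greeted_customer": True,
    "secured_account": True,
    "probed_for_additional_needs": False,
    "order_entered_correctly": True,
})
print(f"{result:.0%}")  # 6 of 8 weighted points earned
```

For coaching, the unmet criteria (here, probing for additional needs) point directly at what to work on next; for testing, the same scores roll up into a pass/fail or proficiency level.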

This same type of structure fits all kinds of jobs, from technicians to salespeople to managers and others. All jobs produce some type of output by performing some series of steps. (Actually, most jobs are responsible for several outputs grouped in various areas of performance.) Those steps require supporting knowledge and skills. We’ve observed that “higher level” jobs tend to require more supporting capabilities, and the process (and sometimes even the output) is not defined as clearly or consistently as in a “lower level” job. Lower level jobs tend to be procedure-oriented, so the performer has a longer series of defined steps to perform but fewer supporting capabilities, due to the availability of additional tools and reference materials. But the structure fits and can be used to determine a testing strategy (as well as a development or training approach) for any role or process.

One important part of managing capability is measuring the (performance and supporting) capabilities needed for key roles/jobs and then building a “supply chain” to deliver those capabilities to the workplace through a combination of selection, training, and testing/verification. Using the model shown above can help improve consistency in approach and results across an organization.

For more information on testing or capabilities, explore PRH Consulting resources or check out ISPI’s Handbook of Improving Performance in the Workplace, Volumes 1–3 (ISBN 978-0-470-19067-8) — in particular Volume 3, Chapter 12, “Testing Strategies: Verifying Capability to Perform.” 

Test Intentionally and Strategically, Part One

I have a friend who has done some temping in the past. Apparently, temp and staffing companies like to use commercially available computer-based tests to assess the capability of new applicants.

If I understand it correctly, the tests are set up to give you a task to complete. For example,

  1. Create a 4-column by 4-row table.
  2. Put “xyz” in the top row.
  3. Now make it bold.
  4. <Insert next micro-step here and continue>

Basically, the test leads you through steps, instead of telling you the end goal and assuming you can figure out the steps. The software keeps track of where you click and, if you click the wrong place, it is counted as wrong. If you click the right place, you get the points. Sounds pretty slick, doesn’t it? You can screen out people who don’t know what they are doing, the results are indisputable, and it doesn’t require any management time to review and score the tests. What could go wrong?

Actually, quite a bit. It’s always worth some skepticism when something seems that easy. But there are a couple of specific and serious logical flaws in the approach described above. And lots of companies are using similarly flawed testing strategies.

Specific to this screening strategy, the biggest issue is understanding how people use software. But this can be generalized to other performances as well. The issue is output vs. process. Or, results vs. task. Here is what I mean.

Think about how you might go about building a table in a document using Microsoft Word. First you do some mental planning to figure out what the table needs to look like, for example, whether there should be borders, the number of columns and rows, headers, etc. When you start to build the table, you might or might not look under the right menu heading on your first attempt but, if you know what you are looking for, you will find the right option fairly quickly. In fact, there are a couple of ways to create a table (for example, you can start from the icon or the menu) and, as long as you end up with a table at the end of it all, your approach should be acceptable. The thing to measure in this case is not the process but the output. The criteria for what constitutes a “good” table can be specified. For example, correct margins, width of borders, number of columns, correct settings for the title row, etc.  There might even be some things that can only be evaluated by looking at the file (vs. a printed document), such as making sure the user didn’t use tabs and hard returns instead of setting up columns. These tests never get that far though.
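In code, output-based scoring means inspecting properties of the finished table, not the clicks that produced it. A sketch, with the finished table represented as a plain dictionary and the expected values chosen purely for illustration:

```python
# Expected properties of a "good" table -- illustrative criteria only.
EXPECTED = {"columns": 4, "rows": 4, "header_row_bold": True}

def evaluate_output(table):
    """table describes the finished artifact, however it was built."""
    return {
        "columns": table.get("columns") == EXPECTED["columns"],
        "rows": table.get("rows") == EXPECTED["rows"],
        "header_row_bold": table.get("header_row_bold") is True,
        # Catch a "table" faked with tabs and hard returns:
        "real_table": not table.get("used_tabs_for_layout", False),
    }

checks = evaluate_output({"columns": 4, "rows": 4, "header_row_bold": True})
print(all(checks.values()))  # passes no matter which menu path was used
```

Nothing in the evaluation depends on which menu the test-taker opened first, or how many times they backtracked — exactly the information the click-tracking test obsesses over.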

Basically, the test confirms that you can create a table if someone tells you every step along the way. In any case, the order and location of your clicks doesn’t really determine the effectiveness of your performance. Looking under the wrong menu heading, realizing your mistake, and then going to another doesn’t mean you can’t do the task. In a way, it means you can do the task…because you know what you are looking for, just not specifically where to find it. Sure, at some point it matters if you take too long, but it needn’t be a primary concern even if the person had to consult help…chances are that if it is a task performed frequently on the job, they would learn it and get fast enough, soon enough. If it isn’t a frequently performed task, consulting help is perfectly acceptable.

But, if you are tracking where and in what order someone clicks, you are evaluating what they do, the process, instead of the output or result. In a case where there is no one right process, the test is invalid. You are checking whether someone remembers the steps…not whether they can produce the result. Instead, the tester needs to find a way to evaluate the output. Think of it this way, if you send your teenager to run an errand, is it better that they get to the store and come back with the right groceries or that they used a specific route? (Okay, maybe you want them to stay off the highway or not swing by their friend’s house but still…)

Ultimately, when you are designing any test, it is critical to start by defining the performance you want to evaluate and then determining a strategy to evaluate it. Avoid being led astray by solutions that are simple to implement — it is always easy to measure unimportant data. (This seems to happen a lot in the world of computer/web-based training because computers record every transaction so it is easy to count them.) Decide whether process or output is important.

Sometimes both the output and the process need to be tested but in general, if you can sufficiently assess capability by evaluating the output, it is both more efficient and valid. Usually, the people being tested prefer this approach as well, because it allows them to be assessed based on their ability to get something done. It measures something closer to their eventual job performance.

Certainly in some cases, it is important to standardize and evaluate the process. Maybe some key performances aren’t visible in the result. The example we use a lot when talking about performance testing is cooking a turkey. Sure, the result has to look and taste good (that is, the output), but it is probably a good idea to monitor the process as well, to ensure safe food-handling techniques were used and that there was no cross-contamination. Even this could be evaluated by the result (i.e., verify that no one became ill or test samples of the food for bacteria), but the risk involved makes it appropriate to expend the effort to evaluate the process in addition to the output. Because the consequences are significant, it becomes worth the extra effort and cost of testing the process. But if you start with output testing and strategically backfill with process testing only where needed, you can at least reduce non-value-added testing time and costs. And avoid making decisions based on faulty data.

Five Timeless Tips for Effective Training

We have designed and developed a lot of training since 2002 (when the company was started) or since 1984 (when Pete started in the business). Much of it was instructor-led, sometimes for professional instructors and other times for delivery by subject matter experts, leaders, coaches, and supervisors. Lately, much of it has been eLearning using rapid authoring tools such as Storyline, Captivate, QuizMaker, and occasionally Articulate Presenter. Though the delivery methods are different, there are some things that we have found to be necessary for learning to happen, regardless of the delivery method.

  1. Assume learners only remember what they do. Certain phrases come up when discussing training because they are convenient, but when they occur too frequently, they can be warning signs. “We covered that.” “We need to talk about ‘such and such’.” And even, defensively in response to a critique that something was left out, “it’s in there…” But just because something was said, written, or shown doesn’t mean anyone learned it. At the very minimum, learning requires a learner to try something and then get some feedback (and, ideally, try it again).
  2. People only learn when they want to (or choose to). Somehow the instructional process needs to gain the learners’ attention and interest. This does not necessarily mean all learning needs to be a game. But the reason people like “just in time” learning is that they are about to do something they don’t know how to do. They want to learn it. And “just in time” also means the application (or “try it”) is imminent (see #1).
  3. Don’t skimp on context. Part of getting #2 to happen is setting the stage. For example, what we are going to learn, why it is important, and even what could happen if you don’t pay attention. And you can’t assume all learners understand all the prerequisites. Maybe you can review or summarize key terms or definitions, or maybe just provide a way for those who need them to get them. Or, suggest that before trying the new thing, they should have experience with some preceding things. If you are learning JavaScript, a tutorial might let you know that it assumes you already understand HTML5 and CSS. Another key part of context is expectations — how proficient the learner should expect to become through this instruction.
  4. Don’t skimp on generalization. After learning a specific task, it can be helpful to summarize. Part of that summary should include other places where that learning can be used. Cooking shows often do this by describing variations on the recipe just shown. More neurons, more learning.
  5. Be clear, be brief. After all, in most corporate settings, learners aren’t learning to enrich their life experience. They are learning because they have to do something. Teach the basics first and let them practice. The nuances and details won’t make sense right away, so defer them. And keep in mind that you (the instructor/expert) are not the learner. Build bridges from concepts they know to whatever concepts you are teaching. Use analogies and metaphors. This is where it is key to know your audience because what is too simple/basic for some people may still be too advanced for others. Text is OK but sometimes graphics are better. But sometimes graphics are merely decorative. Focus on the instructional intent and figure out the shortest path to get there.


So, here’s the challenge: keep the focus on making training effective regardless of the trendy delivery method du jour. And we are sure there are more tips out there — please “share ’em if you got ’em.” #timelessTips4Learning


Millions of Tiny Transactions

Work Breakdown

A key principle of lean manufacturing is creating smaller batches — ideally, batches of one unit. Smaller batches reduce inventory and enable a greater degree of customization.

The same thing has been happening to a lot of information work. Think about it. You get an email for which you need to provide a quick response. You have somewhere between five and a dozen (or more) active projects going simultaneously, for which you are working on one or more deliverables. Emails and to-do’s are piled together in your inbox, each a separate transaction, often small but still requiring time, focus, and a response.

Many companies use workflow management tools to track cases or open issues and to route information through the right people to make decisions. In fact, Microsoft Outlook tasks can be used for a lightweight version of this approach. They are all tiny transactions, often piled together, that will hopefully be recombined into a useful larger deliverable when finished.

What is the result? The intended result is getting more work through the pipeline by breaking it up. (That’s the lean piece of the puzzle.) Quite often, though, the result ends up being a lack of context for the performer. Each task is just a thing to do, disconnected from the larger picture of the project. Building that larger picture requires that you stop and think about the overall intent of the project, where this task fits, what is important about it, what you need to actually do, etc. That “spin-up” thinking is the inefficiency that comes with multi-tasking…it is that all-too-familiar question, “so, what are we trying to do here?”

Several years ago (more than twenty), I was talking to an engineer at a telecommunications company who compared a previous job as a circuit design engineer to working in a closet: someone feeds a spec in on one side, he designs the circuit, and then he hands the design information off to someone outside the closet. The engineer had no idea what the circuit was for, what the product was, or what was important from a design standpoint…he was just expected to do the task as specified. It contained the work but left no room for interpretation or innovation. (Which was the intention, but maybe not the overall best thing.)

What I’m wondering is, are we all moving in this direction in the name of productivity but actually making things worse? We often have so many things going on that we keep breaking things down to smaller chunks so we can move a task forward (and out of our inbox). At some point people get so overloaded that all they want to do is get rid of the task. How can you tell when this is going on? Some indicators might include

  • You send an email to a co-worker containing two questions and the recipient answers part of the first one. They apparently didn’t realize there was more (since they didn’t mention it or follow up later with the rest of the answer).
  • You join a web (or in-person) meeting and nobody has prepared. They feel like the best they can do is get there on time (or close to it). You try to begin the meeting and people want to re-discuss issues that had already been resolved in a prior meeting because they didn’t remember.
  • You forget to do something because it wasn’t on your list and you don’t even try to remember things anymore…
  • …Or, you spend lots of time scrolling through your list (or email in-box), looking at the mass of items but not zeroing in on any specific item to complete because each one looks too hard, or will take too long, or will just open a can of worms or…<insert your favorite demotivator here>.
  • A project you are working on gets delayed and you don’t really ask why…you are just relieved.
  • Your primary criterion for the right way to perform a task is whether it is the fastest, or maybe most expedient, one. You resist meeting to review things because it might mean that you end up with changes…that you aren’t all the way finished with something you were thinking (hoping) you had already finished.

Building Instructional Objects

With lean manufacturing, the small batch method works because the process is constant. With many jobs and projects though, the process varies. The context varies. The output varies.

When we are in a process or training design/development project, we use an object-based design model and it works great for us. It shows us all the pieces we need to build. It enables us to find things that can be created quickly and early. It helps us identify the “chunks” that will require extra work and time (so we can resource them accordingly). Sometimes there are availability challenges with SMEs or content source material or just decisions that mean we can start part of the work but have to wait for some other parts. In those cases, we would push the objects that can be built now to early in the process and defer the others.

But, there are often challenges for other people on the project who aren’t as familiar with the object approach. Reviewers aren’t always comfortable just reviewing a slice of a module…they have difficulty just reviewing the object for accuracy because they want to understand the big picture (that is, what comes before the object, what comes after). They may want to talk about objects that are on someone else’s list because they feel like it is important prerequisite information to their piece. (Which it is…just not their problem.)

And remember the “spin-up” problem? Other project participants have to “spin up” to address a single object because, to them, it risks being one of those context-less to-do’s in their large pile of to-do’s spanning multiple projects. When they grab it to complete it, they have to go through the “what are we doing again?” process before they can finish it.

We have been trying to address this issue in a few ways but certainly don’t have all the answers. One way is to remember that people always need context to perform a task well — so, when we ask for info, we include a brief summary of what it is needed for, what has preceded it, what will follow it. We do this in meetings too, for the same reasons. People need to get oriented before they can focus.

We have also been trying to keep emails to a single question or issue. That way, things are less likely to get lost in the paragraph that follows because the person didn’t scroll down (or was doing a quick “read and reply” from a mobile device). Also, if you are one of those people who use their email inbox as a to-do list (which we definitely do NOT recommend), this can help you remove items when they are done, and you don’t have the problem of what to do with one that is partially completed.

In the “old days” (the late ’80s and ’90s) we would hold longer meetings to focus on and make process and design decisions. These workshops may be harder to schedule at first but, if you can pull it off, a surprising amount can get done in a concentrated one- or two-day workshop. You do have to be careful that people really commit. Otherwise, people are ducking out for conference calls or other meetings and you can seriously lose momentum. Still, this approach has now become unusual enough that some clients find it appealing.

In fact, there are some cases where we’ve used a “war room” approach, which is an expanded workshop that puts everyone in the room for maybe three to five days to actually hammer through, not only decisions, but building content and exercises. It can work if you prepare and get the needed people and commitment. The continuity benefits are huge and it compresses the calendar significantly. But, there is always the risk that somebody will want to “pop in” via web-meeting at some specific time and expect everyone else to stop what they are doing, bring them up to speed, and then focus on their issues. Usually meetings like this rely on paper, so it can be really difficult to make them even comprehensible via a web interface. Even if all the remote person wants to do is listen in, chances are they will not be able to hear or follow the discussions and conclusions. We recommend that it is a better use of their time to review the output but, when we’ve had to accommodate these challenges, we posted pictures or PDFs of the meeting flipcharts. This can help but, if you can avoid the whole problem, it is better to avoid it.


Beyond Tiny Transactions

There are several new tools now that try to use collaboration to get everyone’s input on a shared deliverable. This may be a reaction to the challenges of “tiny transactions” but it might also be just an extension of web technology. (Or, it could be both.) Rather than making one person coordinate all the disparate inputs from team members, you post some starter version of the deliverable (document, program, video, or whatever) on a shared workspace like Google Docs, Microsoft Office 365, Hightail, or the most recent offering from Dropbox (Paper). Then everyone pops in and adds their comments, contributions, questions, and ideas. And, the primary author can comment on their additions or even request input from specific people if needed.

These sound good in theory, but I would like to see some real examples. When we’ve tried it, we find that most of our team members prefer to just use email and attachments. Maybe it is because it is a new way of working and people aren’t used to operating this way. But that is probably not the real flaw in the approach. Any approach that depends on people being proactive about getting their input into a deliverable is likely to end up going without the input of several people (or, best case, getting that input late and only after a fair amount of prodding). Comments can also often be useless. We’ve all seen reviewers who write comments like “unclear” or “needs to ‘pop’ more” or “more on this.” You end up doing your best to figure it out but, when it comes to review inputs, it is infinitely better to get specific changes than general reactions. In this scenario, there will probably need to be a phone call or meeting anyway to clarify and negotiate specifics.


Does Productivity Always Mean More/Faster?

Productivity generally assumes you want to increase the output relative to the cost/effort. Usually, it entails doing the task in question more quickly (which is often referred to as “throughput”). But maybe that should change. Maybe the real key is not to focus on productivity, or else to change our view of productivity to incorporate other measures. Just as manufacturing shifted its focus from volume to quality measures (because what good is it to produce 1,000 units of something if a lot of it ends up as scrap?), we could look at information work to determine which deliverables really generate value for the end user. It might even mean that going slower will produce better results. We already have plenty of outputs, but there is rarely an oversupply of good ones.

And, even if we want to do more faster, the benefit of breaking things into increasingly smaller increments must eventually reach a point of diminishing returns…some would say we, as a culture, have already passed it. Have we reached the point where we should try to just do less but better?

Who Will Be the SME?

One consistent challenge in many of our projects is finding a subject matter expert, or “SME.” This is especially true when we are involved in emerging areas of work (such as new products or change initiatives) because, in these situations, there really is no one who has “done the job in the field”…there is no SME.

This can be a challenge but it is not always bad. In fact in many cases, even in established areas of performance, we find it can be better to have multiple SMEs providing input to the process. That way, everything gets vetted more effectively before getting included — you avoid the situation of “trust me, I know what I’m talking about.” As an aside, it is also not entirely bad if the individual content resources do not believe they are “experts” for the same reason.

So our search often shifts from finding individuals to be the authority(ies) to, instead, finding individuals to be the responsible content resources for specific content areas. They take responsibility for getting the information or examples, or for checking diagrams or content, etc. But they may choose to do some or all of it personally, or they may find a more appropriate or knowledgeable person to do some or all of the task. In the design process, we create and specify an instructional process for building the desired capabilities. That drives the need for content. Then, we identify sources for finding the individual content components, which may be general topics but are often very specific items (e.g., “diagram of the XYZ product”).

At first glance this may mean a little more work to put everything together into a cohesive program when compared to just conducting SME interviews and writing down everything they say. But this process reorients the project from “we need to include…” to “where can we find ‘X’ because they need ‘X’ to do ‘Y’…” In other words the focus is on the end user’s performance instead of what the SME might enjoy talking about. The real benefit is that it forces the team to address gaps or difficult areas (instead of trying to avoid them). And it orients the program around “need to do” rather than “need to know.”

By the way, nothing against SMEs is meant here…they have a tough role because they have to explain everything to a layman (and sometimes more than once). It is often a responsibility added above and beyond their normal duties. Most of them prefer the specific content list, much of which can often be handled through emails instead of extended working meetings.

The bottom line is that effective performance interventions are almost always a collaborative undertaking. Starting with the performance and capability requirements identifies the needs objectively. Then, identifying SMEs for specific slices of content gives them a clearer picture of the commitment level. Of course, you still have to fit the project into their schedule, and of course there are still things that emerge later that will need to be incorporated. But, having a solid design makes that process much more visible and manageable.

Tips for How Not to Make a Deadline (Rant)

Just to be clear, these are examples of things nobody should really do. Don’t try this at home…or at work for that matter. Do the opposite of these things to be successful in every way (and avoid tormenting others). In other words…


…continuously require updates on status and progress. This is like pulling carrots up out of your garden every 30 minutes and wondering why they aren’t growing.


…get into a continuous revision loop. This is where you review something, send it back for corrections, then review it again and send it back for additional corrections, and then review it again and send it back…you get the idea. This means nobody is ever sure when it is done. And you can always find something to correct. (Don’t believe me? Go on IMDb…multi-million dollar movies, which use teams of people watching for inconsistencies over a year or more of development, still miss things. Underfunded but accelerated training development projects are certainly going to have some errors, no matter how many times you check.)


…mistake errors for problems. Agreed, nobody likes typos. Some grammatical rules are at least somewhat arbitrary and, in those cases, it is preferable to be consistent across the program. However, even though those things should be corrected, they shouldn’t be the primary focus. People will learn if you have the right content and practice, even if you botch a hyphen or two. They won’t learn (or will learn it wrong) if you have bad or missing content or don’t include any practice using the new learning. Basically, focus is a finite resource so prioritize how you use it.


…Focus on technical accuracy at the expense of instructional effectiveness. Sure, fix the technical errors and make sure the information is complete. But, if you want people to learn, you have to show them, let them try it, and then give them feedback. You might need to work through a simple example, followed by a more complex example. You might need to work through how the new learning applies to their specific situations. Not saying anything wrong is not going to make a difference if you don’t include application practice. Because, they probably won’t remember anything you said. But they have a much better chance of remembering what they did.


…become the critical path and blame everyone else. In project management, the critical path is the series of tasks within which any delay causes the deadline to slip. This chain of tasks has no cushion…it is the “longest pole in the tent.” If you have never heard of this, it is pretty easy to conceptualize — think about when you are in a hurry to get out the door to get to work on time. If you set up your coffee maker the night before so you can just push the button, then take a shower while the coffee is brewing, you have basically removed the coffee brewing process from the critical path. The coffee is no longer the thing that will make you late. (In my case, it is letting the dog out and waiting for him to finish his business…but that is beside the point.)
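For the programmers out there, the morning-routine idea can be made concrete with a tiny critical-path calculation. This is a minimal sketch; the task names, durations, and dependency graph are invented for illustration.

```python
# Minimal critical-path sketch. Tasks, durations (minutes), and the
# dependency graph are all invented for illustration.
durations = {"brew_coffee": 10, "shower": 12, "dress": 5, "drive": 25}
depends_on = {
    "brew_coffee": [],
    "shower": [],
    "dress": ["shower"],
    "drive": ["dress", "brew_coffee"],
}

def critical_path(durations, depends_on):
    memo = {}

    def finish(task):
        # Earliest finish = own duration + latest finish among prerequisites.
        if task not in memo:
            start = max((finish(d) for d in depends_on[task]), default=0)
            memo[task] = start + durations[task]
        return memo[task]

    end = max(durations, key=finish)  # the last task to finish
    # Walk backwards along the binding (latest-finishing) predecessors.
    path = [end]
    while depends_on[path[-1]]:
        path.append(max(depends_on[path[-1]], key=finish))
    return list(reversed(path)), finish(end)
```

With these numbers, shower → dress → drive is the critical path at 42 minutes; brewing the coffee overlaps the shower, so a faster coffee maker would not get you out the door any earlier.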

Anyone involved in development of any kind knows that, sooner or later, we become the critical path because we have the last task to complete. However, before that, if we have published draft versions but the client/SMEs haven’t reviewed them, the reviews are the real critical path. This is not good for anyone because the reviewers often don’t realize it. They assume the developer has magical powers that enable them to finish things instantly, no matter how close to the deadline the changes arrive. We don’t.

Also, kind of a more subtle point, but large amounts of last-minute change churn make it less likely that you will find real gaps or issues because there is no time to step back and look at the deliverable as a whole. (This is why we prefer to work top-down rather than focusing on details upfront: start with a design, then frame the main exercises and content, then address all the details, with proofing for typos, formatting, etc. in later drafts.)


…Change requirements after the project starts. Duh. Or at least don’t expect it not to impact the schedule or cost.


…Bring key reviewers into the process late. We understand that the higher in the organization the reviewer is, the busier they are, and the more likely they will tell you to “put something together and let me look at it before you send it out.” This is a good way to waste time because they will give you any direction they think makes sense, even if it requires a complete re-do. Better to get them to provide that input upfront, so you don’t do everything twice.


…Start the project unless the necessary resources are committed and available. It is easy to make the assumption that people are conceptually on-board and they can always fit in a few additional meetings or document reviews. Maybe but not always. And not always the people you need. Someone popping into a 4-hr meeting for an hour may not be sufficient — you may need to add another meeting later. Or someone taking a week to fit in a one-hour document review may mean your next version is a week later. In either situation, your deadline is in jeopardy.

We suggest building a team plan for the project upfront where you fit the activities into the calendar end-to-end and then get team commitment to make their dates and hand-offs. This doesn’t guarantee you won’t have hiccups, just that the deadline isn’t only your problem.

Determining the Necessary Capabilities

There are really three keys to designing and developing performance and training solutions that improve capability:

  1. Understand the work
  2. Understand the knowledge, skills, information, and traits needed to perform the work
  3. Design effective strategies for enabling performers — in this case, effective means taking the shortest path to performance

Several months ago, we published an article about “Understanding the Work,” which addressed analyzing the requirements for the work that is to be performed. This is the critical first step for any human performance project — if you don’t know what people are supposed to be doing, how can you manage, improve, or teach it?

But the second step is important as well: determining the required knowledge, skills, information, and tools needed to perform the work. We refer to these as “supporting capabilities” because they support the performance. You only need them if the performance requires it. For example, cashiers used to have to be able to count out change based on the money the customer gave them. Now, they just read the total change needed from the cash register, assemble it, and hand it over. A small difference, but a change in the performance requirements. The tool, the cash register, changed the supporting capabilities required.

Determining supporting capabilities basically requires plowing through all the tasks and situations and figuring out what is needed to get them done. It can be tedious, but it is important because, in many cases, these requirements are not well understood (or not agreed upon). Often people assume certain skills are needed that are really only optional (or even unnecessary) while others are overlooked entirely. And individuals often have very different ways of labeling skills. But, if HR, management, training, and even engineering were to work from the same view of the performance and supporting capabilities, there would be opportunities to make the work efficient, reduce the learning curve cycle time, and even develop new tools (like references, databases, apps, forms, etc.) that reduce the level of capabilities required. Reducing knowledge/skill requirements reduces costs and errors but, more importantly, also makes it easier to recruit and allocate employees to tasks as the workforce or workload changes.

Without going into too much detail, below are some principles for figuring out the supporting capabilities needed to perform a specific task or process. We have used all of the following four methods (in different situations and in differing degrees) and found that each has specific strengths and weaknesses.

  1. Start from the tasks and identify the knowledge, skills, info needed to perform each task. This can be done starting from a specific scenario and then generalized or it can be done thinking about all situations at once.
  2. Start from categories of supporting capabilities and identify which are applicable to which process or task. (You can use a simple, targeted, or detailed set of categories.)
  3. Brainstorm. This is not recommended because it results in a list of things that is simply not useful.
  4. Pick from an existing “shopping list.” Also not recommended because it is too easy to generate too many items to realistically address — people can make a case for almost any skill being needed for any task.

Before we go into additional detail on each of these methods, we will assume our intent is to a) identify the needed capabilities, b) decide where performers will get them (e.g., will we hire for them or teach them), and c) decide how we will convey and verify them — that is, the training and testing that are needed.

Starting from the Tasks

This is the most rigorous and effective method. You can see exactly what is needed and why. For example, instead of a general capability (like “classifying product defects”) you see that the performer is using a template and recording the results onto a standard quality control form.

In addition, you can make smart choices about how to train for this capability.

And you avoid just throwing things in because they are someone’s favorite topic or because it sounds like a good thing for people to know.


Starting from a List of Categories

Categories can help to structure the analysis and, depending on how the categories are organized, can even help with design decisions. Categories may align with a specific subject area (like computer systems, tools, interpersonal skills) which may have a collection of training offerings that can be used “as is” (or at least used as a source for content).

Categories may speed up the analysis process because they give the analysts a narrower range of options to consider at once. And sometimes, categories may be ruled “out of scope” which further accelerates the process.


Brainstorming and Picking from a List

These two approaches are not recommended because the results are not very useful. Open brainstorming results in a list of not only apples and oranges but sushi and roller rinks and everything in-between. If you aren’t careful, you can end up restating all the tasks as well — a task like “negotiate contract terms” can end up generating skills like “contract negotiation” which is just redundant and confusing.

Picking from a list (e.g., a “skills dictionary”) seems like a good idea but it can be limiting as the options are pre-defined.  Usually, they are too general. In fact, they often intentionally contain only the general items and ignore items specific to a single role, process, or task. But those unique, “performance-driven” items are often the most important in determining actual capability to perform the job task. (The general items are better for overall selection and development because they enable people to perform in different situations.)

Another problem is that it is just too easy to think of a scenario where everything applies eventually. And, these types of documents are usually complex and will bog down the process. In a larger-scale analysis, such as for an overall curriculum, you need to cover more ground more quickly. And for a smaller-focus effort, you need to be specific but still fairly succinct so that, in development, the details can be fleshed out when there is more time.


Example 1: Overall Role Summary

One method we have used is to condense the capabilities (both performance and supporting) onto a one-page (granted, it is a large page) form to depict what people need to do and what they need to know. The bottom portion of the chart shows the supporting capabilities. These are often grouped by where you could expect to find them: in a person brought in from within the company but outside this area of work, from outside the company but within this industry, etc. Someone totally new would only be expected to bring in general skills (e.g., MS Office proficiency). Individuals vary, but this would give you some direction on what to train for and what to recruit for.

Here is a simplified sample (used for a case study presentation in 2008).

Example 2: Close-up of a Process or Task-Based Analysis

Shown below is a closer view of how supporting capabilities can be aligned to the performance(s) they support. 

How This Information Helps

When designing work processes, task assignments, or training it can be helpful to know where specific supporting capabilities are needed.

  • If a given supporting capability is used across multiple tasks or even processes it may make sense to put the training for it earlier in the training path. If it seems appropriate, it may be contained in a separate course or courses. On the other hand, if it is only used for one or two specific tasks, it may make sense to embed that training with the training on that task.
  • If a given supporting capability is a key factor in making specific tasks difficult (hard to learn, causing errors during the learning curve, etc.) it may be a candidate for development of a tool to help standardize the performance — in other words, offload the hard stuff to the tool so it requires less skill to perform.
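As a hypothetical sketch of the placement logic in the first bullet, you could tally how many tasks use each supporting capability and suggest where its training belongs. All task and capability names, and the threshold, are invented for illustration.

```python
# Hypothetical sketch: suggest where to teach each supporting capability
# based on how many tasks use it. All names and the threshold are invented.
task_capabilities = {
    "classify_defects": ["use_template", "complete_qc_form"],
    "log_results": ["complete_qc_form", "use_tracking_system"],
    "escalate_issue": ["use_tracking_system", "write_summaries"],
}

def placement_suggestions(task_capabilities, shared_threshold=2):
    # Invert the mapping: capability -> list of tasks that use it.
    usage = {}
    for task, caps in task_capabilities.items():
        for cap in caps:
            usage.setdefault(cap, []).append(task)
    # Widely used capabilities go early (or in a separate course);
    # narrowly used ones get embedded with their task's training.
    return {
        cap: "separate/early course"
        if len(tasks) >= shared_threshold
        else "embed with " + tasks[0]
        for cap, tasks in usage.items()
    }
```

The point is not the code itself but the habit it encodes: placement decisions fall out of the capability-to-task mapping rather than anyone’s favorite topic list.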


In all these situations, there is still a need to think about the development path, target audience, availability of existing training or resources, learning sequence, and many other factors. We advocate a thorough analysis, but with the information documented in a format that makes it easy to assimilate. This allows the design team to come up with solutions that make sense at the big-picture level. Having the information available to make intelligent decisions goes a long way toward maximizing the effectiveness of any learning or performance support deliverables and, ultimately, the competitiveness of the business.




Public Domain

In the previous issue, we teed up the idea that not all graphics (or any other content) on the web are fair game for re-use. There are a number of myths about which graphics can be used and when, many of which fall under the heading of “public domain.”

First of all, we are not a law firm so nothing in this article should be considered legal advice. This article includes some of the guidelines we’ve collected from the web and conversations with people who should know. But if you are thinking about using a graphic for your own purposes, we suggest you check with a lawyer. In other words, if you get in any trouble following or ignoring the advice in this article, it is not our fault.

If you are still reading, let’s take a look at public domain. Public domain is content that is not protected by copyright. Either the copyright has expired or, for some reason, the work is not eligible for copyright.

Eligibility can be complicated though, because the rules have changed over time. You have to look at when the work was created, who created it, and when it was published (and if it was published). The more we looked into this, the more confused we became. Once we got sufficiently confused, we built a diagram to clarify it. Click on the image below for a larger version.

To read the diagram, just follow the flow from the top. For example, if something was published before 1923, you can freely copy it. If it was never published, you have to wait 120 years after the work was created. If it was published in 1923 or later, continue down the tree. The bottom line is that the intent is for works not to enter the public domain during the author’s lifetime.
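For what it is worth, the first branches of the flow can even be written as a decision function. This is a deliberately simplified sketch of just those first branches (US-centric, ignoring renewals, notices, and every other wrinkle), and it is certainly not legal advice.

```python
# Simplified sketch of only the FIRST branches of the public-domain
# decision tree described above. US-centric, ignores many cases
# (renewals, notices, corporate authorship, etc.); not legal advice.
def public_domain_status(published_year=None, created_year=None,
                         current_year=2016):
    if published_year is not None and published_year < 1923:
        return "public domain"  # published before 1923: freely copyable
    if published_year is None:
        # Unpublished works: protected until 120 years after creation.
        if created_year is not None and current_year - created_year >= 120:
            return "public domain"
        return "protected (or status unknown)"
    return "continue down the tree"  # later rules depend on year/formalities
```

Anything that falls through to “continue down the tree” is exactly where the chart gets complicated, which is why we drew it in the first place.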


Of course, an author can always waive copyright and place their work directly into the public domain. You will occasionally find works like this on Wikimedia.

Chart aside, public domain is really not that complicated. The real challenge is “fair use.” We may discuss that category of work in a future issue. As a preview, fair use includes only guidelines but no hard and fast rules. To actually get a solid decision on fair use, it has to come from a judge… and, let’s face it, nobody wants that 😉