
September 10, 2015

New book: Speed to Proficiency

I’m pleased to announce that my new book, Speed to Proficiency: Creating a Sustainable Competitive Advantage, is now available on Amazon in paperback and Kindle.

Learn how to change from providing “so we did it” training to creating learning initiatives that produce capability change. Everything is covered: Aligning initiatives with the business, understanding the roles and responsibilities of stakeholders, weaving together training with reinforcement and coaching, integrating informal learning and performance support, and selecting the right learning technologies.

 


[Figure: The Continuous Learning Model]

Proficiency isn’t attained in a class; it takes a systematic combination of training, reinforcement, and informal learning.

Being in the readiness business is really being in the business of building capability: helping learners move from novice to expert in as short a time as possible – what we call speed to proficiency.

To do so, we need to use a full range of learning interventions. These include training, informal learning, performance support, coaching, and mentoring.

As learning professionals, we need to have a thinking framework that helps us understand when and where to use each of these interventions, and how to best weave them together in a systematic way to produce speed to proficiency.

In this regard, I’d like to share the continuous learning model that we at Q2 Learning have used for over 10 years with Fortune 500 customers.

We see three phases of learning on the x-axis, in the order we most often use them.

Training includes event-based formal instruction, such as face-to-face and online classes, self-paced eLearning, and MOOCs. Training is great for building awareness and a certain level of skillfulness – the ability to apply defined processes and procedures in standard situations.

Reinforcement includes planned post-training activities such as graduated assignments, coaching, mentoring, and other forms of on-the-job training. Reinforcement builds on the gains made from training. In our work with customers, we find that the key to achieving proficiency in critical job skills is a reinforcement cycle. That’s something that happens on the job, not in the classroom.

Informal learning includes learner-initiated “over the cubicle” knowledge sharing, communities of practice, experiential learning, and gaining skills and knowledge from performance support systems and other reference materials. Our customers have leveraged informal learning to maintain and enhance skills over time.

Build skillfulness with training, build proficiency through reinforcement, and maintain and improve skills through informal learning – and notice that it’s the reinforcement and informal learning that drive proficiency.
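To make the model concrete, here is a minimal sketch of how a blended learning path spanning the three phases might be represented in code. This is purely illustrative – the skill, the activities, and the class names are my own assumptions, not anything from the book or from Q2 Learning’s products.

```python
from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    TRAINING = "training"            # event-based formal instruction
    REINFORCEMENT = "reinforcement"  # planned post-training, on-the-job activities
    INFORMAL = "informal"            # learner-initiated, ongoing learning

@dataclass
class Activity:
    name: str
    phase: Phase

@dataclass
class LearningPath:
    """A blended path that weaves the three phases together for one skill."""
    skill: str
    activities: list[Activity] = field(default_factory=list)

    def by_phase(self, phase: Phase) -> list[Activity]:
        return [a for a in self.activities if a.phase is phase]

path = LearningPath(
    skill="Project risk management",
    activities=[
        Activity("Two-day instructor-led class", Phase.TRAINING),
        Activity("Graduated assignment: draft a risk register", Phase.REINFORCEMENT),
        Activity("Biweekly coaching session with a senior PM", Phase.REINFORCEMENT),
        Activity("Community-of-practice Q&A forum", Phase.INFORMAL),
        Activity("Risk-checklist job aid in the knowledge base", Phase.INFORMAL),
    ],
)

# Proficiency is driven by what happens after the class:
print([a.name for a in path.by_phase(Phase.REINFORCEMENT)])
```

The point of the structure is that a learning path for a critical skill is a sequence spanning all three phases, not a single course.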

Excerpted from Speed to Proficiency: Creating a Sustainable Competitive Advantage. (c) Bill Bruck, Ph.D., 2015 (paperback and Kindle)


Learning Technology or Training Technology

Most learning management systems (LMS’s) are great at helping instructors replicate the worst practices of education electronically: lectures and multiple-choice tests. When LMS’s go beyond this, there are still factors that cause us to think within a very short and narrow box called “learning equals content-based courses.” In other words, LMS’s support training, narrowly defined – not learning, broadly defined. Why?

  1. Focus on content. While the Experience API holds the promise of getting us out of the SCORM trap someday (see the sketch after this list), the vast majority of courses contained within today’s LMS’s are SCORM 1.2 or SCORM 2004. These standards have learning professionals busy creating content objects, not learning objects.
  2. Focus on events. While the primary object created by authoring tools is a SCO, the primary object maintained in the LMS is a course. Courses are almost always time-bound training events related to the mastery of content. However, most on-the-job proficiency is not created in an event, whether it’s a 30-minute eLearning module or a one-week face-to-face sales training.
  3. The sage on the stage. When you think about it, the courses in the LMS are all about the sage on the stage. The sage creates the eLearning modules. The sage teaches the classroom-based classes listed in the LMS. In web meetings, there is a presenter (the sage) and an audience.
  4. Assessing the unimportant. The bad news is that LMS’s make it easy to assess that which is pretty much trivial and unimportant – i.e., Level 2 evaluations using “objective tests,” where there’s a right and wrong answer. Critical thinking? Complex decision making? Ability to write effectively? Ignored.
  5. MIA: Informal learning. If you’re lucky, your LMS will have a rudimentary comment system or bolted-on discussion forums. However, software to support informal learning that is integrated with other learning activities at the user-experience and administration level is simply missing in action. That’s 75% of the learning that our technology doesn’t really address very well.
  6. MIA: Coaching and mentoring. I suspect that most learning professionals would agree that we are not done when the class ends; we should also be in the business of supporting the reinforcement of training on the job. Most learning technology simply doesn’t do this. And that’s a shame.
  7. MIA: Knowledge management. Over the past several years, there have been many articles written on the convergence of learning and knowledge management (KM). I’m a great believer in this. It seems that if we are in the business of ensuring that people are ready to do their jobs, our learning systems should also be knowledge management systems.
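To make point 1 above concrete: where a SCORM package mostly reports completion and score for a content object, an xAPI statement is a simple actor–verb–object record that can describe any learning experience, including the informal learning and coaching that items 5 and 6 flag as missing. Here is a minimal sketch; the learner, email, and activity URI are placeholders I made up, and a real statement would be POSTed to a Learning Record Store:

```python
import json

# An xAPI ("Tin Can") statement is a simple actor/verb/object record.
# Unlike a SCORM completion, it can describe informal, on-the-job
# learning -- a coaching session, a community post -- not just a course.
statement = {
    "actor": {
        "name": "Pat Example",                     # hypothetical learner
        "mbox": "mailto:pat.example@example.com",  # placeholder address
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/attended",  # standard ADL verb
        "display": {"en-US": "attended"},
    },
    "object": {
        "id": "https://example.com/activities/coaching/risk-review",  # placeholder URI
        "definition": {
            "name": {"en-US": "Coaching session: project risk review"},
        },
    },
}

# A real implementation would send this JSON to an LRS over an
# authenticated HTTP POST; here we just print it.
print(json.dumps(statement, indent=2))
```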

I believe that LMS’s need to support the learning process, not simply eLearning and classes. Unfortunately, most LMS’s started life as content management systems. Content is in their bones and in their DNA. Later additions – such as those to support social learning – often feel bolted on and not an integral part of the system. As we start thinking about effective approaches to learning, we also need to start thinking about the functional requirements of learning technologies that can support them.

 

Excerpted from Speed to Proficiency: Creating a Sustainable Competitive Advantage. (c) Bill Bruck, Ph.D., 2015 (paperback and Kindle)


[Figure: The 21st Century Learning System]

As L&D moves from being a training department to a learning organization, our learning systems will need to support new types of learning. To do that, we need new types of learning systems.

In my view, a 21st century learning system needs the following characteristics, as shown in the picture above.

  • Social learning must be at its heart. The learning system should have the functionality required to effectively support informal learning; to integrate social activities into learning paths; and to integrate comments and peer questions and answers into knowledge bases.
  • The learner must be at the center. The user experience is key. I agree with an observation Elliott Masie and Cushing Anderson made a few years back when they advocated for changing the term we use from Learning Management System to Learning System. Personally, I think if we started thinking of it as a learning delivery system it might refocus many of our efforts.
  • It should support formal learning, including training, reinforcement, and process-based learning paths.
  • It should support informal learning, with communities, informal learning activity calendars, and social media.
  • It should support performance on the job with knowledge bases.
  • It should support a full range of learning activities, including courses, coaching, communities, knowledge sharing, performance support, and action learning.

Excerpted from Speed to Proficiency: Creating a Sustainable Competitive Advantage. In press.   (c) Bill Bruck, Ph.D., 2015


August 25, 2015

Who owns knowledge management?

IT owning knowledge management is like the stadium builder coaching the basketball team.

I was working with a disability and life insurance company a few years ago, and was talking to their director of knowledge management. She was livid. “You’re not going to believe what the IT director just told me,” she said.

“What’s that?” I asked.

“He came into my office with a big smile on his face. Then he said, ‘We did something for you last week. We heard that there were complaints that the knowledge base search function wasn’t really usable, so we fixed it! It used to be that if you searched for life plus VT plus individual you would get 700 hits. Now you get over 2,000!’”

“I can’t believe it,” she added. “The reason we got complaints was the wild number of false positives. Who can work with 700 hits? Now he tripled it? And, of course, being IT they didn’t ask us what the problem was – they simply assumed in their infinite wisdom that they knew better than anyone else and went ahead and fixed it.”
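One plausible reading of what IT did – an assumption on my part, since the story doesn’t say – is that they relaxed the search from requiring all query terms (AND) to matching any term (OR), which inflates hit counts and false positives. A toy sketch of the difference:

```python
# Toy illustration (not the insurer's actual system): requiring ALL query
# terms (AND) returns far fewer -- and more relevant -- hits than matching
# ANY single term (OR).
docs = [
    "life insurance individual VT rider",
    "group life insurance NH",
    "individual disability policy VT",
    "dental individual plan",
]
query = {"life", "vt", "individual"}

def hits(docs, query, match_all):
    results = []
    for doc in docs:
        words = set(doc.lower().split())
        matched = query <= words if match_all else bool(query & words)
        if matched:
            results.append(doc)
    return results

print(len(hits(docs, query, match_all=True)))   # AND semantics: 1 hit
print(len(hits(docs, query, match_all=False)))  # OR semantics: 4 hits
```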

Imagine that a builder gets the contract to build your city’s new stadium and does a fantastic job. And your city council immediately turns to him and says, “You know, you built us one fine stadium. The basketball facilities inside it are great. In fact, they are so good, we’d like you to coach the basketball team.”

That would seem pretty silly, wouldn’t it? Yet this exact thing happens in knowledge management with distressing frequency. If you believe, as I do, that knowledge management and learning are integrally connected, then the people who organize the organization’s knowledge, provide it to workers in the form of performance support, explicate tacit knowledge, and facilitate knowledge sharing need to be the same folks who are accountable for employees learning the skills they need to do their jobs.

This is why, in several organizations with which we work, L&D is called the Knowledge and Learning department.

Abstracted from Speed to Proficiency: Creating a Sustainable Competitive Advantage. In press.   (c) Bill Bruck, Ph.D., 2015


August 19, 2015

Minding our own Business

Not every performance problem is a training problem. We need to know which is which, and stick to the problems we can solve. Throwing training at every performance problem doesn’t work, and guess which department gets blamed when it doesn’t?

Some years back, I was called by a person who introduced herself as the director of a large botanical park. “We need motivational training,” she said.

“That’s great, I’m in the training business,” said I. “How many people do you need trained?”

“200,” she said.

“In what size groups?” I asked.

“What do you mean?” she replied. “All 200.”

“Oh,” said I, not quite understanding. “And how long do you have for the training?”

“An hour,” she said.

“I see,” I said. “So what you really want is a presentation.”

“No,” she said. “I need them trained.”

“OK,” said I. “Help me understand your problem a little better. Why do they need to be trained?”

“They aren’t motivated.”

“Is that something new? When did this start?”

“Three weeks ago,” she said. “When they were told that everyone would be fired next month.”

Obviously, this person did not have a training problem.

If someone held a gun to their head, the employees could have done their job. They had a motivational problem, one that would not be solved by training.

In their seminal 1970 book Analyzing Performance Problems; or “You Really Oughta Wanna,” Robert Mager and Peter Pipe suggested that “When faced with a discrepancy between the actual and the desired performance of a student, employee, or acquaintance, the usual course of action is to ‘train, transfer, or terminate’ the individual.”

As learning professionals, we need to determine whether performance problems are learning problems, or motivational or structural ones. This is a primary function of the analysis step in ADDIE: to analyze the source of the performance discrepancy. If people don’t have the skills, it’s a training problem. If they have them but don’t want to use them, it’s a motivational problem. And if they have the skills and want to do the job but have insufficient authority, resources, or time, it’s a structural problem.

In the situation I mentioned, one solution would have been to take every Friday and help the employees write resumes, build interviewing skills, and use online resources to find jobs, in exchange for good-hearted effort the other four days. Insofar as part of our charter in L&D is to be performance consultants, this would be a great approach. Other performance problems may be amenable to recognition, incentives, resourcing, or other management interventions that are simply not in the purview of L&D.

But the bottom line is that we can’t throw training at every performance problem. It simply won’t work.

Abstracted from Speed to Proficiency: Creating a Sustainable Competitive Advantage. In press. (c) Bill Bruck, Ph.D., 2015


Rapid eLearning is basically fancy content presentation. And if content equaled learning, universities could be replaced by libraries.

A few years ago, I was working with a company that needed to revamp their project management training. The old training department had very little power, and basically was tasked with feeding whatever slide decks subject matter experts (SMEs) gave them through Articulate, exporting the result to SCORM, and calling it an eLearning module. The result was 160 slides, each having an average of 50 words. Unsurprisingly, few people learned effectively from this monstrosity.

Lessons Learned

The lesson – which should not surprise us – is that SMEs are not instructional designers. There’s a big part of me that is surprised that this even needs to be said. However, there is a popular notion in the learning world that if you give a SME an authoring tool that is simple to use, then useful learning objects will be pooted out the back end. We call this rapid eLearning, and honestly, all too often I think that what is pooted out the back end is what one would expect to be pooted out the back end.

How does the instructional designer add value?

The instructional designer (ID) is charged with designing learning activities that will teach a person with certain defined entry-level abilities to manage a project. Thus, the ID needs to:

  • Identify the knowledge and skills required to do the job. (This is almost always a small subset of the knowledge in the noodle of the SMEs.)
  • Express these as learning objectives that distinguish what type of behavior is required, e.g., create a project’s work breakdown structure vs. analyze project risks vs. use provided reference materials to find the answers to obscure questions that come up from time to time.
  • Identify which elements need to be taught during the course, are assumed as prerequisites, need to be available as references, or are skills developed during post-training reinforcement.
  • Determine the best sequence for teaching the knowledge and skills.
  • Decide on the best modalities for instruction.
  • Determine the most appropriate way to assess how well learners can apply the skills on the job.

Call me old fashioned, but I think these things are important if we want people to actually be able to take new concepts and skills and apply them on the job.

Abstracted from Speed to Proficiency: Creating a Sustainable Competitive Advantage. In press.   (c) Bill Bruck, Ph.D., 2015


August 12, 2015

The Goal is Speed to Proficiency

[Figure: Awareness, Skillfulness, Proficiency]

Two very intelligent colleagues of mine, Christy Keener and Tom Hilgart, introduced me to this simple yet extremely useful model for thinking about what it takes to enable a person to be ready for their job.

Awareness is “knowing about.” It’s the ability to define terms, know where resources are located, explain a business process or guideline. For example, employees need to understand the policies related to personal time off; failing that, they need to know where to obtain this information. Awareness is often associated with some of the lower order verbs in Bloom’s taxonomy: the ability to state how many days off I will accrue this year, and to list the times when PTO is not required (for instance, bereavement leave). For some things, awareness suffices.

For other things, skillfulness is needed – the ability to apply defined processes and procedures in standard situations. This is often true when a person is supposed to refer complex situations to someone else. For instance, a Level I Customer Service Rep (CSR) should be able to quickly, confidently, and accurately use the job aids and past knowledge to answer a defined set of questions related to the product she is supporting. In addition, she should recognize calls that she is not qualified to answer, and be able to escalate them to a Level II CSR.

For our purposes, we can think of proficiency as the ability to do a complex task independently in novel situations. Another way of thinking about it: proficiency comes when we shift from asking for assistance to providing it to others. It is when a person is proficient at the various tasks comprising her job that readiness has been achieved.

So if we are in the readiness business (https://goo.gl/BSz2rY), we really need to start by understanding what readiness entails for each job our audience does. We need to base our planning for becoming a learning organization on an understanding of which skills and knowledge are optional and which are vital. We need to understand which areas a given person needs passing familiarity with, and where proficiency is required.

Achieving proficiency takes time and effort – on the part of the organization and on the part of the learner. Make no mistake: for those critical skills, it won’t just be about training. It will be a continuous learning process that may involve formal and informal learning, social learning, performance support, and coaching.

If you agree that the fundamental purpose of the learning organization is to promote readiness, then the primary goal should be to develop speed to proficiency in our interventions, recognizing that proficiency may take weeks or months to achieve.

 

Abstracted from a forthcoming book on learning effectiveness  (c) Bill Bruck, Ph.D., 2015


We need to develop learning objectives that relate to the type of action learners need to perform.

Learning Objectives

They say that good results without good planning come from good luck, not good management. I would suggest further that without good performance objectives, managers don’t even know when their reports have achieved the results they need.

Similarly, as we prepare people to be ready to do their jobs, we might say that good results without good learning objectives come from good luck, not good planning. I would also suggest that without good objectives, it is almost impossible to assess whether we are doing the job we told our customers we would do.

Of course, there is a discipline to creating SMART learning objectives. We have taxonomies to help us create good terminal objectives. Bloom’s and Gagne’s are two that have stood the test of time.

Work-related behaviors

But as we design our learning interventions, I think this misses an important point.

We need to categorize learning objectives by action type. At the end of the day, what type of action is required on the job? Do we want learners to perform some physical action? To speak effectively about something, or to listen actively and accurately? Or do we want them to be able to write something?

For instance, claim adjusters must provide written justification for their decisions that is (a) complete, (b) behavioral, (c) logical, and (d) consonant with contractual obligations and organizational guidelines. All that is great, but there’s one word that’s at the heart of it – “written.” They must be able to write.

And as someone who taught college for 20 years, the bottom line is that the only way to teach someone to write is to have them write, give them feedback and have them correct their work, and repeat the process.

While this seems simple, this fundamental approach – defining the type of behavior and then ensuring your training or other learning intervention produces that behavior – is violated right and left!

  • We provide eLearning courses that tell people how to write, then test them on whether they remember the rules we told them. This doesn’t produce a writer, but a person who is aware of writing rules.
  • We give them writing samples and ask them to identify the writing errors. This doesn’t produce a writer, but a copy editor.

This suggests that the learning intervention must include producing and correcting writing samples, or some equally focused intervention.

Similarly, we might ask:

  • What type of learning intervention is required if we want learners to be able to accurately and empathically listen?
  • What type of learning intervention should we use to help learners to verbally present information accurately and persuasively?
  • What do we need to do to facilitate learners demonstrating a computer skill or a physical action?

It might be an interesting exercise to look at the various courses and other activities available to learners, and audit them with the question: Does what the learner does and learns in this course connect with the behavior they need to perform on the job?

Remember: Without objectives we don’t know what we’re teaching. Without action-related verbs, we can’t connect what we’re teaching to what they need to do. It’s that simple.

Abstracted from a forthcoming book on learning effectiveness  (c) Bill Bruck, Ph.D., 2015

 


August 3, 2015

We must stop measuring learning

We must stop measuring learning for one simple reason. The only two people who care about your learning metrics are you and your mom. And honestly, she doesn’t care. She’s just pretending to be interested.

An executive team meeting at ABC Inc.

Imagine being a fly on the wall at the end-of-year wrap up meeting of the executive committee at ABC Inc. The CEO says, “This is the time. This is the place. We need to know where we stand so we can plan our strategy for next year. I need you all to report out for your operational function.”

The VP Marketing goes first. “We increased our budget and we have purchased space in 50% more trade journals this year! In addition, we upped our Google Ad budget by 200% and have placed ads throughout the Internet.”

The VP Sales is next. “I’m happy to report to you all that an audit of our sales calls shows that 93% of sales calls used the SPIN selling model. Not only that, but our inside sales representatives handled, on average, 12% more calls than they did last year!”

The CFO then reports that accountants’ sick leave is down 15%, resulting in a departmental cost savings of 5%, and that the new accounting system is in place and being used very effectively.

It would then come as no surprise that the CLO reports that 100% of employees took all required compliance courses; that there was an average 90% satisfaction rating for training; and that of the 600 courses in the catalog, 75% had been utilized by 15% or more of staff, who viewed them for an average of 15 minutes each.

It should also come as no surprise that either all these executives were fired, or the company went out of business, because no one had their eye on the business results.

Of course, activity-based reporting by a CFO or VPs of sales and marketing such as that outlined above is ludicrous and doesn’t really happen in a well-functioning business. The CEO wants to know about sales figures, not how many people use the SPIN selling model.

Unfortunately, while activity-based reporting by the CLO is equally ludicrous, it’s all too often the type of reporting that the metrics generated by the learning organization support. The fundamental failure to talk the language of business is why – as I’ve argued above – training is all too often the first to go.

We must do better. We can do better. We simply have to put our minds to it.

Kirkpatrick revisited

It’s somewhat fashionable to pooh-pooh Donald Kirkpatrick’s four levels of learning evaluation. Okay, okay, I guess there are newer, shinier models of learning assessment. But Kirkpatrick’s is an extremely workable model.

For those of you who are not familiar with it, Kirkpatrick suggests there is a hierarchy of evaluation methods.

  • Level 1 evaluations ask for learners’ reactions to a training course. Did they like it? This is the familiar “smile sheet” we often get after a training course.
  • Level 2 assesses whether, in fact, the person learned something. Did they retain the knowledge? This is often assessed using a post-test to see how much of the content was retained.
  • Level 3 looks at whether they can do their job better as a result of training. Did they transfer the learning from the classroom into the work site?
  • Level 4 asks a different question: even if they can do their job better, does that lead to a positive result for the business?

To this, Jack Phillips added a widely accepted Level 5, measuring return on investment.

We won the battle but lost the war

Many learning professionals argue that Level 4 and Level 5 are the ultimate goals for learning. I disagree. My point of view about this came from an experience I had a number of years ago.

We were training insurance underwriters, who in effect were on the income-producing side of the business. We had had a real success – by all measures, the underwriters were able to underwrite more business with less effort than they had in the past. When we did a debrief with the VP of underwriter training, however, he was not happy. Apparently, profitability was down. The folks who set underwriting policy had written policies and guidelines that were too “soft,” and loans were being approved that had a high default rate.

We trained the underwriters to use the guidelines they were given extremely effectively, and the more effectively they used the faulty guidelines, the worse it was for the business.

No one blamed us – thank God – but at the end of the day, profits were down, headcount had to be cut, and management was unhappy.

Level 3: The gold standard

What I took away from this, and what I continue to believe, is that for learning professionals, Level 3 is the gold standard. We “contract” with managers to help people do their jobs better, according to the policies, procedures, and guidelines they are given. We do not contract to produce business results. It’s a fine distinction, but an important one: there are simply too many other variables related to the market, the product, customer service, and so on.

That’s why I believe that our job is to demonstrate a Level 3 result – can they apply their skills and knowledge to do their jobs better? This is the sweet spot for learning professionals. This is what – if we are thinking about results – we promise our customers. This manager will be able to give more effective presentations. That project manager will be able to manage risks and issues effectively. This underwriter will be able to use the guidelines she’s given and underwrite business effectively.

We must measure productivity

At the meeting of ABC Inc.’s executive board, the reports should look like this:

The VP Marketing says that market research shows that brand awareness is up 35% in the critical 18-25 male market.

The VP Sales says that sales are up 10%.

The CFO lets folks know that the company made a profit.

The CLO lets everyone know that the VP Sales indicated that 100% of sales trainees were able to handle simple to moderately complex accounts without supervision, and the executive team has endorsed the readiness of five high potential candidates to move into AVP slots should the need arise.

The CLO is now talking the language of business, and reporting results that everyone – not just his mother – is interested in.

Abstracted from a forthcoming book on learning effectiveness  (c) Bill Bruck, Ph.D., 2015

