You may recall that behavioral psychology has had an important influence on the field of educational technology. The behaviorists insisted on measuring observable behaviors rather than on making inferences about thought processes.
This insistence on observable behaviors found its way into ISD by specifying outcomes in terms of behavioral goals and objectives. That means that we judge learners' success by comparing statements about what learners should be able to do with what they actually can do after they've participated in the instruction. That process might sound familiar as you reflect on the analysis process (optimals - actuals = gaps).
We can also use goals and objectives to help gauge the success of the instruction itself. We judge it successful to the extent that learners' performances following instruction correspond with the instructional goals and objectives.
Educational technologists have, over time, somewhat softened the strict behavioral interpretation of instructional outcomes. For example, a common instructional goal in education is that students will develop higher level thinking skills. "Higher level thinking skills," indeed any thinking skills, would be, if you will excuse the expression, unthinkable to the behaviorists.
To avoid confusion, we'll talk about instructional or performance goals and objectives rather than behavioral goals and objectives. But we mean the same thing as most people who use the older terms. That is, "What will learners be able to do after instruction?" Examining this question is the focus of Module 5.
Module 05: Connect
In this module:
- What will be done with the knowledge?
- Types of knowledge
- The content/performance matrix
- Clarifying "fuzzy" goals
Instructional goal analysis involves transforming broad, "fuzzy" goals into observable, measurable performances. This, in turn, means thinking about the range of possible performances.
The answer to the question, "What will learners be able to do after instruction?" will always fall into one of two broad categories. Learners will be able to remember specific things from the instruction, and/or they will be able to apply them.
What will be done with the knowledge?
Remember or apply? This is an important distinction. Who doesn't remember Einstein's famous formula, E=mc²? But how many of us can apply it to solve a problem? Sure, I recall that it has something to do with the relationship between matter and energy (the fact that they can change into one another, but can't be created or destroyed), and I even remember that, in English, the formula reads "Energy equals mass times the speed of light squared." That's about the extent of what I remember about E=mc². Ask me to apply the formula to solve a problem, however, and I couldn't do it without additional instruction.
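To make the remember/apply contrast concrete, here is a minimal sketch of actually applying the formula: computing the energy equivalent of a given mass. The scenario (one gram of matter) and the function name are our own illustration, not part of the module:

```python
# Applying E = mc^2: the energy (in joules) equivalent to a given mass (in kg).
SPEED_OF_LIGHT = 299_792_458  # meters per second

def energy_joules(mass_kg: float) -> float:
    """Return the rest-mass energy equivalent of mass_kg, in joules."""
    return mass_kg * SPEED_OF_LIGHT ** 2

# One gram of matter is equivalent to roughly 9 x 10^13 joules:
print(f"{energy_joules(0.001):.3e} J")  # prints "8.988e+13 J"
```

Remembering the formula is one performance; plugging real values into it to answer a question is a different, more demanding one.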
The same distinction holds true in corporate and military training situations. A manager or officer might be able to list or even describe, say, seven types of decision-making processes, but might not be able to actually implement any of them. Using or applying what we've learned requires more mental processing and more practice than just remembering it. That means more, longer, and more expensive instruction.
Sometimes just remembering is enough. If you can remember the principles on which the impressionist painters based their styles, that may be sufficient if you are, say, a museum guide. In this instance, a little bit of training, just enough to help you remember and explain the principles, is probably adequate. There is no need to devote extensive resources to train you to be an impressionist painter. Should you wish to apply the principles, however, and paint impressionistically, you will need to spend a good deal more time understanding and practicing.
In addition to determining whether your learners only need to remember or actually need to be able to use or apply their knowledge, you can classify knowledge in terms of four types: knowledge of facts, concepts, procedures, and principles.
Facts, Concepts, Procedures, and Principles
Educational technologists like to classify knowledge into these categories for a simple reason: we learn each of these in different ways. If we know what kind of knowledge we're trying to teach, we can more easily prescribe learning strategies. When analyzing instructional goals, we want to start thinking about what facts, concepts, procedures, and principles we want learners to know so that, later, when we specify instructional strategies, we can do so systematically.
[Note that some theorists have also identified process as a separate, unique classification. In EDTEC 540, we limit our discussion to the following four classifications.]
Facts are unpredictable associations among objects, symbols, or events. For example, see if you can predict what the first word of the text of the next chapter is. Now check to see if you were correct. Without astonishing luck, you were wrong. But it's not because you're stupid; it's because there is no method or rule for accurately predicting the first word of book chapters. You just have to know what it is. You could try any number of ways of predicting it, but eventually you'll have to throw up your hands and admit, "I don't know why it is what it is, it's just a fact."
Names, dates, and quantities are all facts. In 1770 you couldn't have predicted that the name of the first President of the United States would be George Washington. If we blindfolded you and dropped you onto a street in a town you have never visited, you would be unable to predict the name of that street without seeing a street sign. You could accurately predict the year of the next Olympic Games, because there is a rule to follow--the Olympics are held every four years. But you couldn't accurately predict the finishing times of the first three contestants in the 100-meter dash. Those are facts.
A good test to see whether something is a fact is to ask yourself, "If I didn't already know it, is there a rule or procedure that would let me accurately predict it?" If the answer is no, it's probably a fact.
A concept is a group of objects, symbols, or events that share common characteristics. "Computers" is a concept. There are many sizes, shapes, and types of computers but they all share some characteristics, such as having input and output devices and the ability to process information. You can think of a concept as a "definition."
"Elections" is a concept. Elections are held in many different countries, under different circumstances, and for different reasons, but they have some characteristics in common, such as campaigns, ballots, polling places, and so forth. One definition of an election is "choosing for office by vote."
If you're not sure whether or not something is a concept, you can ask yourself, "Is this a group?" My car, for example, would not be a concept, because it isn't a group. "Cars" would be a concept, though, because all cars share characteristics like wheels, motors, and frames. Could you predict what kind of car I drive? No? Then it's probably a fact that I drive a Mazda Musketeer, California license plate number "1 4 ALL."
You could also ask yourself, "Could I find a definition of this in the dictionary?" You would be hard pressed to find a definition of "Bob's car" in the dictionary, but you could easily find a definition of "car."
Procedures answer the "How" questions. If someone asks, "How do you conduct an effective sales meeting?" or "What's the best way to troubleshoot this toaster?" they're probably looking for a procedure.
A procedure is a step-by-step sequence for accomplishing a specific task. The procedure for brushing your teeth includes the steps: (1) locate toothbrush and toothpaste tube; (2) uncap toothpaste tube; (3) squeeze toothpaste onto toothbrush; and so on. This procedure only works for brushing teeth; it doesn't apply to, say, brushing a coat. Procedures apply only to specific performances.
Principles are guidelines or rules that can be applied in a range of situations. For example, the principle for spelling English words with i and e in them advises students to use "i before e, except after c." You can apply this rule to any number of English words, including ones you've never heard or seen before. Unlike a procedure, it applies across a range of situations.
Principles can also be used to predict things. "Red sky in the morning, sailors take warning; red sky at night, sailors' delight" is a guideline for predicting the weather. It doesn't explain how to do anything; it just helps you understand patterns or cause and effect.
The content/performance matrix
Merrill (1971, 1986) and others have combined these four types of instructional content--facts, concepts, procedures, and principles--with the two types of performances--remember and apply--into a content/performance matrix (Figure 5.1). We have modified this a bit for EDTEC 540, your foundation course in Educational Technology. As your graduate journey continues, you will build upon this foundation frequently.
              Fact                     Concept                Procedure            Principle
Apply         (none)                   Classify examples      Perform the steps    Apply a rule
Remember      Recall an association    Recall a definition    List the steps       State a rule

Figure 5.1. Content/performance matrix, adapted from Merrill (1971, 1986) and Clark (1994).
The cells of the matrix contain learner performances that represent apply or remember performances for each type of knowledge. For example, if you are teaching social studies students about the differences between republics and popular democracies, you might ask them to remember the definitions of each of these concepts. But if you really want them to be able to apply the concepts, give them a series of descriptions of real or imagined governments and ask them to classify each one as either a republic or a popular democracy. The students would be applying their understanding of the concepts.
Similarly, if your real estate sales agents are trying to learn to apply principles for matching a family with a neighborhood, you could give them some profiles of families and ask them to predict the best neighborhood to show them, based on the rules or principles. That would take a little more practice than just remembering the rules themselves.
Notice that there is no apply facts category in the content/performance matrix, since facts, by definition, are arbitrary associations, solitary instances. You can't classify instances of them; they are instances. You can't make inferences based on them, because facts only refer to themselves. They're certainly not rules, because they don't apply to a range of situations.
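One way to see the matrix's systematic character is to treat it as a simple lookup from knowledge type and performance level to a candidate performance. The sketch below is our own illustration of that idea, not an official rendering of Merrill's or Clark's model; the function name and the "classify examples" wording are assumptions:

```python
# The content/performance matrix as a lookup table (a sketch, not an
# official rendering of the model). Keys are (knowledge type, performance
# level); values are candidate learner performances from Figure 5.1.
MATRIX = {
    ("fact", "remember"): "recall an association",
    ("concept", "remember"): "recall a definition",
    ("concept", "apply"): "classify examples",
    ("procedure", "remember"): "list the steps",
    ("procedure", "apply"): "perform the steps",
    ("principle", "remember"): "state a rule",
    ("principle", "apply"): "apply a rule",
}

def suggest_performance(knowledge_type: str, level: str) -> str:
    """Return a candidate performance for a (knowledge type, level) pair."""
    if (knowledge_type, level) == ("fact", "apply"):
        # Facts are arbitrary associations; there is no apply-fact cell.
        return "no cell: facts can only be remembered"
    return MATRIX[(knowledge_type, level)]

print(suggest_performance("procedure", "apply"))  # prints "perform the steps"
```

Later, when you specify instructional strategies, this kind of systematic pairing is exactly what lets you move from "what kind of knowledge is this?" to "what should learners be asked to do?"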
Clarifying "fuzzy" goals
Goal analysis is essentially the process of taking what Mager (1984) calls vague, "fuzzy" goals, and operationalizing them. That just means stating your goals as observable performances. This is important for several reasons.
First, if your goals are "fuzzy," your instruction will be fuzzy too. For example, if you decided you wanted your paratroopers to "know how to parachute safely," a conversation between you and your sponsor might go something like this:
- "Yes, sir, I taught them all there is to know about parachute safety."
- "Splendid. They know about strapping on the parachute?"
- "Yes, sir, I told them all about that."
- "Terrific. They know about hooking onto the jump line?"
- "Absolutely, sir, I explained the whole thing."
- "They know when to yell 'Geronimo'?"
- "They've got the pronunciation perfectly, sir."
- "All right, then, I'm taking them up this afternoon for their first jump."
- "Oh, I don't think I'd do that sir!"
- "Why not? I thought you taught them all they need to know about parachute safety?"
- "I did, sir, but they can't do any of it!"
That's fuzzy instruction resulting from a "fuzzy" goal: the instruction may match the goal perfectly, but if the goal isn't appropriate to the performance and stated explicitly, the instruction will miss the mark.
A second reason for stating goals as observable performances is that it helps the learners budget their resources. If they know they only need to learn to "list the five steps to parachute safety," they can muster a modest effort. On the other hand, if they know they will need to demonstrate those steps, they will budget additional time and effort to master the performance.
In the beginning of a project, you and your sponsor will almost certainly discuss the instructional goals in broad, fuzzy terms. But during the goal analysis phase, you'll want to assign specific performances to those goals, and state them explicitly. Later, in the objectives and test item writing stage, you'll use these same performances as a basis for assessing mastery.
In this section, we've adapted Mager's five-step method of goal analysis. The mnemonic "LACCE" may help you remember the five steps: (1) List the goals; (2) Assign performances; (3) Consolidate; (4) Complete statements; and (5) Evaluate. Here's how to do each step.
List the goals
Together with your sponsor, client, staff--anyone who may have a stake in the instructional goals of the project--make a comprehensive list of everything you think you want the learners to accomplish. Don't worry about how they're stated at this point, just get them all down on paper. These are what Mager refers to as "fuzzy" goals.
A while back, one of our Educational Technology students worked on a training video for line workers in a tuna processing company. The company had several processing plants, and some of them were more efficient and successful than others. The goal of the program was to help the less organized plants adopt the practices of their more successful counterparts, and to reduce training costs associated with high turnover of employees (would you want to clean tuna eight hours a day for the rest of your life?). They had multiple goals for the project. Their initial list might have looked something like this:
- The big picture
Sometimes people get a little long-winded and come out with phrases like, "We want that the quality of our products are consistent and of extremely high quality." That's fine. Just jot it down the way it comes out at first. The rest of the LACCE method will smooth out the bumps.
Assign performances

Here's where the content/performance matrix (Figure 5.1) helps you. For each "fuzzy" goal, list one or more performances that you would consider evidence of mastery. Do this with your sponsor or client, if possible. At the very least, submit the list to them for approval.
Refer to the content/performance matrix for ideas about appropriate performances. For our tuna canning example, above, the list might look something like this:
- Demonstrate safe knife handling
- List and justify rules for walking instead of running and for cleaning up spills
- Demonstrate appropriate dress, including safe shoes, no jewelry, and no loose clothing
- Demonstrate appropriate dress, including hair net, long sleeves, and surgical gloves
- List important rules for washing hands after using the toilet, and for handling minor injuries
- Identify and describe the characteristics of a quality product
- Separate good and poor quality product based on the characteristics
- Explain the importance of consistent product quality
- Discriminate between acceptable and unacceptable products
- Clean at least 3 tuna per minute
Notice that several of the goals involve subjective feelings, or "affective" states, rather than performances. You can't observe them, let alone measure them. That's OK. You can use what we call "indicator performances." Think of things people might do that would indicate to you that they were feeling or thinking this thing or that. For example:
- Express empathy and support for coworkers
- Recite the company slogan
You might protest that reciting the company slogan, for example, does not really prove that a person takes pride in their work or in their role in the company. You could argue that the only way to really find out whether workers feel pride is through extensive interviews or even subtle psychological tests, or perhaps through indirect means such as tracking whether they take part in non-compulsory company activities.
The problem is that these alternatives are expensive to conduct. You have to balance the importance of a given goal against the cost of a specific indicator performance. Remembering the company slogan might not very accurately indicate feelings of pride, but not remembering might indicate that the individual is unaware of organizational values. That might be enough to satisfy you and your sponsor.
Consolidate

The next step in the instructional goal analysis process is to consolidate goals and performances that seem redundant. For example, in the tuna canning list, you might decide that "quality" and "consistency" are really two sides of the same coin. When you look at the performances, you realize that if people can discriminate quality product, then they are also ensuring consistency in their work. You might consolidate the two goals like this:
- Identify and describe the characteristics of a quality product
- Separate good and poor quality product based on the characteristics
- Explain the importance of consistent product quality
You can use the process of consolidating duplicate performances to further ensure that you're stating them in an "observable" way.
Complete statements

Now you're ready to write complete sentences that clearly and succinctly state what you and your sponsor want learners to be able to do following instruction. Each sentence should include "who" is doing "what" and how well they need to do it. For example, the "apply concept" performance for the "Quality" goal above might be written as:
"Cannery workers will be able to separate all the good and poor quality product based on the characteristics presented in the instruction."
Notice we've added the word "all." That qualifier makes the performance more specific than it was. Until we put that word in, we didn't know whether discriminating between good and poor quality product fifty percent of the time was acceptable, or whether more or less consistency was needed. Now we know: we want workers to separate all of them. If they miss one, they need more help; if they separate them all, they pass. "How well" depends on the situation. Airline pilots "always" need to be able to land an aircraft. History students may only need to state three of the four causes of the American Civil War.
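The "how well" qualifier can be sketched as a simple mastery check. The function below is our own illustration (the name, the sample scores, and the thresholds are all assumptions, not from the module); it shows how "all" and "three of four" are just different values of the same criterion:

```python
def meets_criterion(results, required_fraction=1.0):
    """Return True if the learner's fraction of correct responses meets
    the criterion. required_fraction=1.0 encodes the "all" qualifier; a
    history teacher might instead use 0.75 (three of the four causes)."""
    correct = sum(results)
    return correct / len(results) >= required_fraction

# A hypothetical cannery worker sorts 20 items and misclassifies one:
scores = [True] * 19 + [False]
print(meets_criterion(scores))        # prints "False": misses the "all" criterion
print(meets_criterion(scores, 0.75))  # prints "True": passes a looser criterion
```

The point is not the code itself but that the criterion is explicit: you and your sponsor decide, in advance, what fraction counts as mastery.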
Evaluate

This step is a review--a final check to see whether your observable goals stand up to close scrutiny. Look over the set of performances for each goal, and ask yourself, "If the learner actually does this, am I willing to say he or she has met the goal?"
If you are, then the performance is a valid indicator. If you're not, then you should go back and revise that performance or rethink the goal itself.
Instructional goal analysis involves restating the broad, "fuzzy" goals (with which you and your sponsor or client inevitably started) as specific, observable performances. Most projects involve either remembering or applying four types of knowledge, summarized in the content/performance matrix (Figure 5.1). Following the five steps of instructional goal analysis results in clear goal statements specifying the performances you and your sponsor are willing to accept as indicating mastery of instructional goals.
Module 05: Apply
Test your knowledge by completing the quiz. The link is in the 'Content Modules' page in Blackboard, just below this Module 05 link.
Module 05: Reflect
Presenting Results of Your Analysis
Wherever you are in your Performance Analysis process, I want to make sure you have the opportunity to ask questions. So, I've established a performance analysis coaching discussion board topic. One question that often comes up is this: "Is there one 'best' way to gather data and present results?" The answer is no. It all depends on the initiating challenge, the findings, and the envisioned solution. Presenting your results in a report is, in fact, an instructional design effort. Successful reports "educate" the reader. So, this too is an opportunity for you to think about your audience (me--or, if helpful, think of a supervisor) and your content (data analysis and recommendations) and bring them together into a successful report.
Feel free to visit the board to post your challenges and ideas, and I'll add my advice too.
Module 05: Extend
Overview of this section:
People in action
For more information
People in action
When we last left Roberto, the vice president was upset because performance rating forms were not being completed correctly at the retail stores, and insisted that "some sort of training program" needed to be set up to "correct these errors." Rather than blindly accepting the word of the vice president, Roberto convinced her that a goal analysis would be a better starting point.
They began by calling a meeting of six people: Roberto and the vice president, a store supervisor, and three managers from the camping, water sports, and biking departments at three different stores. Their first step was to list the goals. They decided to concentrate on two: the forms should be complete, and they should be turned in on time.
Assigning and consolidating performances turned out to be easy for the goal of turning reports in on time (a remembering procedure and applying procedure skill), but a bit harder for the goal of completing the forms. As they listed the behaviors, it turned out that this goal not only meant completing all empty spaces (remembering concept and applying procedure), but that the information included in the form needed to accurately reflect the actions, attitudes, and performances of the employee being evaluated (a remembering and applying principles skill).
Statements generated to identify whether the goal had been achieved included the following. The department manager will:

1. Provide the performance rating rubric to all employees within one week after hire.
2. Discuss the components of the form with employees, and obtain the employee's signature confirming that they understand the form and its significance.
3. Rate the performance of the employee on each of the six categories in the 5-point evaluation rubric.
4. Submit performance evaluations on February 1 and July 1 for all employees within the supervisor's department.
5. Schedule and conduct an individual meeting with each employee to discuss their evaluation within 15 days after the due dates.
The goal analysis process took the group most of one morning to complete. At the conclusion, the vice president, store manager, and department managers all felt that these performances would indicate that the managers would be accomplishing the original goal of "filling out these forms correctly." Questions remained, however, as to what sort of rubric would have to be generated to use during the evaluation, but all felt that a new form was needed rather than the previous one the company had been using.
For more information
Clark, R. (1994). Developing technical training. Phoenix, AZ: Buzzards Bay Press.
Mager, R. (1984). Goal analysis. Belmont, CA: Lake Publishing Company.
Merrill, M. D. (1971). Necessary psychological conditions for defining instructional outcomes. In M. D. Merrill (Ed.), Instructional design: Readings. Englewood Cliffs, N.J.: Prentice Hall.
Merrill, M. D. (1986). Component display theory. In C. M. Reigeluth (Ed.), Instructional-design theories and models: An overview of their current status. Hillsdale, N.J.: Lawrence Erlbaum Associates.
Page authors: Bob Hoffman & Donn Ritchie Last updated, Marshall: Spring, 2006.
All rights reserved