ADDIE - An Introduction


A systematic approach to developing instruction

Systematic means looking at the whole as more than a collection of its parts. It suggests the importance of considering the complex interrelationships among parts instead of viewing each part in isolation. The idea of a system also suggests purpose. Ignition systems, health insurance systems, color systems, and systems of government all derive their importance from their utility in automobiles, in society, or in some particular enterprise.

So it is with instructional development. Someone, whether a manager, a superior officer, a principal, the community at large, or you yourself, has identified a need for some instruction. Let's say you're developing a driving unit. It could be a driver education unit for a high school, a submarine driving unit for the Navy, or a forklift driving unit for your company. You could develop your instruction casually, starting, say, by drawing some diagrams of the automobile/submarine/forklift dashboard with all the dials and gauges. There's a high risk that you might discover later, however, that the diagram isn't really needed, or that it doesn't have just the right features or labels, or that it includes too much information for the learners. In short, it will cost time and effort to fix it.

This rather pedestrian example demonstrates the need for a system of instructional design. Its purpose is to rationalize and streamline the instructional development process. It helps educational technologists account for important characteristics of a given situation, like the reading level of the learners. It guides decision making about instructional strategies, media, and more.

Whether you're an instructional designer, teacher, performance coach, or even a project manager, the instructional design system has value. It is a way of thinking and planning that increases the chances that any developed solution will result in measurable improvement.

One of the best-known systematic approaches to instructional design goes by its acronym, ADDIE, the topic of this module. Your assignments for EDTEC 540 will require you to follow the ADDIE process to analyze a performance problem, then design and develop a product. More on that at the end of this module.


ADDIE - Connect

In this section:

* Introduction
* Analysis
* Design
* Development
* Implementation
* Evaluation
* Summary

 

Introduction

ADDIE stands for Analysis, Design, Development, Implementation, and Evaluation. These are the main stages in the creation of instructional products and programs. ADDIE itself has gone through some maturation. In its earliest conceptualizations, the stages were seen as distinct steps in a linear process. Following the ADDIE model meant beginning with analysis, then, based on the results of the analysis, designing the product or program, then developing it, and finally implementing and evaluating it.

This so-called "waterfall" approach, in which one stage gives way inexorably to the next, proved increasingly inefficient as instructional products and programs grew in complexity and cost. Mistakes in the early phases, such as analysis or design, might not show up until the very end of the process. By then, resources had been expended and it was too late to fix the instruction. As a result, educational technologists began making ADDIE more of a research and development model. Instead of waiting until the end of the project to evaluate, they adopted techniques such as rapid prototyping that provide early feedback on problematic issues. Formative, in-process evaluation is now integral to every phase of ADDIE.


Moreover, educational technologists view the entire ADDIE process as iterative; that is, none of the phases is really complete until the entire project is finished. For example, developers who find a problem in an instructional sequence might send it back to the design phase, or even back for further analysis. A modified representation of ADDIE might look something like this, based on Ritchie's model (Figure 1).


Figure 1. The iterative ADDIE model (after Ritchie, 1992).

Let's look at each phase of the ADDIE model to get an overview of the instructional design process. Ensuing modules will more closely examine each of these phases.  

Take note: EDTEC 540 involves problem-based learning. The assignments you will complete for this class are grounded in a performance problem/challenge of your choosing. Your successful analysis of the chosen situation, and subsequent design and development of a performance support tool (job aid), will require 'walking the talk' as you work through each phase of the ADDIE model.


Analysis

The analysis phase of the ADDIE process entails two major steps, each one consisting of several activities:

Performance Analysis (PA)
* front-end analysis
* cause analysis

Training Needs Assessment (TNA)
* audience analysis
* instructional goal analysis
* subject matter or task analysis


As with the rest of the ADDIE model, educational technologists don't necessarily perform these activities sequentially. We do, however, almost always begin with performance analysis before we engage in training needs assessment.

Performance Analysis
The purpose of performance analysis (PA) is to determine the necessary mix of actions--the solution system--that will foster people's ability to accomplish their goals and the goals of their organization. According to Rossett (1987), to arrive at an appropriate solution system, we want to know:

  1. What is the current performance?
  2. What is the desired performance?
  3. What will enable or hinder people in attaining the desired performance?

The answers to the last question usually fall into four broad categories. To perform successfully, people need:
  1. motivation;
  2. skills and knowledge;
  3. appropriate tools and environment; and
  4. incentive, or organizational support.
The solution system consists of specific actions to develop each of these conditions. Only after it is certain that a training, coaching, or information solution is appropriate (which we find out through PA) is the investment made in a more substantial training needs assessment. In other words, performance analysis is a precursor to the elaborated and developmental planning involved in a training needs assessment associated with the production of a particular solution, like a class, a series of job aids, or a multimedia program.

Educational technologists conducting performance analysis do so by gathering information from a variety of sources. For information on the desired performance, we turn to organization executives or administrators, as well as subject matter experts, customers, benchmark organizations, and the literature in the field. For information on the current performance, we look to managers or supervisors, employees, customers, and to extant data, such as accident reports, standardized test scores, company policies, and year-end reports. For data on what will enable or hinder the desired performance, we tap subject matter experts, managers or supervisors, employees, benchmark organizations, and the literature. No one source has all the answers. No answer is reliable until it is confirmed by other sources.

To get information from these sources, educational technologists use interviews, surveys, direct observation, literature reviews, focus groups, and other tools. The results of performance analysis, including the recommended solution system, take the form of a report to the sponsor.

Training Needs Assessment
When the list of performance needs includes enhancing skills and knowledge or improving motivation, specific solutions that frequently recommend themselves are education, training, and/or information support. When that is the case, educational technologists go on to understand the learners in more detail and to articulate the subject matter or tasks involved.

The analysis phase of the ADDIE process includes audience analysis. During performance analysis we focus broadly on the skills and knowledge as well as the motivation of the performers. Once we have established that training is part of the solution system, we need to be more specific about the relevant characteristics of the learners. Useful information includes a number of factors. Learners' existing skills and knowledge, often termed "prior knowledge," are important to know for two reasons. First, your instruction can help learners build on what they already know or know how to do. Second, you can avoid spending resources teaching people what they already know and can do.

Assessing learners' attitudes toward the instructional content is also useful. Are they motivated to learn this content, or will you need to foster motivation? Learners' ability levels are also important, particularly with respect to their stage of cognitive development (e.g., are they concrete or abstract thinkers?) and their reading ability (e.g., 4th or 8th grade, and in what language?).

What are the learners' preferences with respect to modes of learning and delivery systems? Are they used to seminars, lecture/demonstrations, cooperative groups, or hands-on workshops? Are they comfortable with textbooks, video, computer-based instruction? Do they prefer teacher-centered or learner-centered instruction? How easily will they adapt to new methods or media?

Subject matter or task analysis is also part of the analysis stage of the ADDIE model. These activities enable the educational technologist to identify the specific skills and knowledge learners need in order to accomplish the instructional goals. There are several methods of task analysis, depending on the type of content. When the instruction deals mainly with facts or concepts, you might opt for a simple outline featuring main topics, subtopics, and so forth as needed. A flow chart may work best when processes, procedures, or principles make up the bulk of the content. Making a flow chart involves identifying and sequencing the steps or stages of the procedure or process, noting any subprocesses or skills involved, and adding concepts that the learners need to accomplish each of the main steps. Finally, it's important to distinguish what the learners already know about the task, what they might need to review, and what will be entirely new to them.

As mentioned above, you may revisit the analysis phase of the ADDIE process from time to time during the course of a project. For example, you might obtain new information about the learners' prior knowledge that prompts you to reconsider the priorities within the task analysis. Or you might develop a new perspective with respect to the level of motivation and want to go back to the performance analysis to reconfigure the solution system.

In these times of rapidly changing technology and access to information, it is not unlikely that your initial assessment of your learners will become outdated over the course of a longer project. For example, in early 1995, the majority of students entering our educational technology master's degree program had little experience with the World Wide Web. By the end of the year, however, the situation had changed significantly. We could assume that most students by then had at least been exposed to the Web, and many used it routinely. By the summer of 1996, growing numbers of our students had access to the Web from their home computers. This is in contrast to the relatively small number of incoming students who even had a home computer in the late 1980s. Our learners had changed, and we needed to change our program accordingly.


Design

The design phase of the ADDIE process involves four activities: drafting instructional objectives, drafting test items for measuring performance, specifying instructional strategies, and selecting media.

Instructional objectives are the focal point of much of what we call education and training. They serve to keep designers focused on the instructional goals, and they help learners figure out what they're supposed to be learning so they can pay attention to the right stuff. Instructional objectives, according to Mager (1984), consist of three elements: (1) the performance, or what the learner must do to demonstrate mastery of the objective; (2) the conditions under which they will perform; and (3) the criteria for recognizing a successful performance.


For example, one of the objectives for this chapter might be:


" Given the ADDIE acronym, educational technology students will be able to verbally state the phase represented by each letter."


In this example, the verb "verbally state" represents the performance element--that's what the students will actually do to demonstrate their mastery of the objectives. The condition is "Given the ADDIE acronym." That's important, because it specifies that we're not expecting the students to be able to recall the acronym itself. We don't have to expend resources designing and implementing instruction and practice to help them remember it, and the students themselves know they don't have to spend time and energy memorizing it. It will be given. The third element in this example objective, the criteria, is the word "each." That means that in order to demonstrate mastery of this objective, the students must name all five phases of the ADDIE process. Four will not be adequate. Naming the first phase "Assessment" instead of "Analysis" won't do.


You may be asking yourself, "What's the difference between instructional goals and instructional objectives? They both describe some performance designed to convince you that the learner has mastered the skill or knowledge." Good question. In fact, sometimes they are pretty much the same, especially for small bits of instruction. But in the real world, where you are often working on comparatively large projects, there are two differences between goals and objectives.

First, goals are bigger, more encompassing, and, in spite of Mager, may be admittedly "fuzzy." Take the California History-Social Science Framework, a book published by the State of California outlining the goals and curriculum strands for K-12 education. One broad category of goals within the "knowledge and cultural understanding" strand, for example, is entitled "Historical Literacy." It, in turn, is made up of subgoals such as "Students must understand the meaning of time and chronology." That is further explained, in part, by the statement that "Children must learn the meaning of terms such as decade, generation, century, and so on." These statements are all what a purist might call "fuzzy"--they lack an observable performance--and, except for the last example, are pretty broad. After all, an astronomer, an anthropologist, and a social historian might have very different concepts of "the meaning of time and chronology."

In spite of this, the social studies teachers in California are not entirely on their own with respect to instructional objectives. The statement that "Children must learn the meaning of terms such as decade, generation, century, and so on" lends itself to transformation into a number of possible objectives, depending on how you interpret the term "learn." For example, one teacher might decide that:


" Without looking in their books, children will be able to define the terms decade, generation, and century."

Another might specify that:

" Given a topic about California history, children will be able to use the terms decade, generation, and century in a paper."

A second, technical distinction between goals and objectives, in Mager's terms, is that goals do not necessarily have conditions and criteria, as do objectives. Objectives, in this sense, are somewhat more specific than goals in that they specify more about the performance.

In any case, the design phase of the ADDIE process involves taking the instructional goals and transforming them or breaking them down into specific instructional objectives, based on the knowledge of your learners and the subject matter that you obtained by conducting the audience and task analyses during the analysis phase. The key is for your objectives to reflect the real outcomes you and your sponsor are seeking. Why make people memorize the definition of a blunget when you really want them to be able to use one? Get clear about what people really need.


One of the features of the ADDIE model that sometimes surprises new educational technologists is the idea that test items can and should be prepared before the instruction is developed, and even before the instructional strategies are chosen. But if you think about it for a minute, it makes perfect sense.

Once you have drafted instructional objectives, you know exactly what the learners must do to demonstrate mastery. That makes it relatively straightforward to translate the objective's performance into a test item. Using the example above, the social studies teacher might write a test item that requires students to "Define the term decade."

Now you're in a position to design instruction that will help students do exactly that. If this smacks of "teaching to the test," well, it is! The pejorative connotation of "teaching to the test" really has more to do with the quality of the test itself. If you teach to a poorly designed test, you'll be giving poor instruction. But teaching to a good test, a realistic assessment of useful skills and knowledge, is the only fair, effective course. After all, how would you like someone to ask you to demonstrate your mastery of some knowledge or a skill they never taught you? Drafting test items before developing the instruction helps you guard against this pitfall.

Another part of the design phase of the ADDIE model is specifying instructional strategies. People learn different kinds of content in different ways. For example, if you wanted to understand the concept of a tranboard, you might want to see several examples of tranboards, see a definition of a tranboard, including its component parts and function, and perhaps practice discriminating between tranboards and those deceptively similar clontacks.

On the other hand, if you wanted to learn the procedure for loading and releasing a clontack, you might find a step-by-step demonstration, a diagram with all the relevant parts labeled, and some hands-on practice loading and releasing a dummy clontack more valuable than seeing a definition and examples.

Specifying instructional strategies comes down to two tasks: (1) "chunking" and categorizing content according to the type of knowledge; and (2) assigning appropriate learning materials and activities for each chunk. A "chunk" is the smallest unit of instruction you plan to develop. The size of a chunk depends on the levels of ability and prior knowledge of your learners (which you understand, having conducted an audience analysis) and the level of detail of the content (which you know from your subject matter/task analysis).

Another task that falls under the heading of design is selecting appropriate media with which to deliver the instruction. This involves several considerations including "what's available?" "what do the learners like to use?" and "what will it cost?" Some of the possibilities for delivering instruction include: print media, in the form of manuals, job aids, textbooks, workbooks, and workshop handouts; electronic media, which in turn range from narrowcast media such as videotape, audio tape, and CD-ROM, to broadcast media such as the World Wide Web, television and radio; and face-to-face instruction. In most instances, you'll use a mix of several of these media to deliver your instruction.

Educational technologists pay careful attention to which media best provide the appropriate representation for the content and audience at an acceptable cost. Reading pages upon pages of text on video or even computer screens is unlikely to be a big hit. Demonstrating a procedure in print can be less than illuminating. Color photographs are pretty, but what will they cost to print and will color really add instructional value in this instance? These are the kinds of considerations involved in media selection.

Should you finish writing your instructional objectives and test items and specifying your instructional strategies before beginning the development phase of ADDIE? Not necessarily. Perhaps you are not quite sure about the size of "chunks" you need, or you want to try an instructional strategy that you're not positive is going to work. It might be wise to begin development by constructing some small, inexpensive prototypes designed to help you answer these questions before you commit significant resources to full-scale development. This is known as rapid prototyping, and it can feed valuable data back into the design phase.

Development

The development phase of the ADDIE process is as varied as the media you have selected, the instructional strategies you have specified, and the content you have outlined. This is where you turn your design into interactive multimedia, workbooks, Web pages, and stand-up training. The content is fleshed out based on the instructional strategies, and produced for the selected media.

There are three broad types of development processes, based on the type of delivery method you've selected: print, electronic, or face-to-face training. As an educational technologist, you may be involved in one or all of these in a given project. Developing print media such as textbooks, workbooks, job aids, and so forth usually involves a team that includes instructional designers, writers, copy editors, picture editors, graphic artists, photographers, page layout artists, proofreaders, and press people. Depending on the size of your organization and the nature of the project, there may be additional specialists, or some of these roles may be combined in a single individual. A classroom teacher might do all these tasks herself. An instructional designer in an educational multimedia production company might just draft lessons day in and day out.

Electronic delivery systems, such as video and multimedia, require a somewhat different development process, though the system as a whole is analogous to the development of print media. Again, nearly all projects involve a team, which may include instructional designers, producers, directors, storyboard artists, script writers, computer graphic and animation artists, videographers, sound technicians, lighting technicians, actors, video editors, computer programmers, program evaluators, and duplication and distribution people. Many of these roles may be combined in a single individual for a small project, and many additional people may be involved in a large project. The role the educational technologist can play ranges from writing lessons to supervising or managing the entire project.

Developing stand-up training typically involves some of the same print and electronic media processes in addition to preparing the trainers themselves. Let's say your organization is putting together a workshop on leadership for company managers, school administrators, or military officers. Let's further assume that you need to offer the workshop in a number of locations simultaneously so that several trainers will be involved. There's an accompanying workbook that gets developed in much the same way any print material is produced. There's also a video and some computer overheads, and those are developed pretty much like other electronic media. The team that develops the workshop itself is made up mainly of instructional designers and writers (who may be one and the same). In addition to generating the workshop materials themselves, they create trainer's materials, develop schedules to include breaks and administrative functions like introductions and workshop evaluation, and so on.

As you can imagine, almost every project you'll work on involves some mix of these distinctive development processes, with the possible exception of print-only projects. Rarely does multimedia training go out the door without accompanying print material. Rarely do stand-up trainers go out on the road without both print and electronic support materials. The educational technologist comfortable with each of these development processes is in a good position to design for and manage interesting projects.

As mentioned earlier, it's important to remember that evaluation plays an important role throughout the development process. Trying out small portions of the instruction with actual learners before committing to full scale development is the only responsible course. Testing larger chunks to see that learners can use the instruction to master the objectives is also important. Thoroughly testing each computer program before it is released preempts costly redistribution and preserves your organization's reputation for "doing it right."

Implementation

Implementation means delivering the instruction to the learners. It means distributing the textbooks, videotapes, workbooks, and/or login information. It means putting the Web site online or broadcasting the lessons. It means scheduling the workshops and courses and sending out the trainers (or bringing in the participants).

In addition to stand-up training skills for workshops and courses, educational technologists need good administrative skills when it comes to implementation. We need to be organized, able to plan for both foreseeable and unforeseeable contingencies, and flexible enough to adapt to changing circumstances.

There are a number of issues involved in implementation. For print-based instruction, how will it be duplicated and distributed? To whom will it go? How will you know if anyone is using it? Where will it live once the primary user is finished with it? How will you know whether it worked? How will you make sure all copies are updated when you make the inevitable revisions? Is there proprietary information in the documents that requires some degree of security? Who will be responsible for maintaining that security? In large organizations these can be significant issues.

For electronic instruction, most of the same issues apply, along with a few more. Will the software work on all platforms within your organization? If not, how will learners access the hardware and software they need? Where will learners view videotapes? What are the minimum specifications for the viewing location? Who will provide the help line for software problems? Who will staff the help line?

While print-based and electronic instruction are designed to stand alone for the most part, stand-up training represents perhaps the greatest challenge in the implementation phase. Who will coordinate scheduling of facilities, trainers, and participants? Will training take place in a central facility or "out in the field?" How will trainers and/or participants get from one place to another? How will the workshop materials get to the site? Will the trainers need specific equipment? Will they need technical support on site? Will they need meals and lodging?

The implementation phase of the ADDIE process is inextricably interwoven with evaluation. As learners receive the instruction, you'll want to know whether they like it, whether they're learning from it, whether they're using it, and whether it's making any difference. If the program proves weak in any of these areas, you'll want to go back, perhaps all the way back to performance analysis in some cases, to revise your approach.


Evaluation

You've already noticed that evaluation is integral to every phase of the ADDIE model. Deferring evaluation to the end of a project is ill advised, since by that time there is not much you can do about the inevitable weaknesses you uncover. In general, there are two types of evaluation, formative and summative. As their names imply, we conduct formative evaluation during the analysis, design, and development stages of the project, and summative evaluation during or after product or program implementation.

Formative evaluation varies depending on the type of instruction you're creating. For instructional products--either print or electronic--you can apply the rapid prototyping method, coupled with usability and learnability testing. Prototypes are partial realizations of end products. Prototypes can take the form of outlines, storyboards, mockups, simulations, or simply preliminary versions of the product with partial implementation of features, content, or both. Software developers often have quality control departments that specialize in usability testing, as well as software bug-testing. Instructional developers need to test learnability as well as usability, for both print-based and electronic instruction.

Usability testing finds out how easy it is to use your product. To test the usability of a product, you need a few individuals (anywhere from two to a dozen, depending on specific need and availability) who are unfamiliar with your product's development. It's vital that they be similar in important ways to the product's target audience. For example, you probably wouldn't want college students to test the navigation system of an educational software game for 2nd grade students. You'll need some real 2nd graders to do that, because they have different cognitive characteristics than older students.

You ask your "subjects" to use the product prototype for a short time. You might observe them, ask them to "think aloud" as they use the product, ask them to predict what they think will happen when they take some action (like clicking on the "next" button of a software program), ask them questions, and/or debrief them when they've finished using the product.

You can use a parallel approach for learnability. Learnability testing checks whether the instructional strategies are succeeding. Again, let your learners interact with the product. See whether your learners are using the resources you've provided, or whether they need additional ones. Give them a short pre- and posttest to find out what they've learned. Ask them whether they enjoyed the instruction and whether they felt they learned from it. Ask for their suggestions for improving the product. You may be able to conduct both usability and learnability tests simultaneously to save time.

Formative evaluation for instructional program development is usually a little different, since there are few, if any, "products" to prototype and test for usability and learnability. In the case of school classrooms and other ongoing training programs, new lessons or units can be piloted as parts of larger programs. Plugging a new module into an existing curriculum for a while can give you some idea of how well the new instruction will work. Entirely new workshops or courses may need to be piloted with small groups, with intensive evaluation and revision, before you distribute them widely.

Summative evaluation is aimed more at demonstrating to sponsors how effective a finished instructional product or program is. Specific purposes range from accountability to funding sources, to providing a model for dissemination, to justifying further resource commitment. The tools of summative evaluation include both quantitative and qualitative methods. Quantitative methods include standardized and custom-made achievement tests, attitudinal surveys and questionnaires, and unobtrusive data gathering on performance measures such as productivity, accuracy, or sales. Qualitative methods include case studies, peer or supervisory reviews, and so forth.

If formative evaluation is well planned and executed and the results are used to adjust the design and development of the product or program, the results of summative evaluation should be a foregone conclusion. Attention to formative evaluation at every stage of the ADDIE model is a key to success.

Summary

ADDIE is not the only incarnation of Instructional Systems Design, and there are many variations on ADDIE itself. If you ask a dozen educational technologists whether they use ADDIE in their organization you'll get a range of answers from "No, we use the 'X' system instead," to "Well, yes, but we don't call it that," to "Um, sort of, but we modify it a little," to "Yes, it's just like we learned it in graduate school," to "We do, but we never do enough ..." (fill in your favorite ADDIE stage--usually analysis or evaluation). If you ask any of these folks exactly what they do, however, you'll usually discover that much of ADDIE is there in one form or another. Every organization has its own "take" on ISD, or its own peculiar practices or habits, but any organization that doesn't use a systematic approach to instructional systems design isn't usually around long enough to want to hire you.



ADDIE - Apply


We have created a short "quiz" using the Blackboard testing tool. You can take this quiz to test your knowledge of the Module 03 content. This quiz is not graded, nor will your performance be reviewed by your instructor. You will find the quiz linked under Content Modules in Blackboard.


ADDIE - Reflect

As mentioned at the beginning of this module, our semester assignments are designed to generally match the ADDIE process. Using the topic you select, you will analyze a performance problem, then design and develop a performance support tool (job aid).

Success on these projects begins by selecting an appropriate, right-sized topic.  Reflect on the assignment requirements and consider topics that will yield fruitful content to meet these requirements.  When an idea comes to mind, review it using the Selecting a Topic job aid posted under the Assignments tab in Blackboard.  Then, please consider sharing it in our Discussion Board forum. Doing so serves two purposes:
  1. Your classmates will be able to see what you're up to - and reviewing your topic might just bring new ideas to those who haven't defined their topic yet.
  2. I'll be able to provide feedback about the topic and its applicability to EDTEC 540.
I suggest that you define your topic, at least initially, in the next two weeks.  



ADDIE - Extend

 
Overview of this section:

* People in action
* Main points
* Next step
* For more information

People in action


As Barbara looks over the steps involved in the ADDIE process, she's surprised at how similar it looks to an activity she completed last year. "It's basically the same as what I did when I submitted the Eisenhower Grant proposal. We documented that our students needed better technologies for our space science curriculum, designed goals and objectives that were to be accomplished, told how we would design curriculum materials to support these needs, and discussed how the material would be implemented throughout the middle school curriculum. The only thing we included that isn't in ADDIE was a budget. What I see that we missed was the evaluation component. Maybe we can write that in, at both the formative and summative levels, and reapply next year."


Roberto's comments on the ADDIE system were also enlightening. "The very first product I created in this position was a short video on orienteering, you know, using a compass and a topo map to guide you over the terrain. It was to be shown at our outlet stores. It was a low-budget production, and I never thought to do a training needs assessment. Not only do we not carry the compasses we showcased, but people who use these skills said we were teaching some of the wrong facts. And it was over the heads of the casual observer. I guess that's why it wasn't used at many of the stores for more than a week or two. Even though it was low budget, it still turned out to be an expensive mistake since it's not being used. That episode helped reinforce the need to do a good analysis."

Main points 


Next step
In upcoming modules we'll explore each phase of ADDIE in more depth, enough to help you construct a basic, working knowledge of each stage in the process. Because performance analysis and training needs assessment come first (or at least they should), we'll begin with the Analysis phase of the ADDIE model.

 
For more information
Mager, R. (1984a). Preparing instructional objectives. Belmont, CA: David S. Lake Publishers.
Mager, R. (1984b). Goal analysis. Belmont, CA: Lake Publishing Company.


Page authors: Bob Hoffman, Donn Ritchie, and James Marshall. Last updated: Spring 2006.

All rights reserved