Wednesday, March 7, 2018

Assessment timing and purpose

Last week we discussed assessments. Let's review -

Three of the most common types of assessments are diagnostic, formative, and summative:
  • Diagnostic assessments are typically used before training to determine a gap or knowledge level. 
  • Formative assessments are typically used during training to monitor progress and check understanding of the material covered so far.
  • Summative assessments are typically used at the end of training to determine the level of knowledge or understanding.
Notice that one of the biggest differences is how and when each assessment is used. Now let's consider a real-world example. Before you can take your first college-level class, you will probably need to take a placement exam of some sort. This is a Diagnostic assessment that can determine your knowledge level and whether you will be able to perform at the college level. Once you begin taking the class, there will be assessments throughout the course. They may be in the form of quizzes, assignments, or other projects. These are Formative assessments, which measure your learning as you progress through the course. Once you reach the end of the course, there will likely be a final exam or project, which is a Summative assessment.

Now let's consider the timing and potential use of each type of assessment.

Diagnostic assessments are typically done before training begins. The results can serve several purposes, such as establishing a baseline against which the student's progress can be measured. They can be used to ensure the student is capable of performing at an adequate level to keep up with the course materials. They can also serve as a needs analysis, so that the training can be customized to focus on gaps or deficiencies.

Formative assessments are typically done during training and can be thought of as a progress report. When training is chunked into smaller pieces, a formative assessment can evaluate how well the student learned the information in each chunk. This serves many purposes as well. Needs analysis is a common use for this type of assessment, but so is training feedback, which allows the trainer to see how effective the training is and make adjustments to the material or delivery. Formative results can also point to other problems that may require intervention, such as a learning disability or other barriers to learning the material.

Summative assessments are done when the training is complete and are typically used to see how well the student has learned the material overall. This is usually a comprehensive assessment that covers the entire scope of the class, not just the individual chunks covered by the formative assessments. It is often at a higher level of Bloom's taxonomy than the formative assessments, as the learner will often have to demonstrate a skill or an application of the information.

Wednesday, February 28, 2018

Selecting Appropriate Assessments (training alignment part 2)

Last week we discussed the need for learning activities to align with learning objectives, and some approaches to accomplish that. This week, we will take a look at assessments, which should also be congruent with the learning objectives. First, let's clarify the purpose and types of assessments. Three of the most common types of assessments are diagnostic, formative, and summative:
  • Diagnostic assessments are typically used before training to determine a gap or knowledge level. 
  • Formative assessments are typically used during training to monitor progress and check understanding of the material covered so far.
  • Summative assessments are typically used at the end of training to determine the level of knowledge or understanding.
Notice that one of the biggest differences is how and when each assessment is used. Now let's consider a real-world example. Before you can take your first college-level class, you will probably need to take a placement exam of some sort. This is a Diagnostic assessment that can determine your knowledge level and whether you will be able to perform at the college level. Once you begin taking the class, there will be assessments throughout the course. They may be in the form of quizzes, assignments, or other projects. These are Formative assessments, which measure your learning as you progress through the course. Once you reach the end of the course, there will likely be a final exam or project, which is a Summative assessment.

There are several other types of assessments, but these three are the most commonly used. There are also many different methods of assessment, including:
  • Tests/quizzes
  • Observation
  • Essays
  • Interviews
  • Performance tasks
  • Exhibitions and demonstrations

So now, how do we determine what kind of assessment is appropriate, and make sure it is aligned with the learning objective?

The method I normally use involves Bloom's Taxonomy. Bloom created a hierarchy of learning that classifies and sorts learning outcomes. This is useful in determining learning objectives, and I find it very useful in selecting assessments as well.

http://grade4gate.weebly.com/uploads/2/3/3/9/23390928/8762564_orig.jpg

Let's consider an example from the bottom of the hierarchy. A training module may simply focus on the student learning vocabulary or terminology. The primary objective of this learning would be remembering, or memorization. Therefore, an assessment that would be appropriate might be a quiz testing the student's recall of the material.

However, if a learning outcome is problem-solving or troubleshooting ability, then the relevant level of the taxonomy might be analyzing. A memory test like the one in the previous example wouldn't be an appropriate assessment of that skill, because it wouldn't measure the learning outcome. A better assessment would be giving the learner a problem they have to analyze or deconstruct. There are many versions of this taxonomy, and while the levels are the same, some charts use different verbs that can be very useful in identifying the skills for the objective and suitable methods of assessment. For example, here are some descriptive verbs from a different chart:

Remembering
  • recognizing (identifying)
  • recalling (retrieving)

Understanding
  • interpreting (clarifying, paraphrasing, representing, translating)
  • exemplifying (illustrating, instantiating)
  • classifying (categorizing, subsuming)
  • summarizing (abstracting, generalizing)
  • inferring (concluding, extrapolating, interpolating, predicting)
  • comparing (contrasting, mapping, matching)
  • explaining (constructing models)
   
Applying
  • executing (carrying out)
  • implementing (using)

Analyzing
  • differentiating (discriminating, distinguishing, focusing, selecting)
  • organizing (finding coherence, integrating, outlining, parsing, structuring)
  • attributing (deconstructing)
   
Evaluating
  • checking (coordinating, detecting, monitoring, testing)
  • critiquing (judging)
   
Creating
  • generating (hypothesizing)
  • planning (designing)
  • producing (constructing)
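To make the alignment idea concrete, the verb lists above can be treated as a small lookup: objective verb, to taxonomy level, to candidate assessments. Below is a hypothetical sketch in Python; the verb-to-level pairs come from the chart above, but the assessment suggestions are invented examples, not an authoritative pairing.

```python
# Hypothetical sketch: pairing Bloom's taxonomy levels with assessment
# methods that tend to exercise them. The suggestions are illustrative,
# not a definitive mapping.
ASSESSMENT_IDEAS = {
    "remembering": ["recall quiz", "flashcard test"],
    "understanding": ["summary essay", "concept-matching exercise"],
    "applying": ["performance task", "guided lab exercise"],
    "analyzing": ["troubleshooting scenario", "case deconstruction"],
    "evaluating": ["peer critique", "quality-check exercise"],
    "creating": ["capstone project", "design exhibition"],
}

# A few of the descriptive verbs from the chart, keyed to their level.
VERB_TO_LEVEL = {
    "recalling": "remembering", "recognizing": "remembering",
    "summarizing": "understanding", "comparing": "understanding",
    "executing": "applying", "implementing": "applying",
    "differentiating": "analyzing", "attributing": "analyzing",
    "checking": "evaluating", "critiquing": "evaluating",
    "planning": "creating", "producing": "creating",
}

def suggest_assessments(objective_verb: str) -> list[str]:
    """Map a descriptive verb to its taxonomy level, then to assessment ideas."""
    level = VERB_TO_LEVEL.get(objective_verb.lower())
    return ASSESSMENT_IDEAS.get(level, []) if level else []

print(suggest_assessments("critiquing"))  # ['peer critique', 'quality-check exercise']
```

The point of the lookup is the same as the charts: start from the verb in the objective, not from a favorite assessment format.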


By keeping the learning objectives at the forefront of the design process and trying to align all of the training components with them, we can ensure that we are measuring learning progress and success against the desired outcomes.

Wednesday, February 21, 2018

Aligning learning activities with learning objectives

The instructional design phase can be quite complicated. There are numerous options for the training type, delivery methods, and learning activities. While the analysis step in the ADDIE model may greatly narrow choices of training type and delivery methods, how does a T+D professional determine appropriate learning activities? After all, it just makes sense that learning activities should support measurable objectives. Here is one way to accomplish this.

First, we must break down the training. We should have clearly identified the objectives or outcomes in the analysis step. In the design step, we need to start identifying the individual tasks (or dependencies) required to reach each outcome. Some situations, such as making a PB&J sandwich, may only have one task: the act of making the sandwich. More complex situations, such as making a grilled chicken sandwich, may have additional tasks required to support the outcome, such as grilling the chicken before assembling the sandwich. There may also be situations with multiple objectives, and we must identify all the tasks required to achieve each of them.

Once those tasks are identified, we will create a process map of each one. If you are not familiar with process maps, they are step-by-step diagrams or flowcharts that show the activities needed to complete a process. While used more frequently in Lean Six Sigma process improvement projects, these diagrams are also perfect for breaking down a process into individual steps for instructional design. There are many variations of this technique, such as using swim lanes or different block shapes for different kinds of steps, but for our purposes, we will keep it simple. Below is a very basic example of a process map from LucidChart that outlines the steps to perform process mapping.

https://www.lucidchart.com/pages/examples/process-map/how-to-process-map-template




Note that each step in the process is typically listed in a rectangular box, and each decision point is a diamond. This kind of diagramming can be done in lots of different ways - software, pen and paper, or whiteboard, but my preferred method is using Post-It notes. Post-It notes make it easy to re-order steps, and turning them diagonally will give you the diamond decision point. You can also use different colored ones to distinguish between steps that are deliverables, manual actions, and automated actions. Below is an example of what that will look like when completed.


Once these steps are broken down, review the entire task and make sure it supports the outcome. If it does, move to the next task. Repeat this process until you have identified all tasks or dependencies required to achieve the objectives, and all steps required for each task.
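For those who prefer working in code over Post-It notes, a process map can also be captured as a simple data structure. Below is a minimal sketch, assuming a made-up "grill the chicken" task from the sandwich example; the Step type and the step names are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch of a process map as data. "kind" distinguishes the
# rectangular action steps from the diamond decision points.
@dataclass
class Step:
    name: str
    kind: str  # "action" or "decision"

# Invented example task from the grilled chicken sandwich scenario.
grill_chicken = [
    Step("Preheat grill", "action"),
    Step("Season chicken", "action"),
    Step("Grill chicken", "action"),
    Step("Internal temp reached?", "decision"),
    Step("Remove from grill", "action"),
]

def summarize(task: list[Step]) -> dict:
    """Count actions vs. decisions - a quick sanity check during review."""
    counts = {"action": 0, "decision": 0}
    for step in task:
        counts[step.kind] += 1
    return counts

print(summarize(grill_chicken))  # {'action': 4, 'decision': 1}
```

Listing the steps this way makes the later review pass (does every step support the outcome?) a simple walk over the list.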

Now we have a great breakdown of the tasks and their steps but still haven't identified what kind of learning activities we can use. To do that, we will perform a similar process, but this time we will work backward using a technique called "Action Mapping". Developed by Cathy Moore a decade ago, action mapping is designed to make training more efficient and better aligned with outcomes. It does this by identifying activities that let the student practice the required actions, and by ensuring the information needed to achieve the desired results is available.

The four-step action mapping process includes:
  1.     Identify the business goal.
  2.     Identify what people need to do to reach that goal.
  3.     Design activities that help people practice each behavior.
  4.     Identify the minimum information people need to complete each activity.
We have already completed the first two steps, so we will move immediately to the third. You can start with the business goal, but since we have already identified the dependencies (the tasks to achieve the goal), those are a good starting point. Place the task in the center and list the steps and decision points surrounding it. Once that is completed, revisit each step and decision point, answering two questions:
  • What activity can we use to allow people to practice this behavior or action?
  • What information do they need to perform this?
The answers to those two questions should be noted at each step, building outward. Continue around the steps, creating the mapping with this additional information. Don't be alarmed if every step doesn't have an activity or required information, or if multiple steps will have the same answers - that is normal.
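The notes gathered by those two questions can be sketched as data as well. This is a hypothetical illustration; the step names, activities, and information items are invented, and as noted above, not every step needs an activity.

```python
# Hypothetical sketch of action-mapping notes: each step records the
# practice activity and minimum information identified by the two
# questions above. All names here are invented examples.
action_map = {
    "Grill chicken": {
        "activity": "practice grilling on a timed station",
        "information": "target internal temperature for chicken",
    },
    "Season chicken": {
        "activity": None,  # not every step yields an activity - that's normal
        "information": "approved seasoning list",
    },
}

# Collect only the steps that produced a practice activity.
activities = [
    (step, notes["activity"])
    for step, notes in action_map.items()
    if notes["activity"] is not None
]
print(activities)
```

Building outward from the task this way keeps every activity and resource traceable back to a specific step, which is the whole point of action mapping.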

When determining activities for learners, you should also consider how it can be measured. Since the activity should be directly related to a measurable outcome, the activity can likely be measured as well. That means that meeting the measurable standard for the activity will greatly increase the probability of achieving successful outcomes. For example, if an objective is for a learner to be able to assemble 10 widgets per hour with zero defects and zero waste, then those measurable components should translate to activities where the learner may practice assembling widget parts in a training scenario until they can meet or exceed those metrics. This supports training at a higher level per Bloom's Taxonomy, where a learner doesn't just understand and/or remember training but is actually developing a skill where they can apply the learning.
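The widget objective above translates directly into a measurable check. The sketch below uses the thresholds from the example (10 widgets per hour, zero defects, zero waste); the function name and signature are invented for illustration.

```python
def meets_standard(widgets_per_hour: float, defects: int, waste: int) -> bool:
    """Hypothetical check of the example objective: at least 10 widgets
    per hour with zero defects and zero waste."""
    return widgets_per_hour >= 10 and defects == 0 and waste == 0

# A trainee's practice run is scored against the same metrics the
# summative evaluation will use.
print(meets_standard(12, 0, 0))  # True - meets all three components
print(meets_standard(12, 1, 0))  # False - throughput is fine, quality is not
```

Because the practice activity and the final evaluation share one standard, passing the activity is a meaningful predictor of a successful outcome.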

Once you have completed this process for each step, repeat the action mapping for the other tasks or dependencies. When you have completed the iterative process, you will now have a detailed list of tasks, process steps for each task, activities that will allow the learner to develop their skills, and information/resources that are required. With these components identified, you now have what you need to create a thoughtful, holistic training strategy and instructional design that directly supports the objectives.

Thursday, February 15, 2018

ADDIE - The Analysis Phase




The first step of the ADDIE process is analysis. The analysis step identifies the performance or knowledge gap and the desired goal. It also examines factors such as the learning environment and the learner's current knowledge and skill level. This part of the ADDIE process can be one of the most time-consuming, but it is also one of the most important: it influences all of the remaining stages of the ADDIE model, so if the analysis isn't done correctly, the risk of failure is much greater.

Some of the general questions that must be answered during the analysis include:
  • What is the problem or performance gap? 
  • Is training actually the appropriate intervention to address this problem?
  • Who is the audience and what are their characteristics?
  • Who are the stakeholders or sponsors?
  • What is the desired new behavior and performance, and how will it be measured?
  • What does success look like?
  • What types of resources are available?
  • What types of constraints exist?
  • What are the delivery options for training?
  • What adult learning theory considerations apply?
  • What is the timeline for project completion?
Ideally, this step will clearly define the problem or gap, measurable outcomes, constraints, resources, and any other factors that will influence the project. Once that is completed, a strategy can be developed that will set up each of the remaining stages for success.



Sunday, February 4, 2018

The A.D.D.I.E. Instructional Design Model


A.D.D.I.E. is a common model of instructional design that consists of five basic steps. This model is linear, so steps are followed in order until all steps have been completed. However, some practitioners may go back and repeat steps if needed, and the entire model may be repeated in multiple iterations if the desired outcomes are not met or requirements change.


The first step is Analysis. In this stage, you must analyze the situation, usually beginning with the performance or knowledge gap. This is a critical step - before you start designing training, you must first identify the issue and determine that training is actually the correct solution to address it. Although many people default to training as an answer for all performance gaps, the reality is that there are a lot of other things that affect performance that are not training-related, such as culture, environmental conditions, and attitudes, among many other factors. Those are beyond the scope of this article, so we will explore them in a future post. In evaluating the performance gap, an important element is to identify the desired performance level and how it will be measured. If the employee builds widgets, what does the desired performance look like? Although the default assumption is that they know how to build a widget after they complete the training, is there a productivity or time element? For example, is the training successful if they can build a widget, but it takes them an entire shift to do it, or do they need to build a certain amount per hour? What about a quality element? Can they meet the aforementioned throughput rate with any level of quality, or should they produce fewer than a certain number of quality defects? What about cost? Do they need to meet the throughput and quality goals while keeping waste below a certain amount? You can see where this is going...a performance standard must be fully identified before developing the training. This is important for two primary reasons. First, we have to know the outcomes of the training before we can design it. Second, we need to know how to measure the effectiveness of the training and audience performance when we get to the evaluation step.

Other things to identify in this phase include: 
  • Who is the target audience? What are their differences, and what do they have in common? What characteristics do they have that will influence the training design? (such as language/nationality, experience, education level, etc.)
  • Who are the stakeholders on this project?
  • What resources will this project require, and what are available?
  • What is the timeline for this training?
  • What are the delivery options and constraints?

The second step is Design. Using the information gathered in the analysis phase, you will begin designing the training using instructional strategies and principles. This includes items such as identifying the type and delivery method of the training; developing documentation, scripts, or storyboards for the training; creating a user experience strategy; and identifying or designing exercises and activities that will support the desired outcome from the training. It is also important to not forget elements such as assessments and feedback in the lesson plan. These items will be important when we get to the evaluation phase. Assessments allow us to see how well the audience is learning the material; feedback will allow us to get their input and opinions on the training.



The third step is Develop. Now that the design is complete, you must actually gather or build the content assets for the training. This may include developing presentations, audio, visual, or multimedia content; programming for computer-based training; documentation that is provided to students such as books or handouts; instructor manuals; and any other asset required to deliver the training. Assets may also include any props, trainers, or other equipment that are being used in the training.



The fourth step is Implement. Implementation means actually executing the training plan and delivering the training. However, there may be multiple prerequisites before delivering the training to the audience, such as training the trainers or instructors who will be delivering the training materials. It may also require deploying the assets or training materials to electronic delivery systems such as a learning management system. If this delivery method is new to the audience, it may also include a strategy to train the audience on how to use the new system or providing proctors to assist them.




The fifth and final step is Evaluate. While some level of evaluation should be occurring at each step of this model, this is the stage where a summative evaluation of the entire strategy and training is completed. Feedback from the audience is one element of this evaluation, but the more important factors are the performance measurement metrics that were identified in the analysis stage. The measurable objectives established there will now be evaluated to determine the effectiveness of the training. It is not unusual to repeat the entire process again at this point, addressing any shortcomings or gaps that were not resolved with the initial design.


Resources:
Andrews, D.H., & Goodson, L.A. (1980). A comparative analysis of models of instructional design. Journal of Instructional Development, 3 :4, 2-16.

Gentry, C.G. (1994). Introduction to instructional development. Belmont: Wadsworth.

Grafinger, D.J. (1988). Basics of instructional systems development. INFO-LINE Issue 8803. Alexandria: American Society for Training and Development.

Gustafson, K.L. (1994). Instructional design models. In T.Husen & T.N. Postlethwaite (Eds.), The international encyclopedia of education (2nd ed.). Oxford: Pergamon.

Gustafson, K.L., & Branch, R.M. (1997). Survey of instructional development models. Syracuse: ERIC Clearinghouse on Information & Technology. 


