Using Observations to Measure Instructional Value
It's been a long, arduous process, but the new (course, module, workshop, tool, website) has finally rolled out. So--what now? How will you know whether the intervention on which you've worked so long and hard is "usable" or instructionally effective? In this course, we'll focus on the role observation plays in determining instructional impact and quality--strategies for conducting it; tools for capturing the data you're interested in (without putting your learners at risk); ways to reduce bias and improve objectivity and rigor; and techniques for analyzing and reporting results.
This is a practical course--designed to help you use a key evaluative technique (observation) to assess instructional impact, effectiveness, and quality. Observation is, of course, a common technique.
This course targets anyone interested in learning more about using observations to evaluate the interventions we design, manage, or facilitate--whether the intent is formative or summative. The instructor assumes basic familiarity with Microsoft Word and Excel.
At the end of this course, you'll know:
Breeze presentations (available January 29)
ECR readings, both book chapters and journal articles (available by January 29)--from such experts as Good and Brophy, Robinson and Robinson, Hale, Patton, and Rosenberg. [The password is observations.]
And these websites will prove useful as well!
And we'll have a course blog to which you'll contribute during class--and that, with some forethought, can become a resource available to the larger SDSU/EDTEC community.
Session 1: February 3, 2007 -- Observation 101
We'll cover basic terminology as we attend to key drivers that help us determine what to observe (and how formally); whose perspective to focus on; which constraints and obstacles to control (or at least recognize); how "labor" factors into tool or instrument design; how technology can both assist and detract from the process; and ways to produce reader-friendly analyses on which real people can act.
We'll also join the debate about the credibility of eyewitness testimony.
But first ... let's explore some optical illusions!
This session is hands-on, with a focus on techniques you might use in a traditionally delivered workshop or course ... as well as in the field. You'll complete a number of practices--independently and with a partner--that "personalize" the tips and strategies covered during the day.
You'll plan and conduct a "traditional" observation based on one of three general scenarios that I'll introduce in class. Prior to gathering data, you'll forward a data collection plan organized as follows:
Both your classmates and I will critique your work--using a checklist I'll create in Zoomerang. We'll use Quick Topic to share work and garner feedback--and a course blog to share ideas, resources, tips ... and more!
Due date: to be announced
Session 2: February 17 -- The virtual world ... and where observation fits in
Here we transition to the virtual world--covering common situations that you might face as a performance technologist. What happens when the nuances of interaction that we take for granted (body language, voice intonation, eye contact, etc.) simply aren't present or don't unfold naturally?
This session is hands-on, with a focus on techniques you might use when the setting is virtual--although not necessarily synchronous. You'll complete a number of practices--independently and with a partner--that "personalize" the tips and strategies covered during the day.
You'll once again:
Both your classmates and I will critique your work (via a Zoomerang checklist), and we'll use Quick Topic and our course blog to garner feedback and share ideas, resources, and tips.