Tuesday, July 19, 2011

Backward Design: How It Ensures Validity

Accurately measuring our students’ learning requires us to know what we expect students to be able to do and to be sure that the assessment measures exactly that.  This matters because we want our assessments to be valid and reliable, and our teaching will be more focused as well.  To accomplish this, we must use Backward Design when planning lessons, courses, or programs.  According to Lesson 1 of Rio Salado’s “Assessing Learners Online” course (2011), the steps of Backward Design are:
Step 1: Choose the content
Step 2: Write learning outcomes
Step 3: Write performance objectives
Step 4: Design assessments
Step 5: Plan activities and select materials
Backward design forces us to look at what we want students to learn before we start planning activities or assessments.  We determine this through a series of steps.  First, we decide what content to teach (e.g., a math lesson covering the order of operations or a science lesson on the scientific method).  We should consult content standards, grade-level expectations and curriculum, and even students’ interests.
Then we must write learning outcomes.  These are general statements of what students should be able to do at the end of a lesson, course, or program; like goals, they are broad statements of capability.  From those outcomes, we narrow our focus and write specific performance objectives, each built around an observable verb that describes what students will actually do to demonstrate their learning.  This is a key part of the process.  According to Oosterhof, Conrad, and Ely (2008), the verbs chosen must be observable, and they must be capable of demonstrating that the particular type of knowledge being taught has been learned.  The types of knowledge are declarative knowledge, procedural knowledge, and problem-solving knowledge, and the objective must be written with a verb that can show students have acquired that specific type.  For example, if we want math students to know that operations within parentheses are always computed first (declarative knowledge), we must write an objective that asks students to state that rule.  However, if we want students to apply the rule (procedural knowledge), then our objective must ask students to do something that shows they can apply it, such as solve a problem using the order of operations.  Otherwise, we may be asking students to demonstrate something other than what our objectives require of them, which, of course, means our assessment results are invalid.
Next, we can design assessments.  Using the solid performance objectives we have written, we can create assessments that accurately measure what they are supposed to measure; in other words, they will be valid.  Finally, we can plan activities and lessons and select materials.  Since, at this stage, we know what we are looking for in students who have learned the material, our lessons will be better designed as a result of this process.
This method forces teachers to determine what they must observe in students who have met the objectives before they begin to teach, instead of teaching first and then trying to come up with an assessment.  In other words, teachers know what to look for to determine who has learned the material.  It ensures that learning outcomes and performance objectives are based on standards and grade-level curriculum, and it supports the validity of the assessments.


References
Oosterhof, A., Conrad, R., Ely, D.  (2008).  Assessing Learners Online.  Upper Saddle River, NJ:  Pearson Education, Inc.
Rio Salado.  (2011).  ELN 122 - Evaluation Online Learning, Lesson 1: Measuring Knowledge Online.  Retrieved from Rio Salado website: https://www.riolearn.org/content/ELN/ELN122/ELN122_INTER_0000_v1/lessons/lesson01.shtml

Tuesday, July 12, 2011

Performance Assessments in Online Education

Performance assessments are used in traditional, face-to-face settings as well as online settings.  They are a valuable tool in either setting, but they have specific applications, advantages, and limitations in each.  Below, I define performance assessments and discuss their advantages and limitations in online education.
According to Oosterhof, Conrad, and Ely (2008), performance assessments require students to perform a task that involves several steps and specific skills, usually in order to create a product.  They are used in skills-based and academic classes as well as in on-the-job training and evaluation.  They are not multiple-choice, true/false, or fill-in-the-blank tests.  However, a performance assessment can be a written document if the course objectives require it.  For example, to determine whether a student has internalized and can apply grammar rules, a teacher might examine the student’s actual writing.  A performance assessment must be designed to measure an explicit objective or goal of the course; in other words, the task or product of the performance assessment must itself reflect a course objective or goal.
Performance assessments can be graded by looking at the process the student goes through to complete the assessment, or by examining a final product.  Often, the process is simulated, especially when it is too expensive or dangerous to have the student perform the task in reality; for example, simulations are often used in military training facilities to prepare soldiers for a mission or battle.  Sometimes, however, the performance assessment does involve having the student actually perform a task and/or create a product.  The instructor will observe and judge either the process of creating the product or the product itself, but not usually both.  For example, in a cake-baking class, a student’s final might be to bake a cake; the instructor would sample the cake and judge it (the final product) against the qualities of a good cake to determine whether the student has met the objective of the class.  This is, of course, a very simple example from a skills-based class rather than an academic one, but performance assessments can be used in every type of academic class as well.
The instructor/observer has two choices about how to structure the assessment: prompt the student through specific steps or tasks in the process, or simply observe without prompting.  The specific situation being observed will usually dictate this decision.  For example, if a company is hiring someone to work in a customer call center, it may want to observe the person helping a customer on the phone without prompting, to find out whether the person has the skills necessary for the job.  However, this makes it difficult to determine the full range of the person’s skills, because not every situation can be covered in a handful of calls.  According to Oosterhof, Conrad, and Ely (2008), prompting students tends to bring out maximum performance, whereas not prompting them reveals a student’s typical performance.  Because of this, when a supervisor or instructor wants to observe personality traits, work habits, and willingness to follow prescribed procedures, he or she should not prompt the worker/student.  However, prompting is needed to see how well a student can explain a concept orally, write a paper, or play a musical instrument.
Since performance assessments involve students actually doing something, they can effectively measure a student’s skill at activities like playing a musical instrument, performing in a sport, designing and conducting an experiment in a science lab, or creating works of art.  Written tests cannot accomplish this.  However, due to the separation in space and time in asynchronous, online settings, performance assessments can’t be used online to measure these types of skills.  Online education is therefore limited in the types of classes and skills that can be taught, and in the types of skills that can be assessed.
That same separation in space and time also makes observing the process, rather than the product, very limited or impossible in asynchronous online courses.  In other words, online instructors cannot create performance assessments that directly observe and measure the steps a student completes, because the schedules of online instructors and students rarely coincide; flexible scheduling is, after all, one of the reasons for taking classes online.  This does not rule out using performance assessments online to grade the product, which is how they are usually employed in online settings.  In this way, the instructor can indirectly observe the process by examining the product.  It does, however, present a problem for using performance assessments as formative assessments online, since formative assessment relies on observing the process one goes through to create a product.  Because performance assessments often help us see and understand a student’s reasoning, either directly through observing the process or indirectly through observing the product, they can provide insights about students that written tests cannot.  Having online students create an actual product (such as an education student creating an actual test) will tell the instructor more about what the student can do with the course knowledge than a written test about the rules and procedures for creating a test ever could.
In online settings, performance assessments can be used to teach complex skills, because students who know they will have to complete a performance assessment learn the material differently than they would if studying for a written test.  The teacher is also more likely to teach differently in order to ensure students’ success with the performance assessment.  In other words, students will not merely memorize information; they will try to understand it so they can use and apply it.  This should be the goal in both face-to-face and online settings, because our aim should always be to help students use higher levels of thinking.  Since many online classes are designed for working professionals, their reasons for taking the classes require them not only to know something (declarative knowledge) but also to be able to use and apply that knowledge (procedural knowledge).  They also tend to need to overcome difficulties and solve problems in their work environments.  Because of this, and because performance assessments can measure problem-solving skills in ways written tests cannot, they are a natural, well-suited, and valuable tool in online education.
Another reason performance assessments are well suited to online learning is that they require less security than written tests.  Often, the scoring plan or even a model response is given to students before they complete the performance assessment, whereas the answer key for a written test would never be handed out.  This is especially helpful in an online environment, where security issues are often more difficult to overcome than in face-to-face settings.
References
Oosterhof, A., Conrad, R., Ely, D.  (2008).  Assessing Learners Online.  Upper Saddle River, NJ:  Pearson Education, Inc.

Thursday, July 7, 2011

Balance between Online and Offline Administering and Scoring of Assessments

Both constructed response tests and fixed response tests are valuable tools for eLearning.  A constructed response test asks students to supply an answer; a fill-in-the-blank item is one example.  Fixed response tests include multiple-choice, true-false, and matching items.  These two types of tests have different qualities that make them useful in eLearning for specific purposes.  However, according to Oosterhof, Conrad, and Ely (2008), technical and practical issues require a balance between administering these tests online with computer scoring and administering them on paper with teacher grading.
Fixed response tests sample the content more adequately than constructed response tests such as essay tests, because more questions can be included: it takes far longer to complete an essay test than a multiple-choice or true-false test.  Because they sample more of the content, fixed response tests can be used as summative assessments.  However, they work just as well as formative assessments, since instructors can often program the online assessment to give feedback when a wrong answer is chosen.  Fixed response tests also measure procedural knowledge, such as concepts and rules, more readily than constructed response tests.  They structure the problem more effectively, because constructed response items can often have several correct responses depending on the learner’s interpretation of what the item is asking; a fixed response item eliminates this by providing a limited set of responses with only one correct answer.
Constructed response tests are not subject to guessing the way fixed response tests are, and building a fixed response test takes more time than building a constructed response test.  Completion items have three characteristics that make them best suited for formative tests: they work best with the key words of a unit or lesson, they tend to measure lower levels of thinking such as recall of information, and they tend to have a higher scoring error.  The scoring error can be higher (when scored by computer rather than by the instructor) because an answer a student supplies may be counted incorrect even when it is right, if the computer does not have a complete list of all possible answers.  For example, if a student answers with the plural form of the answer but the computer answer key contains only the singular version, the response will be counted wrong even though it is correct.  Fixed response items are more easily scored by a computer because they contain only one correct answer.  However, essay tests do a better job of directly measuring the behaviors required by the instructional objective, so they may work well as summative assessments; yet because they do not sample a broad area of the content domain and they allow learners to explain their logic, they may actually serve a formative purpose better.  Due to the time constraints of essay items, both in taking and in scoring the exam, they are usually used more sparingly.
For several reasons, it is sometimes better to have these tests administered and scored online by the computer and sometimes better to have them written and graded by the instructor.  Building an online test can be time-consuming and difficult, depending on the capabilities of the software or learning management system (LMS) being used, so if it saves time, it may be best to administer and score the test by hand.  As mentioned above, some item types (such as essay and completion items) do not yet lend themselves well to computer grading.  With online administration, however, feedback can be given immediately, which is an important factor in effective feedback.  Instructors can also diagnose problems (with test validity, and with formative assessment of achievement) while a test is being administered online, all of which supports effective instruction and student growth.  Another reason to administer and score tests online has to do with the nature of online courses: since students in open-entry/open-exit courses may have different start dates, it makes sense to grant each student access to tests automatically based on his or her start date.  Students taking tests at different times would otherwise force an instructor to grade several different assessments at once, so online testing with computer-generated scores eases the instructor’s burden.  Finally, we cannot forget security, which applies to both paper-and-pencil and online assessments.  The security required varies with the purpose and importance of a test; most formative assessments require less security than summative assessments.  Distributing paper tests may allow students to copy and share them, so it may be better to administer a test online, where the software or LMS can prevent students from copying and pasting it.
Another option is to have the student take the assessment at a secure testing facility, with passwords and supervision.
Again, deciding on how to administer and score assessments in an online setting depends on technical and practical issues which will vary by the school and the technology being employed.  Generally speaking, we need to use technology to support and improve the teaching/learning experience and not to add extra burdens or impediments.
References
Oosterhof, A., Conrad, R., Ely, D.  (2008).  Assessing Learners Online.  Upper Saddle River, NJ:  Pearson Education, Inc.

Friday, July 1, 2011

Differences and Uses of Constructed Response vs Selected Response Tests in eLearning

A constructed response test asks students to supply an answer; a fill-in-the-blank item is one example.  Fixed response tests include multiple-choice, true-false, and matching items.  Each type has different qualities and appropriate uses in eLearning, and some of the differences come down to the advantages and disadvantages of each.
Among the advantages of fixed response tests: they sample the content more adequately than constructed response tests such as essay tests, because more questions can be included (it takes longer to complete an essay test than a multiple-choice or true-false test).  They measure procedural knowledge, such as concepts and rules, more readily than constructed response tests; they can ask learners to classify responses as examples or non-examples, or to apply a rule by finding the response that correctly applies it.  They structure the problem more effectively, because constructed response items can often have several correct responses depending on the learner’s interpretation of what the item is asking; a fixed response item eliminates this by providing a limited set of responses with only one correct answer.  Fixed response items are also more easily scored by a computer, because they contain only one correct answer.  Constructed response items may have correct answers the instructor/designer did not anticipate and therefore did not include in the computer’s answer key, which means a student may answer correctly yet be scored as incorrect.  Finally, computer-generated feedback can be provided in response to individual items.
Among the disadvantages of fixed response items: they are subject to guessing, whereas constructed response tests are not, and more time is required to build a fixed response test than a constructed response test.
Since fixed response tests sample more of the content than constructed response tests, they can be used as summative tests.  They can also be scored automatically with ease, so they work very well in an online environment.  However, they work just as well as formative assessments, since instructors can often program the online assessment to give feedback when a wrong answer is chosen.  Constructed response items such as fill-in-the-blank tests are best designed around the key words of a unit or lesson and tend to measure lower levels of thinking, such as recall of information, so they are best employed as formative assessments; their association with a higher scoring error also makes this the best place for them.  Essay tests, however, do a better job of directly measuring the behaviors required by the instructional objective, so they may work well as summative assessments; yet because they do not sample a broad area of the content domain and they allow learners to explain their logic, they may serve a formative purpose better.  Due to the time constraints of essay items, both in taking and in scoring the exam, they are usually used more sparingly.

References

Oosterhof, A., Conrad, R., Ely, D.  (2008).  Assessing Learners Online.  Upper Saddle River, NJ:  Pearson Education, Inc.

Friday, June 24, 2011

Advantages and Disadvantages of Constructed-Response Tests

According to Oosterhof, Conrad, and Ely (2008), constructed-response assessments include short and long essay tests and fill-in-the-blank tests (otherwise called completion items).  The authors list and discuss the advantages and disadvantages of these assessments, which range from the test itself to the making and grading of it.  I will discuss completion items first and then cover essay tests.
Completion tests have three advantages: they are easy to make, students must supply rather than choose an answer, and the number of questions can be large.  They are relatively easy to make because they usually measure recall of information rather than procedural knowledge, and because they do not require scoring plans of the kind essay tests require.  Since students must supply an answer instead of picking one (as they would on a multiple-choice test), scores are not inflated by guessing, so completion tests are generally more reliable than selected-response tests.  And because many items can fit on one test, a better, more complete sampling of the content can be achieved.
Completion tests have two limitations: they generally measure only the recall of information, and they have a higher probability of scoring error than other formats.  Completion tests measure knowledge of facts and thus do not generally require higher-level thinking skills, which are a goal of a good educational program.  And since these questions can often have several correct responses, the chance that a response will be mechanically scored as incorrect is very real.  For example, if a student answers with the plural form instead of the singular, the response could be counted wrong (even though it is correct) if the instructor/designer did not include the plural form as acceptable when the test was designed and put online.
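The plural/singular problem above is, at bottom, an answer-key design problem: the computer can only accept what the key anticipates.  A minimal sketch of automated completion-item scoring (the function names and key format are my own invention, not taken from any particular testing system):

```python
def normalize(response: str) -> str:
    """Lowercase, trim, and collapse internal whitespace."""
    return " ".join(response.lower().split())

def score_completion_item(response: str, accepted_answers: list) -> bool:
    """Return True if the response matches any accepted variant.

    The key must list every variant the designer anticipates
    (singular and plural forms, common synonyms); anything missing
    from the key is marked wrong -- exactly the scoring-error risk
    described above.
    """
    return normalize(response) in {normalize(a) for a in accepted_answers}

# A key that anticipates both forms avoids the plural/singular error:
key = ["triangle", "triangles"]
```

With this key, "  Triangles " is accepted despite case and spacing, but a key listing only "triangle" would still mark the plural wrong, so the burden remains on the designer to enumerate acceptable answers.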
When designing these tests, take several steps to ensure reliability and validity.  Always ensure that the items measure the behavior required by the instructional objective; keep the reading level of the test below the reading level of the students, so that reading proficiency does not affect results (unless the test is an actual reading test); and write items in very direct language.  Also be sure that the blanks represent key words from the learning; otherwise, the test measures reading comprehension more than achievement of the learning objectives.  As mentioned already, ensure that only a single response, or a very homogeneous set of responses, counts as correct.  Do not use sentences lifted from the class readings: doing so encourages students to memorize rather than comprehend, and sentences taken out of a paragraph lose their contextual clues and can be misinterpreted.  Place blanks at the end of the item rather than at the beginning or middle, so students can read the item and supply an answer quickly.  Use only one blank per item, both to help students understand what the question is asking and to limit the set of correct responses.  Finally, if a question requires a numerical answer, state the units expected in the response; otherwise a student may answer in a different unit and be marked wrong despite being correct.  For example, if the answer is 36 inches, specify that the answer must be given in inches; otherwise the student might answer 3, as in 3 feet.
Essay tests have three advantages and three disadvantages.  The advantages: they measure more directly the behaviors specified by the instructional objectives, they examine the learner’s ability to communicate ideas in writing, and they give instructors insight into the thinking that leads to students’ answers, which can reveal a student’s logic.  Because an instructional objective can often be rewritten as an essay question, essay items can measure the target behavior more directly; this is one more reason to take care when writing performance objectives.  Even though essay tests can help instructors measure a student’s ability to express thoughts in writing, the goal of the test should be to measure how well the student met the instructional objectives.  Therefore, two scores should be used if the instructor wants to evaluate writing ability: one for proficiency in meeting the objective(s) and one for the writing itself.  Another important note: do not use the essay test as a means of teaching students to write.  It is a testing situation, not a learning situation, and so is not an effective teaching method.  Finally, the last advantage involves verifying that the student is not using faulty logic to reach an answer.  On a multiple-choice test, even when a student chooses the correct answer, it is impossible to see why; on an essay question, the student must both answer correctly and explain his or her reasoning.
The disadvantages are a smaller sampling of the content than other formats, scoring that can be very subjective, and more time required to score than other formats.  Essay tests cannot sample as much of the content because of the time it takes a student to respond to each question, so teachers must take time to write well-developed, strong essay items.  They should also create a scoring plan for each essay question that defines a correct answer and assigns points to each critical element of the response, which leads to the second disadvantage: scoring can be very subjective and can even include bias.  The third disadvantage is simply that scoring an essay test takes longer than scoring other formats.
When designing essay tests, teachers should follow certain guidelines.  The response required by an essay item may be brief or extended.  Extended items may not be appropriate for online settings if the question asks learners to demonstrate more than one skill; instead, break the long question into shorter ones.  This makes scoring more consistent and allows a broader range of skills to be assessed.  Of course, this is only possible when the questions ask students to demonstrate declarative or procedural knowledge; if the question asks online students to solve a problem, a format other than the essay should be used.  In short, avoid asking online students to problem-solve in an essay item.
To develop high-quality essay items, teachers should follow six criteria.  First, always ensure that the item measures the specific skill or instructional objective; one way to do this is to NOT allow students to choose which items they will answer (or, if you do, be sure all the choices assess the same capability).  Second, keep the reading level below that of the learner, so the item measures the skill and not the student’s reading ability.  Third, the question should take no more than about ten minutes to answer; otherwise it is an extended-response item, which should be avoided in online settings.  Fourth, devise a good scoring plan to ensure the validity of the test.  Fifth, the scoring plan should describe a correct and complete response, so scorers can identify correct responses more accurately.  Last, write the item so that a knowledgeable learner can determine the characteristics of a correct answer.
When grading online essay tests, certain practices help ensure more consistent scoring.  Teachers should read all of the students’ answers to one item before reading the responses to the next item; this speeds up scoring and keeps a clear idea of the expectations for that item, rather than going through one paper at a time and trying to recall the expectations for every item.  Teachers should also read responses in a random order and reorder the papers after scoring each item, because research has shown that the quality of the previous paper can affect the scoring of the next.  Concealing the identity of the student while grading guards against bias, and using multiple readers helps as well.  Finally, items that cause students to answer the same question in widely varying ways should be revised before being used again, as they make it difficult to substantiate whether students have met the instructional objective.
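The precautions above (score item by item, reshuffle the papers between items, conceal identities) amount to a scoring schedule the grader can follow.  A sketch under my own assumptions, not a feature of any particular gradebook:

```python
import random

def essay_scoring_order(anonymous_ids, num_items, seed=None):
    """Return a list of (item_index, anonymous_id) pairs to grade in order.

    All responses to one item are read before moving to the next item,
    and the paper order is reshuffled between items so the quality of
    one paper does not color the scoring of the next.  Using anonymous
    IDs instead of names conceals student identity during grading.
    """
    rng = random.Random(seed)
    ids = list(anonymous_ids)
    order = []
    for item in range(num_items):
        rng.shuffle(ids)  # a fresh random paper order for every item
        order.extend((item, paper) for paper in ids)
    return order
```

With three papers and two essay items, this yields six (item, paper) pairs grouped by item, with an independent random paper order within each item.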



References
Oosterhof, A., Conrad, R., Ely, D.  (2008).  Assessing Learners Online.  Upper Saddle River, NJ:  Pearson Education, Inc.

Thursday, June 16, 2011

Best Type of Referenced Assessment for Online Learning

The four types of referenced assessments are ability-, growth-, criterion-, and norm-referenced tests.  Each type of reference provides specific information that is useful to educators, so the reference used depends on the information the educator needs.  That, in turn, depends on the purpose of the educator, the program, or the course, which itself depends on the level of the course (primary or secondary).
Ability referenced assessments compare a learner’s performance with their potential performance.  Growth-referenced assessments compare a learner’s performance with past performance to determine growth.  Criterion-referenced assessments compare a learner’s performance with specific criteria such as goals, outcomes, or objectives.  Norm-referenced assessments compare a learner’s performance with a similar group of students. 
In my opinion, criterion-referenced assessments should be used to grade students in online courses.  The goal of most online learning is for students to walk away with information and knowledge they can apply in their professions.  According to the Centre for Learning and Professional Development at the University of Adelaide (2011), “…in higher education the aim is to also use the subject matter to teach students to think, to develop higher-level cognitive skills including metacognition (think about their thinking). Higher-level cognitive skills include solving problems, analyzing arguments, synthesizing information from different sources and applying what they are learning to new and unfamiliar contexts. To be effective, assessment needs to be an integral part of the learning environment and embedded into the design of the course which involves aligning learning objectives with assessment.”  To do this, educators often use formative assessments: testing done to diagnose what students have not yet grasped and still need in order to reach the objectives.  Again, the Centre (2011) says, “The purpose of student assessment is to provide support and feedback to enhance ongoing learning and identify what students have already achieved.”  Criterion-referenced assessment helps us know whether students have met the objectives, and it drives instruction based on students’ needs as determined throughout the course.  According to Oosterhof, Conrad, and Ely (2008), formative assessments work well with criterion-referenced interpretations because formative assessments cover specific content and show what a learner can or cannot do.  Therefore, criterion-referenced assessments help us substantiate that students are well prepared for their professions, and they serve online purposes best.
Growth-referenced assessments can provide information about how much a student has learned compared to what he or she knew at the start, but they will not tell us about the student’s overall grasp of the content domain being taught in the course (unless, of course, the pre- and post-tests are a good sampling of that content domain).  Norm-referenced assessments would rank the students in the class (and only the students in the class), but that ranking will not reveal whether the students learned what they need to know, nor will it allow us to compare them to a larger group of students to get a better indication of their performance.  Ability-referenced assessments might tell us whether a student is performing up to his or her potential, but not whether the student has met the course objectives.
When we talk about K-12 online learners, we still need to establish that they have accomplished what we set out for them to accomplish, so criterion-referenced assessments are the best choice here as well.  At both levels, primary and secondary, there are also uses for the other types of referenced assessments: entrance, placement, and diagnostic purposes, to name a few.
References
Centre for Learning and Professional Development.  (2011).  Effective Learning.  Retrieved June 16, 2011 from http://www.adelaide.edu.au/clpd/online/assessonline/effectivelrng/
Oosterhof, A., Conrad, R., & Ely, D.  (2008).  Assessing Learners Online.  Upper Saddle River, NJ:  Pearson Education, Inc.

Saturday, June 11, 2011

The Difference Between Education and Training

Training and education are not the same thing.  When designing assessments, it is important for teachers to understand the difference between the two.  Training teaches a skill, which usually involves motor abilities and a complex set of actions.  For example, making pottery is a skill; it can be learned, but it takes time to hone and perfect.  Education, on the other hand, involves the processes of thinking and reasoning.  Students in an educational program learn about broad topics that allow them to problem solve and apply their knowledge in new situations.  In an educational program, students study only a small portion of the whole in order to build a foundation to draw from when real-world situations require them to use the knowledge they have gained.  In real life and in the classroom, the line between the two is sometimes blurred.  An excellent example can be found in a letter to the editor of National Forum: The Phi Kappa Phi Journal (Spring 2000, p. 46) from Robert H. Essenhigh of The Ohio State University, which is reproduced at this website:  http://www.uamont.edu/facultyweb/gulledge/Articles/Education%20versus%20Training%20.pdf.  In the letter, Essenhigh states:  “The difference? It's the difference between know how and know why. It's the difference between, say, being trained as a pilot to fly a plane and being educated as an aeronautical engineer and knowing why the plane flies, and then being able to improve its design so that it will fly better. Clearly both are necessary, so this is not putting down the Know-How person; if I am flying from here to there I want to be in the plane with a trained pilot (though if the pilot knows the Why as well, then all the better, particularly in an emergency).”

An excellent example of how the two can be confused occurs in the classroom with the teaching of reading.  Many have asked whether reading is a skill.  If it is a skill, students can be trained to read.  If it involves education, then a different approach is required.  Sounding out words (known as decoding in educational circles) is a skill that can be taught, even though it involves abstract concepts such as arbitrary symbols being attached to specific sounds.  However, being able to sound out and read words does not guarantee that someone will understand what they are reading.  Since reading involves both decoding and comprehension, we can say that it also involves education.  Students must be taught both the skill of reading and the more complex concepts behind the skill in order to read effectively.

Training and education each require a different approach to assessment.  If we are training, we need to assess all the critical skills that make up the complex performance the training is supposed to teach; otherwise, we cannot be sure the person being trained will perform the skill successfully.  In certain situations, such as the pilot example above, the consequences of failing to do so could be serious.  A task analysis must be done to identify all the parts of the skill so that each can be taught and assessed.  If the content represents education instead of training, a broader base of knowledge will be the topic of our lessons.  In short, training teaches us to perform a complex task that usually involves motor skills, whereas education helps us learn how to think, how to solve problems, and how to understand the why behind things.  Because education is so broad, only a portion of the whole can be learned, and only a small part of what is learned can be assessed.  The reading example works well here, too.  We can educate students to think about what they have read and to make connections and evaluations that they can apply in future reading outside the classroom.  However, we could never imagine all the situations they might encounter when reading outside the classroom, or the many connections and evaluations they may make in those future activities; neither can we assess those many and varied future situations.  Instead, we must narrow our focus and decide specifically what to teach and assess.  For example, we might narrow our focus to understanding and comparing poetry and then decide which important concepts about poetry need to be taught and assessed.