Thursday, July 7, 2011

Balance between Online and Offline Administering and Scoring of Assessments

Both constructed response tests and fixed response tests are valuable tools for eLearning.  A constructed response test asks students to supply their own answer; a fill-in-the-blank item is one example.  Fixed response tests include formats such as multiple-choice, true-false, and matching.  Each type has qualities that make it useful in eLearning for specific purposes.  However, according to Oosterhof, Conrad, and Ely (2008), technical and practical issues require striking a balance between administering these tests online with computer scoring and administering them in writing with the teacher grading them.
Fixed response tests sample the content more adequately than constructed response tests such as essay tests, because an essay takes far longer to answer than a multiple-choice or true-false item, so more fixed response questions can fit into the same test.  Because they sample more of the content, fixed response tests can be used as summative assessments.  They can also work well as formative assessments, since instructors can often program the online assessment to give feedback whenever a student chooses a wrong answer.  Fixed response tests measure procedural knowledge, such as concepts and rules, more readily than constructed response tests.  They also structure the problem to be addressed more effectively, because constructed response items can often have several correct responses depending on the learner's interpretation of what the item is asking.  A fixed response item avoids this by offering only a small set of responses to choose from, typically four, with only one correct answer.
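To make that concrete, here is a minimal Python sketch, not tied to any particular LMS; the question, the field names, and the feedback text are made up for illustration.  It shows how a fixed response item with a single keyed answer can be scored automatically while still returning the formative feedback an instructor has attached to each option.

# Hypothetical fixed response item with per-option feedback (illustrative only).
QUESTION = {
    "stem": "Which assessment type samples the content domain most broadly?",
    "options": {
        "A": "Essay test",
        "B": "Multiple-choice test",
        "C": "Oral interview",
        "D": "Portfolio",
    },
    "correct": "B",
    "feedback": {
        "A": "Essay items take longer to answer, so fewer can be included.",
        "B": "Correct: many short items can cover more of the content.",
        "C": "Interviews are time-intensive and sample narrowly.",
        "D": "Portfolios measure depth on selected tasks, not breadth.",
    },
}

def score_fixed_response(question, choice):
    """Return (points, feedback) for a single selected option."""
    points = 1 if choice == question["correct"] else 0
    return points, question["feedback"][choice]

print(score_fixed_response(QUESTION, "A"))  # (0, "Essay items take longer to answer, ...")
print(score_fixed_response(QUESTION, "B"))  # (1, "Correct: many short items ...")

Because there is exactly one keyed answer, the scoring rule is a single comparison, which is why this kind of item is so easy for a computer to grade and to respond to immediately.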
Constructed response tests are not subject to guessing the way fixed response tests are, and building a fixed response test takes more time than building a constructed response test.  Constructed response items, such as completion items, have three characteristics that make them best used for formative tests:  they work best with the key words of a unit or lesson, they tend to measure lower levels of thinking such as recall of information, and they tend to have a higher scoring error.  The scoring error is higher (when scored online rather than by the instructor) because a student's answer may be counted as incorrect even when it is right, simply because the computer's answer key does not contain every acceptable variant.  For example, if a student answers with the plural form of a word but the answer key lists only the singular, the response is marked wrong even though it is correct.  Fixed response items are more easily scored by a computer than constructed response items because they contain only one correct answer.  Essay tests, however, do a better job of directly measuring the behaviors required in the instructional objective, so they may work well as summative assessments.  Yet because they do not sample a broad area of the content domain being studied, and because they allow learners to explain their logic, they may actually serve a formative purpose better.  Given the time they take both to answer and to score, essay items are usually used more sparingly.
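As a rough illustration of that scoring error, here is a hypothetical sketch of how a computer might score a completion item against an answer key.  The normalization step and the answer key are assumptions of mine, not anything from a real LMS, but they show how a correct variant the instructor never listed gets marked wrong.

# Hypothetical computer scoring of a fill-in-the-blank item (illustrative only).
def normalize(text):
    """Reduce trivial differences: case, extra spaces, trailing periods."""
    return " ".join(text.strip().lower().rstrip(".").split())

def score_completion(response, accepted_answers):
    """Return 1 if the response matches any entry in the answer key, else 0."""
    return 1 if normalize(response) in {normalize(a) for a in accepted_answers} else 0

key = ["criterion"]                          # instructor listed only the singular form
print(score_completion("Criterion", key))    # 1 -> counted correct
print(score_completion("criteria", key))     # 0 -> counted wrong, though a human grader might accept it

key = ["criterion", "criteria"]              # expanding the key reduces the scoring error
print(score_completion("criteria", key))     # 1

The computer can only be as generous as its answer key, which is exactly why completion items scored online carry a higher scoring error than fixed response items.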
For several reasons, sometimes it is better to have these tests administered and scored online by the computer, and sometimes it is better to have them drawn up and graded by the instructor.  Creating an online test can be time-consuming and difficult depending on the capabilities of the software or learning management system (LMS) being used, so if it saves time, it may be best to hand administer and score the test.  As mentioned above, some items do not yet lend themselves well to computer grading (such as essay and completion items).  With online administration, however, feedback can be given immediately, which is an important factor in effective feedback.  Instructors can also diagnose problems, both with the validity of the test and with students' achievement, while a test is being administered online, and all of this supports effective instruction and student growth.  Another important reason to administer and score tests online has to do with the nature of online courses.  Since students in open entry/open exit online courses may have different start dates, it makes sense to release tests to students automatically based on those start dates.  Students testing at different times would otherwise force an instructor to grade many different assessments on a rolling basis, so online testing with computer-generated scores eases the instructor's burden.  Finally, we cannot forget security issues, which apply to both paper-and-pencil and online assessments.  The security required will vary with the purpose and importance of the test; most formative assessments require less security than summative assessments.  Handing out paper tests may allow students to copy them and pass them along to others, so it may be beneficial to administer the test online, where the software or LMS can prevent copying and pasting, or to have students take the assessment at a secure, supervised facility protected by passwords.
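As one illustration of the open entry/open exit point, the sketch below shows how a test window could be computed from each student's individual start date, so the computer rather than the instructor handles the staggered timing.  The two-week delay, the one-week window, and the dates are purely hypothetical.

# Hypothetical per-student test release based on an individual start date (illustrative only).
from datetime import date, timedelta

def test_window(start_date, opens_after_days=14, open_for_days=7):
    """Return the dates between which the test is available to one student."""
    opens = start_date + timedelta(days=opens_after_days)
    closes = opens + timedelta(days=open_for_days)
    return opens, closes

def is_available(start_date, today=None):
    """Check whether the test is currently open for a student with this start date."""
    today = today or date.today()
    opens, closes = test_window(start_date)
    return opens <= today <= closes

# Two students with different start dates get different testing windows.
print(test_window(date(2011, 6, 1)))   # (2011-06-15, 2011-06-22)
print(test_window(date(2011, 6, 20)))  # (2011-07-04, 2011-07-11)

With computer scoring attached to a schedule like this, each student is tested and graded on his or her own timeline without the instructor tracking every date by hand.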
Again, deciding how to administer and score assessments in an online setting depends on technical and practical issues that will vary with the school and the technology being employed.  Generally speaking, we need to use technology to support and improve the teaching and learning experience, not to add extra burdens or impediments.
References
Oosterhof, A., Conrad, R., & Ely, D. (2008). Assessing learners online. Upper Saddle River, NJ: Pearson Education, Inc.
