Wednesday, May 6, 2009

Concept inventory

From Wikipedia, the free encyclopedia
A concept inventory is a multiple-choice test designed to evaluate whether a person has an accurate and working knowledge of a specific set of concepts [1]. Concept inventories are built in a multiple-choice format to ensure that they can be scored in an objective manner. Unlike a typical multiple-choice test, however, both the questions and the response choices are the subject of extensive research designed to determine what a range of people think a particular question is asking and what the most common answers are. In its final form, each concept question presents both a correct answer and distractors, that is, incorrect answers based on commonly held misconceptions.
Because the distractors are based on commonly held student views, identified through various research methods (e.g., responses to open-ended essay questions and "think-aloud" interviews with students), which distractors students choose is often informative for understanding student thinking. This research basis, in which actual student thinking and language, rather than experts' opinions or speculations about student assumptions, underlies instrument construction and design, is a major difference between standard tests and concept inventories. As a result, the most important role for concept inventories is to provide instructors with clues about the ideas, scientific misconceptions, and/or conceptual lacunae with which students are working, and which may be actively interfering with learning.
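To make the distractor-analysis idea concrete, here is a minimal sketch of how responses to such an instrument might be tallied; the item identifiers, answer key, and student responses are hypothetical, and real inventories involve far more careful psychometric analysis:

```python
from collections import Counter

def summarize_inventory(responses, answer_key):
    """Score multiple-choice responses and tally which distractors were chosen,
    since the pattern of wrong answers is what hints at specific misconceptions.

    responses  -- list of dicts, one per student, mapping item id -> chosen option
    answer_key -- dict mapping item id -> correct option
    """
    summary = {}
    for item, correct in answer_key.items():
        chosen = [r[item] for r in responses if item in r]
        counts = Counter(chosen)
        n = len(chosen)
        summary[item] = {
            "percent_correct": 100.0 * counts[correct] / n if n else 0.0,
            # Frequency of each incorrect option: the "informative" distractors.
            "distractor_counts": {opt: c for opt, c in counts.items() if opt != correct},
        }
    return summary

# Hypothetical example: three students, two items.
key = {"Q1": "B", "Q2": "D"}
students = [{"Q1": "B", "Q2": "A"}, {"Q1": "C", "Q2": "D"}, {"Q1": "C", "Q2": "D"}]
print(summarize_inventory(students, key))
```

In this toy example, the clustering of responses on option "C" for Q1 is the kind of signal an instructor would follow up on, since the distractor was written to embody a specific misconception.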
Because of their focus on concept mastery, which may or may not be achieved in a particular instructional situation, concept inventories are not equivalent to standardized tests, nor are they subject to the same statistical tests of validity. The validity of a concept inventory rests on the probability that i) when students select the correct answer they actually understand the underlying concept, and ii) when they select a distractor, they believe the concept embodied in that statement is correct. In general, evidence for concept inventory validity comes from direct interviews with students.
The pioneering efforts of David Hestenes, Ibrahim Halloun, and Malcolm Wells led to the first of the concept inventories to be widely disseminated, the Force Concept Inventory (FCI). The FCI was designed to assess student understanding of the Newtonian concepts of force. The dramatic result of using the FCI with students completing introductory college-level physics courses was the realization that while “nearly 80% of the students could state Newton’s Third Law at the beginning of the course … FCI data showed that less than 15% of them fully understood it at the end” (Hestenes, 1998, Am. J. Phys. 66:465).
[Figure: schematic of the Hake plot; see the Redish page.] These results have been replicated in a wide range of studies of students at a range of institutions (see Hake, 1998, Am. J. Phys. 66:64), and have led to recognition in the physics education research community of the importance of "active engagement" of students with the materials to be mastered.
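As background on the Hake plot mentioned above: Hake (1998) summarizes pre-/post-test results with the average normalized gain, the fraction of the possible improvement a class actually achieves, commonly written as

\[
\langle g \rangle = \frac{\%\langle \mathrm{post} \rangle - \%\langle \mathrm{pre} \rangle}{100 - \%\langle \mathrm{pre} \rangle}
\]

where the pre- and post-test scores are class averages expressed as percentages.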
In recent years, the FCI has been joined by other instruments in physics. These include the Force and Motion Conceptual Evaluation (FMCE), developed by Thornton & Sokoloff (1998), and the Brief Electricity and Magnetism Assessment (BEMA), developed by Chabay & Sherwood. A discussion of how these tests are developed can be found in R. Beichner, "Testing student interpretation of kinematics graphs," Am. J. Phys. 62, 750-762 (1994).
Information about physics concept tests is available at the NC State Physics Education Research Group website. Concept inventories have been developed in Statistics, Chemistry, Astronomy, Basic Biology, Natural Selection, and a number of engineering disciplines [2]. A comparative review of many of these concept inventories can be found in Allen (2006, Chapter XI, p. 442) [3].
There have also been instruments that transcend disciplinary boundaries. For example, Odom and Barrow (1995) developed a test specifically to evaluate understanding of diffusion and osmosis. In a number of cases, it appears that underlying conceptual issues within a particular discipline have their roots in ideas that transcend disciplinary boundaries. As an example, the idea of randomness impacts conceptual understanding in a wide range of subjects, from molecular motion to a range of evolutionary processes, as described by Garvin-Doxas & Klymkowsky (2008) [4].