Showing posts with label 2001.

Wednesday, January 14, 2009

Scherr, Shaffer & Vokos, Am J Phys PER Suppl (2001)

Student understanding of time in special relativity: Simultaneity and reference frames
R. E. Scherr, P. S. Shaffer & S. Vokos, American Journal of Physics, Physics Education Research Supplement, 69, S24-S35 (2001). (link to journal article)

Abstract: This article reports on an investigation of student understanding of the concept of time in special relativity. A series of research tasks are discussed that illustrate, step-by-step, how student reasoning of fundamental concepts of relativity was probed. The results indicate that after standard instruction students at all academic levels have serious difficulties with the relativity of simultaneity and with the role of observers in inertial reference frames. Evidence is presented that suggests many students construct a conceptual framework in which the ideas of absolute simultaneity and the relativity of simultaneity harmoniously co-exist.

Tuesday, January 13, 2009

Elby, Am J Phys PER Suppl (2001)

Helping students learn how to learn
A. Elby, American Journal of Physics, Physics Education Research Supplement, 69(7), S54-S64 (2001). (html version)

Abstract: Students' “epistemological” beliefs—their views about the nature of knowledge and learning—affect how they approach physics courses. For instance, a student who believes physics knowledge to consist primarily of disconnected facts and formulas will study differently from a student who views physics as an interconnected web of concepts. Unfortunately, previous studies show that physics courses, even ones that help students learn concepts particularly well, generally do not lead to significant changes in students' epistemological beliefs. This paper discusses instructional practices and curricular elements, suitable for both college and high school, that helped students develop substantially more sophisticated beliefs about knowledge and learning, as measured by the Maryland Physics Expectations Survey (MPEX) and by the Epistemological Beliefs Assessment for Physical Science.

Hammer & Schifter, Cognition and Instruction (2001)

Practices of inquiry in teaching and research
D. Hammer & D. Schifter, Cognition and Instruction, 19(4), p 441-478 (2001). (link to journal article)

Abstract: We have three central purposes in this paper. The first is to explore the nature of teacher inquiry, narrowing our focus to inquiry into student learning; the second is to explore what teachers’ inquiries and researchers’ inquiries offer one another; and the third is to consider the similarities and differences between teacher inquiry and research on learning. Although teacher inquiry has much in common with research on learning, and it is essential to recognize the overlap in practice and agenda, teacher inquiry differs from research in ways it would be counter-productive to ignore or to mask. We pursue these objectives by examining teacher inquiry in two contexts: (1) a conversation among a group of physics teachers about a discussion that took place in one of their classes, and (2) essays by two elementary school teachers about their first- and second-grade students' early reasoning about triangles.

Elby & Hammer, Science Education (2001)

On the substance of a sophisticated epistemology
A. Elby & D. Hammer, Science Education, 85(5), p 554-567 (2001).

Abstract: Among researchers who study students’ epistemologies, a consensus has emerged about what constitutes a sophisticated stance toward scientific knowledge. According to this community consensus, students should understand scientific knowledge as tentative and evolving, rather than certain and unchanging; subjectively tied to scientists' perspectives, rather than objectively inherent in nature; and individually or socially constructed rather than discovered. Surveys, interview protocols, and other methods used to probe students’ beliefs about scientific knowledge broadly reflect this outlook.

Our paper questions the community consensus about epistemological sophistication. We do not suggest that scientific knowledge is objective and fixed; if forced to choose whether knowledge is certain or tentative, with no opportunity to elaborate, we would choose “tentative.” Instead, our critique consists of two lines of argument. First, the literature fails to distinguish between the correctness and productivity of an epistemological belief. For instance, elementary school students who believe that science is about discovering objective truths to questions such as whether the earth is round or flat, or whether an asteroid led to the extinction of the dinosaurs, may be more likely to succeed in science than students who believe science is about telling stories that vary with one's perspective. Naive realism, although incorrect (according to a broad consensus of philosophers and social scientists), may nonetheless be productive for helping those students learn.

Second, according to the consensus view as reflected in commonly-used surveys, epistemological sophistication consists of believing certain blanket generalizations about the nature of knowledge and learning, generalizations that do not attend to context. These generalizations are neither correct nor productive. For example, it would be unsophisticated for students to view as tentative the idea that the Earth is round rather than flat. By contrast, they should take a more tentative stance towards theories of mass extinction. Nonetheless, many surveys and interview protocols tally students as sophisticated not for attending to these contextual nuances, but for subscribing broadly to the view that knowledge is tentative.

Hammer, CSCL2: Carrying Forward the Conversation (2001)

Powerful technology and powerful instruction 
D. Hammer, In CSCL 2: Carrying Forward the Conversation, T. Koschmann, R. Hall, & N. Miyake (Eds.), p 339-403, Mahwah, NJ: Erlbaum (2001).


Monday, January 12, 2009

Bao & Redish, UMD preprint (2001)

Model Analysis: Assessing the Dynamics of Student Learning
L. Bao & E. F. Redish, University of Maryland preprint (Mar 2001).

Abstract: In this paper we present a method of modeling and analysis that permits the extraction and quantitative display of detailed information about the effects of instruction on a class’s knowledge. The method relies on a cognitive model of thinking and learning that represents student thinking in terms of patterns of association in long-term memory structures that we refer to as schemas or mental models. As shown by previous research, students frequently fail to recognize relevant conditions that lead to appropriate uses of their mental models and, as a result, can use multiple models inconsistently to treat problems that appear equivalent to an expert. Once the most common mental models have been determined via qualitative research, they can be mapped onto probing instruments such as a multiple-choice test. We have developed Model Analysis, a method for analyzing the results of these instruments that treats the student as if he/she were in a mixed state – a state which, when probed with a set of scenarios under diverse contextual settings, gives the probability that the student will choose a particular mental model to analyze the scenario. We illustrate the use of our method by analyzing results from the Force Concept Inventory, a research-based multiple-choice instrument developed to probe students’ conceptual understanding of Newtonian Mechanics in a physics class. Model Analysis allows one to use qualitative research results to provide a framework for analyzing and interpreting the meaning of students’ incorrect responses on a well-designed research-based multiple-choice test. These results can then be used to guide instruction, either for an individual teacher or for developers of reform curricula.
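The "mixed state" idea in the abstract can be sketched numerically. In the usual presentation of the method, each student's response pattern on a set of questions is summarized as a unit vector whose squared components are the fractions of questions answered with each mental model, and the class is characterized by the average outer product of these vectors (a "model density matrix") whose diagonal gives class-average model use. The model labels and counts below are hypothetical, and this is only a sketch of that construction, not the paper's full analysis.

```python
import math

def student_vector(counts):
    """Unit vector u with u_i = sqrt(n_i / n), where n_i is how many
    of the student's n responses were associated with model i."""
    n = sum(counts)
    return [math.sqrt(c / n) for c in counts]

def density_matrix(class_counts):
    """Class model density matrix D = (1/N) * sum_k u_k u_k^T,
    averaged over the N students; its trace is always 1."""
    N = len(class_counts)
    m = len(class_counts[0])
    D = [[0.0] * m for _ in range(m)]
    for counts in class_counts:
        u = student_vector(counts)
        for i in range(m):
            for j in range(m):
                D[i][j] += u[i] * u[j] / N
    return D

# Hypothetical class of 3 students: per-student counts of responses
# matching a (Newtonian, impetus, other) model over 10 force questions.
class_counts = [(7, 3, 0), (2, 8, 0), (5, 4, 1)]
D = density_matrix(class_counts)
trace = sum(D[i][i] for i in range(3))
```

Here D[0][0] is the class-average fraction of Newtonian-model responses; off-diagonal elements are large when individual students mix models, which is the inconsistency the abstract describes.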

Bao & Redish, PER Suppl to Am J Phys (2001)

Concentration Analysis: A Quantitative Assessment of Student States
L. Bao & E. F. Redish, Physics Education Research Supplement to the American Journal of Physics, 69, S45-S53 (July 2001).

Abstract: Multiple-choice tests such as the Force Concept Inventory (FCI) provide useful instruments to probe the distribution of student difficulties on a large scale. However, traditional analysis often relies solely on scores (number of students giving the correct answer). This ignores what can be significant and important information: the distribution of wrong answers given by the class. In this paper we introduce a new method, concentration analysis, to measure how students’ responses on multiple-choice questions are distributed. This information can be used to study whether the students have common incorrect models and whether the question is effective in detecting student models. When combined with information obtained from qualitative research, the method allows us to identify cleanly what FCI results are telling us about student knowledge.
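The concentration measure described above has a compact closed form; the normalization commonly quoted for this method is C = (√m/(√m−1))·(√(Σ nᵢ²)/N − 1/√m), scaled so that C = 0 when responses are spread evenly over the m choices and C = 1 when every student picks the same choice. The sketch below implements that formula on a made-up five-choice item; the counts are hypothetical.

```python
import math

def concentration(counts):
    """Concentration factor for one multiple-choice question.

    counts: response counts, one entry per answer choice (m choices,
    N total responses). Returns C in [0, 1]: C = 0 means responses are
    spread evenly; C = 1 means everyone chose the same answer.
    """
    m = len(counts)
    N = sum(counts)
    norm = math.sqrt(sum(c * c for c in counts)) / N
    return (math.sqrt(m) / (math.sqrt(m) - 1)) * (norm - 1 / math.sqrt(m))

# Hypothetical FCI-style item with five choices and 30 students:
c_unanimous = concentration([30, 0, 0, 0, 0])   # everyone agrees: C near 1
c_uniform = concentration([6, 6, 6, 6, 6])      # evenly split:    C near 0
c_two_models = concentration([15, 12, 3, 0, 0]) # two popular choices: in between
```

A high C on a question most students get wrong signals a shared incorrect model rather than random guessing, which is the diagnostic use the abstract describes.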