Science Education Grad Students Receive Emerging Virtual Scholar Award at AERA Conference

COLLEGE PARK, MD (July 2016) – At the 2016 annual meeting of the American Educational Research Association (AERA), Science Education graduate students Ashley Coon, Kelly Mills, and Xiaoyang Gong received the Emerging Virtual Scholar Graduate Student Award from the Applied Research on Immersive Environments for Learning (ARIEL) special interest group for their poster presentation, “Validating an IVE-based Assessment in Diverse Student Populations Using Traditional Measures of Performance.”

The students co-authored their presentation with Dr. Diane Ketelhut, an associate professor in the Department of Teaching and Learning, Policy and Leadership, and Dr. Brian Nelson of Arizona State University. As lead author, Ashley accepted the award on behalf of her fellow researchers.

The research was conducted as part of Dr. Ketelhut’s SAVE Science grant. SAVE Science is an alternative assessment embedded in an immersive virtual environment, designed to minimize differences based on student culture. Students are assessed based on their performance in solving a scientific challenge.

Dr. Brian Nelson, Dr. Diane Jass Ketelhut, and Science Education graduate students Ashley Coon, Kelly Mills, and Xiaoyang Gong, with their award-winning presentation.

To validate the assessment – that is, to demonstrate that it tests what it is supposed to test – the researchers embedded multiple-choice and open-response questions that were contextualized to the SAVE Science environment. Prior research has shown that, in traditional assessment formats, these types of questions are biased against students outside the dominant culture, e.g., those who are not White, middle or upper class, or native English speakers. Ashley and her fellow researchers wanted to see if this bias persisted even when the questions were delivered via SAVE Science’s non-traditional assessment format.

“These measures – these kinds of questions – are developed by highly-educated, relatively affluent, primarily White people, so they privilege the dominant culture’s speech patterns, experiences, and ways of knowing without realizing they are doing so, or making conscious efforts to consider whether these questions are equally accessible to students from diverse backgrounds,” Ashley says. “There are also more tangible factors: children from more affluent households and schools may have access to tutors and test preparation classes where they are ‘trained’ on how to be successful with these question types.”

“One problem with typical high-stakes tests is that they often presume an understanding of a context that is immaterial to the assessment but crucial to being able to solve a problem,” Dr. Ketelhut adds. “For example, on a question assessing adaptation, one test asked students to explain why a fish with stripes was adapted to a weedy pond. If you’re a rural student, this context makes sense and you can connect to it, but if you live in an urban environment and have never seen a weed-filled pond, you would have to guess that weeds are anchored at the bottom of the pond and float straight up, allowing the fish to blend in.”

Dr. Ketelhut points out that test questions should not require students to “guess” in this way. Any question whose difficulty varies for reasons unrelated to the content being assessed has questionable validity, she says. She also notes that students’ reading levels affect their ability to understand and answer questions; reading abilities, which often divide along lines of race and socioeconomic status, are frequently taken for granted by test makers, further undermining validity.

According to the findings, the bias does persist: despite its non-traditional format, the SAVE Science assessment’s validity measures replicated the racial/ethnic and socioeconomic achievement gaps observed in traditional assessments, with students who are not White and students from lower socioeconomic communities scoring significantly lower. The researchers therefore concluded that alternative assessments may not live up to their promise of leveling the playing field for students from all cultural backgrounds if they rely on inherently inequitable validation measures such as multiple-choice and open-response questions.

“This brought us to a kind of impasse,” Ashley says. “How does one validate an alternative assessment when the validation measures inherently replicate the bias that we are trying to overcome? We hoped to stimulate thinking around the issue by posing this question to the community.”

Read more about SAVE Science in the fall 2014 issue of Endeavors.

Dr. Diane Jass Ketelhut is an associate professor in the Department of Teaching and Learning, Policy and Leadership. She holds certification in secondary school science and was a curriculum specialist and teacher of mathematics and science in grades 5-12 for fifteen years. She earned her Ed.D. in Learning and Teaching from Harvard University.

Dr. Brian C. Nelson is an associate professor with a joint appointment in the Mary Lou Fulton Teachers College and the School of Computing, Informatics, and Decision Systems Engineering at Arizona State University. His research focuses on the theory, design, and implementation of computer-based learning environments.

-end-

For more information on the College of Education, visit: www.education.umd.edu

or contact

Audrey Hill, Associate Director of Communications, at: audreyh@umd.edu