Maryland Assessment Research Center (MARC)

Projects

Project 25 - Empirical Investigation of Maryland Student High School Academic Performance Indicators as College and Career Readiness Measures

MSDE requested that MARC conduct a study to identify the high school academic performance indicators that best predict college and career success. The purpose of this study was therefore to explore the relationship between high school academic performance measures and actual success in postsecondary coursework. The high school academic performance indicators include state and national standardized tests and other measures of academic achievement in high school. The results are intended to inform the adoption of college and career readiness standards during high school education in the state of Maryland.
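As a rough, hypothetical illustration of the kind of predictive comparison described above (this is not the study's actual analysis; all variable names and data below are simulated placeholders), one could fit a simple logistic regression per candidate indicator and compare cross-validated AUCs against a binary postsecondary success outcome.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated placeholder data; the real study would use matched student records.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "hs_gpa": rng.normal(3.0, 0.5, n),
    "sat_total": rng.normal(1050, 180, n),
    "parcc_ela10": rng.normal(740, 25, n),
})
# Simulated outcome: 1 = success in first-year credit-bearing coursework.
logit = -20 + 3 * df["hs_gpa"] + 0.01 * df["sat_total"]
df["success"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

# One simple model per candidate indicator; a higher AUC indicates a stronger predictor.
for indicator in ["hs_gpa", "sat_total", "parcc_ela10"]:
    auc = cross_val_score(LogisticRegression(), df[[indicator]], df["success"],
                          cv=5, scoring="roc_auc").mean()
    print(f"{indicator}: mean cross-validated AUC = {auc:.3f}")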

MARC-CCRExploratory-FinalReport.pdf

 

Project 24 - PARCC Achievement and Post-Secondary Outcomes

This research study investigates the relationships between post-secondary academic outcomes and PARCC test scores as well as college admission test (SAT and ACT) scores.

 

Executive Summary

 

Project 23 - SAT Achievement and Post-Secondary Outcomes

This research study investigates the relationships between post-secondary academic outcomes and SAT scores.

 

Executive Summary

 

Project 22 - Summary of Performance Level Descriptor (PLD) Standard Setting Methods

Project 21 - Summary of State Growth Measures

Growth measures used by states or consortia are summarized.

Growth Measure Summary

Project 20 - Summary of State Science Assessments

Science assessments used by states are summarized.

Science Assessment Summary

Project 19 - Combined Score Options for High School Graduation Assessment

Project 18 - The Relationship between the PARCC Test Scores and the College Admission Tests: SAT/ACT/PSAT with 2017 PARCC Data

The purpose of this research project is to explore the relationships between PARCC test scores and college admission test scores. An earlier study investigated the relationships between the PARCC test scores and their respective college admission test scores in the same content area using the 2016 PARCC data. The current project replicates that study using the 2017 PARCC ALG01, ALG02, and ELA10 tests and SAT, ACT, and PSAT data.

Project 17 - Investigating the Concordance Relationship between the HSA Cut Scores and the PARCC Cut Scores Using the 2017 PARCC Test Data

The purpose of this study is to conduct a replication investigation using the same linking methods as in our 2016 study and the 2017 PARCC test data to obtain the PARCC equivalents of the HSA cut scores and the HSA equivalents of the PARCC cut scores. The two options we explored are listed as follows.
1. Option I: Using PSAT as an external common test to link HSA and PARCC tests via two-step linking. 
2. Option II: Using the propensity score matching method to form matched equivalent groups so that the equivalent group equipercentile linking method can be used to map the HSA cut scores onto the PARCC scales directly, and vice versa (a minimal sketch of this linking step appears below).
Comparisons between PARCC cut scores from the 2016 study and this study were also performed.
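The following minimal sketch (our illustration, not MARC's production code) shows the core of the Option II linking step: once propensity score matching has produced comparable HSA and PARCC groups, a cut score on one scale is mapped to the other by matching percentile ranks. The scores below are simulated placeholders.

import numpy as np

def equipercentile_equivalent(x_cut, x_scores, y_scores):
    # Percentile rank of the cut score in the matched group that took test X (0-100).
    pr = 100.0 * np.mean(x_scores <= x_cut)
    # Score on the Y scale holding the same percentile rank in the matched Y group.
    return np.percentile(y_scores, pr)

# Simulated matched-group scores; the actual study would use the PSM-matched samples.
rng = np.random.default_rng(1)
hsa_scores = rng.normal(400, 40, size=5000)
parcc_scores = rng.normal(740, 30, size=5000)
print(equipercentile_equivalent(412, hsa_scores, parcc_scores))  # PARCC equivalent of an illustrative HSA cut of 412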

Project 16 - Investigating the Concordance Relationship between the HSA Cut Scores and the PARCC Cut Scores Using the 2016 PARCC Test Data

The purpose of this study is to conduct a replication investigation using the same linking methods as in our 2015 study and the 2016 PARCC test data to obtain the PARCC equivalents of the HSA cut scores and the HSA equivalents of the PARCC cut scores. The two options we explored are listed as follows.
1. Option I: Using PSAT as an external common test to link HSA and PARCC tests via two-step linking. 
2. Option II: Using the propensity score matching method to come up with matched equivalent groups so that the equivalent group equipercentile linking method can be used to map the HSA cut scores onto the PARCC scales directly, and vice versa. 
Comparisons between PARCC cut scores from the 2015 study and this study were also performed.

 

Technical Report

 

 

Project 15 - The Relationship between the PARCC Test Scores and the College Admission Tests: SAT/ACT/PSAT with 2016 PARCC Data

The purpose of this research project is to explore the relationships between PARCC test scores and college admission test scores. An earlier study investigated the relationships between the PARCC test scores and their respective college admission test scores in the same content area using the 2015 PARCC data. The current project replicates that study using the 2016 PARCC ALG01, ALG02, and ELA10 tests and SAT, ACT, and PSAT data.

Project 14 - Investigating the Concordance Relationship between the MSA and PARCC Test Scores Using Propensity Score Matching and Extrapolation Methods: Using 2016 PARCC Test Data

The purpose of this study is to obtain the PARCC equivalents of the MSA cut scores using 2016 assessment data. To map the MSA cut scores to the PARCC scales, the MARC team previously conducted a linking study investigating the concordance relationship between the PARCC and MSA tests for both math and English/reading at each grade level using 2015 data. The current study uses 2016 assessment data to investigate the same concordance relationships. Two options were explored in this study.
1. Option I: Using the propensity score matching (PSM) method to form matched equivalent groups and performing equipercentile linking based on these equivalent groups.
2. Option II: Using data on the percentages of test takers classified into the different MSA performance levels from the 2004 through 2012 test administrations at each grade to extrapolate and predict the trend for the 2015 and 2016 tests (see the sketch below).
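A minimal sketch of the Option II idea (our illustration with made-up percentages, not the study's data or code): fit a simple trend to the 2004-2012 percentages and extrapolate it forward.

import numpy as np

years = np.arange(2004, 2013)                    # 2004-2012 administrations
pct_proficient = np.array([58, 61, 64, 67, 70,   # placeholder values for the
                           72, 74, 76, 78.0])    # percent at or above proficient

slope, intercept = np.polyfit(years, pct_proficient, deg=1)   # linear trend
for target_year in (2015, 2016):
    predicted = slope * target_year + intercept
    print(f"{target_year}: predicted {predicted:.1f}% at or above proficient")

In an approach of this kind, the extrapolated percentage for a given year would then be located on that year's PARCC score distribution to identify the corresponding cut score.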

 

Technical Report

 

Project 13 - Investigating the Concordance Relationship between the MSA and PARCC Scores Using Propensity Score Matching and Extrapolation Methods

This study obtains the PARCC equivalents of the MSA cut scores. More specifically, the MSA Math cut scores for being proficient and advanced need to be mapped onto the PARCC Math scales, and the MSA Reading cut scores for being proficient and advanced need to be mapped onto the PARCC ELA scales at each grade level from grades 3 to 8. To map the MSA cut scores to the PARCC scales, two options were explored. 
1. Option I: Using the propensity score matching (PSM) method to form matched equivalent groups and performing equipercentile linking based on these equivalent groups.
2. Option II: Using data on the percentages of test takers classified into the different MSA performance levels from the 2004 through 2012 test administrations at each grade to extrapolate and predict the trend for the 2015 tests.

 

Technical Report

 

Project 12 - Investigating the Concordance Relationship between the HSA Cut Scores and the PARCC Cut Scores

The purpose of this study is to obtain the PARCC equivalent of the HSA cut score and the HSA equivalent of the PARCC cut score that separates performance level 2 from performance level 3. Two options were explored to create the concordance tables.
1. Option I: Using PSAT as an external common test to link HSA and PARCC tests via two-step linking. 
2. Option II: Using the propensity score matching method to come up with matched equivalent groups so that the equivalent group equipercentile linking method can be used to map the HSA cut scores onto the PARCC scales directly, and vice versa.

 

Technical Report

 

Project 11 - The Relationship between the PARCC Test Scores and the College Admission Tests: SAT/ACT/PSAT

The purpose of this research project is to explore the relationships between the PARCC test scores and the college admission test scores. Because the MARC team received data with anonymous student identifiers and the sample sizes after matching students were adequate, data analyses were performed for the PARCC ALG01, ALG02, and ELA10 tests and the SAT, ACT, and PSAT to examine the relationships between the PARCC test scores and their respective college admission test scores in the same content area.
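As an illustration of this kind of matched-score analysis (a hypothetical sketch with placeholder data, not MARC's data layout or code), students can be joined on the anonymous identifier and the same-content-area scores correlated.

import pandas as pd

# Placeholder records keyed by an anonymous student identifier.
parcc = pd.DataFrame({"student_id": [1, 2, 3, 4],
                      "alg01": [735, 742, 728, 751],
                      "ela10": [740, 738, 725, 755]})
sat = pd.DataFrame({"student_id": [2, 3, 4, 5],
                    "sat_math": [540, 500, 610, 580],
                    "sat_erw": [520, 480, 600, 590]})

# An inner join keeps only students who took both assessments.
matched = parcc.merge(sat, on="student_id", how="inner")
print("matched sample size:", len(matched))

# Same-content-area correlations (e.g., PARCC Algebra I with SAT Math).
print("ALG01 vs SAT Math:", matched["alg01"].corr(matched["sat_math"]))
print("ELA10 vs SAT ERW:", matched["ela10"].corr(matched["sat_erw"]))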

Project 10 - Proper Use of Assessment Results from Common Core State Standards

Since the Common Core State Standards (CCSS) were released in 2010, nearly every U.S. state has formally adopted these standards in mathematics and English language arts (ELA), and many states joined one of two consortia to develop and implement common tests. Given the high stakes associated with the use of scores from the Common Core assessments as well as the need for proper diagnosis of student learning, this report focuses on a discussion of three fundamental concerns: (1) score comparability and differential item functioning (DIF) for multiple groups, (2) the selection of software packages for multiple-group IRT analysis, and (3) cognitive diagnostic models (CDMs).

 

Project 9 - Score Comparability and Differential Item Functioning

Under the Common Core State Standards (CCSS), tests developed by each consortium are based on the same common core standards; however, states within one consortium may adopt different curricula and instruction, and student populations across states can be very diverse. As acknowledged by PARCC, test score comparability across states is an important issue to be addressed. In this part, we briefly discuss methods for detecting DIF items across multiple groups as well as multiple-group IRT models for dealing with DIF.
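One classic DIF procedure is the Mantel-Haenszel method; the sketch below (our illustration, not code from the report) computes its ETS delta index for a single dichotomous item and two groups at a time. Multiple-group screening typically repeats such pairwise comparisons or moves to multiple-group IRT models.

import numpy as np

def mantel_haenszel_delta(item, total, group):
    """Mantel-Haenszel DIF index for one dichotomous item and two groups.

    item  : 0/1 responses to the studied item
    total : matching criterion (e.g., total test score)
    group : labels, "ref" or "focal"
    """
    item, total, group = map(np.asarray, (item, total, group))
    num = den = 0.0
    for k in np.unique(total):                              # stratify on the matching score
        s = total == k
        a = np.sum((group[s] == "ref") & (item[s] == 1))    # reference group, correct
        b = np.sum((group[s] == "ref") & (item[s] == 0))    # reference group, incorrect
        c = np.sum((group[s] == "focal") & (item[s] == 1))  # focal group, correct
        d = np.sum((group[s] == "focal") & (item[s] == 0))  # focal group, incorrect
        n = s.sum()
        num += a * d / n
        den += b * c / n
    alpha_mh = num / den                  # common odds ratio across strata
    return -2.35 * np.log(alpha_mh)       # ETS delta metric; values near 0 suggest little DIF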

 

Project 8 - Software Packages For Multiple Group IRT Analysis and Accuracy of Parameter Estimates

In this report, we compare several IRT software packages for multiple-group analysis, including BILOG-MG, MULTILOG, IRTPRO, flexMIRT, Mplus, BMIRT, and FLIRT (an R package). Because different software programs employ different defaults and/or options for model identification and commonality, this report provides information on both issues. The review focuses on the use of software programs for multiple-group IRT analysis in the context where the same test form is administered to different groups.
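As general background on the model-identification issue mentioned above (a textbook-style sketch, not a description of any particular package's defaults), a multiple-group two-parameter logistic IRT model can be written as

P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp[-a_j(\theta_i - b_j)]}, \qquad \theta_i \sim N(\mu_{g(i)}, \sigma^2_{g(i)}),

and is commonly identified by fixing the reference group's mean and standard deviation (\mu = 0, \sigma = 1), freely estimating the other groups' means and variances, and constraining the item parameters a_j and b_j of anchor (non-DIF) items to be equal across groups. Software packages differ in whether such constraints are defaults or must be requested explicitly.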

 

Project 7 - Cognitive Diagnostic Models

The purpose of this document is to provide theoretical background on cognitive diagnostic models (CDMs) by first explaining some technical terminology and then providing an overview of the models that could be used in practice. The goal is to pave the way for further analyses of these types of diagnostic assessments, in an overall effort to provide background information about commonly studied diagnostic models that could be useful for the stakeholders of such tests, particularly in the state of Maryland. Ultimately, we hope that this review will shed light on the models that will be most useful for giving students and teachers accurate and useful information in the use of formative assessments as proposed by the CCSSO, PARCC, and Maryland public schools.
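As one concrete example of a commonly studied CDM (a minimal sketch of our own, not drawn from the report), the DINA model posits that an examinee answers an item correctly with probability 1 - slip when they have mastered every attribute the item's Q-matrix row requires, and with the guessing probability otherwise.

import numpy as np

def dina_correct_prob(alpha, q_row, slip, guess):
    """alpha: examinee's attribute-mastery vector (0/1); q_row: the item's Q-matrix row."""
    eta = int(np.all(alpha >= q_row))     # 1 only if every required attribute is mastered
    return (1 - slip) if eta else guess

# Example: the item requires attributes 1 and 3; the examinee masters attributes 1 and 2.
print(dina_correct_prob(np.array([1, 1, 0]), np.array([1, 0, 1]), slip=0.1, guess=0.2))
# -> 0.2: lacking attribute 3, the examinee can only answer correctly by guessing.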

 

Project 6 - Issues and Considerations regarding Linking between Old and New Assessment

Beginning in the fall of 2014, according to the new standards set forth by the Common Core State Standards (CCSS) initiative, the state of Maryland, under the Partnership for Assessment of Readiness for College and Careers (PARCC), will replace the Maryland School Assessment (MSA) with the PARCC assessment, which differs in content coverage, scope and sequence, and psychometric properties, among other features. Such discrepancies between the new and old assessments require a careful linking study in order to compare the two assessments and correctly measure progress. This outline was created to address these problems and suggest different linking designs.

 

Project 5 - Issues and Considerations Regarding Standard Setting

Standard setting is the process of establishing cut scores on a test; these cut scores indicate whether a student has achieved an established level of proficiency. As new summative assessments for the new common core standards will be implemented in the 2014-2015 school year by two consortia (PARCC and Smarter Balanced), standard setting for the new common core assessments will be conducted. Because there are many distinct differences between the assessments developed by the two consortia, the process of setting performance standards will not be the same. This outline was created to provide a general description of standard setting, detail the differences in standard setting between the two consortia, and discuss the related issues and considerations.

 

Project 4 - Issues and Technical Considerations Related to Transitioning to a New Test According to Common Core State Standards (CCSS)

An executive summary was prepared to identify the issues, and the corresponding technical suggestions, related to adopting the new national testing effort in Maryland. The issues we identified fall into four broad categories: psychometric issues (e.g., scaling, linking, and DIF), technology issues (e.g., readiness and security), implementation issues (e.g., test delivery, scoring, and reporting), and policy issues (e.g., student growth and evaluation of teacher effectiveness). For each issue that we identified, potential technical considerations are provided.

 

Project 3 - Context Effect on Item Parameter Invariance

Context effects occur when item parameters are influenced by item location, order effects, or the characteristics of other items in a test. Although a large body of research on context effects has shown that changes in item position can have a substantial impact on both item parameter estimates and the subsequent equating results, findings are inconsistent, and context effects did not always significantly affect item difficulty or item discrimination. Based on a thorough literature review, this project summarized the research findings on item parameter invariance as well as equating under the influence of context effects. Recommendations from the literature on test construction and development were also provided.

 

Project 2 - The Use of Student Learning Objectives (SLOs) in Teacher Evaluation

Students’ achievement data have been increasingly used to assist in evaluating teacher effectiveness. Using only state test data means that a large part of the teaching force would not be able to participate in such evaluations. The Student Learning Objective (SLO) is an option for incorporating student performance results into the evaluation of teachers in subjects and grades not assessed by state standardized tests. This executive summary investigated how SLOs are used for teacher evaluation at both the state and the district level and suggested the considerations states or districts need to address as they evaluate teachers’ performance using SLOs. Examples of the pilot use of SLOs in teacher evaluation at both the state and the district level were provided.

 

Project 1 - Student Characteristics and CBT Performance: An Overview of the Literature

One major change in the field of education and assessment under the influence of modern technology is the transition from paper-based to computer-based assessment. Computer-based testing (CBT) is gaining popularity over the traditional paper-and-pencil test (PPT) due to the many advantages that computer-based assessment provides. Meanwhile, more and more educators and researchers have shown interest in investigating the factors that influence students’ CBT performance. That is, for whom is CBT best suited? Or, which student characteristics are important for the effective use of CBT? The objective of this project was to examine the relationship between student characteristics and CBT performance, compared with PPT. In the literature, factors related to student characteristics, such as demographic attributes, learning style, computer familiarity, and test anxiety, were found to have somewhat different relations with CBT performance than with PPT performance.

Student Characteristics and CBT Performance / Annotation-Abstract and Key Points / Reference for the Literature Review