I am a professor of Education in the College of Education at the University of Maryland and Director of the Maryland Assessment Research Center for Education Success (MARCES). I earned my degree from Syracuse University's psychology department with a specialization in measurement and statistics and the equivalent of an undergraduate major in mathematics. I completed a one-year post-doc at the Psychometric Laboratory in Chapel Hill and then took an academic position in the University of Georgia's psychology department. After 8 years and promotion to associate professor, I moved in 1978 to the College of Education as professor and chairperson. I served as department chairperson for 26 years and have recently stepped down to return to the life of a faculty member. I have had many great experiences as an administrator, including chairing the campus Senate in 1992 and chairing numerous campus committees before that time. I have also been an Associate Dean for the College of Education, developing a management information system and implementing total quality management efforts. The National Council on Measurement in Education and the American Educational Research Association have both asked me to chair a number of committees that have allowed me to provide a national service function, including the Committee on External Relations, Diversity Relations, and the General Committee on Special Interest Groups. Many years ago, I was elected to chair the Special Interest Group on Educational Statistics. For 1998-99, I chaired the NCME Awards Committee on Technical Contributions to Measurement Practice, and in 2005 I chaired its elections committee.
In addition to these administrative, national, and campus service activities, I maintain an active research program that spans several areas of interest. I have published a number of articles on test equating with Dr. Gary Skaggs, a former student of mine. I have also worked with the National Assessment Governing Board on related issues of equating state tests and international tests to NAEP results. Another research interest concerns NAEP and the issues of describing points on the performance scale and determining the cut-offs for various passing levels. More recently, I have been interested in teacher and school performance and have worked with Dr. Terry Alban on that issue. I am also interested in the problem of interpreting tests administered to students who receive accommodations and have several projects in that area. Value-added modeling is another area of interest; I recently organized a conference on the topic and am in the midst of editing a book based on the conference. The book is being published by JAM Press and will be titled "Value Added Models in Education: Theory and Applications."
My recent teaching has been in the area of measurement and evaluation. I regularly teach the "Advanced Evaluation" course in EDMS and have taught a variety of measurement courses, including the "Classroom Assessment" course and the "Intermediate Measurement" course. My work has also included consulting with a number of outside agencies; for example, I directed the evaluation of the court-ordered desegregation effort in St. Louis. I have also worked on test construction efforts for industry and on a large variety of assessment issues for the Internal Revenue Service and the Food and Drug Administration. For four years, the Maryland Higher Education Commission funded the Maryland Assessment Resource Center (now known as MARCES), which I continue to direct, to support all 2- and 4-year institutions' efforts to incorporate assessment studies and practices into the fabric of their campuses.
The most recent MARCES projects for which I am responsible have been renewed for five years of funding by the Maryland State Department of Education. We have provided the department with advanced psychometric support for the statewide testing effort in Maryland, and several publications have resulted from this work. Dr. Bill Schafer and Dr. Larry Rudner have worked with me on these projects, along with a number of graduate students. If you have any questions about my work, you can reach me at RL27@umail.umd.edu.
Lissitz, R. W., & Jiao, H. (Eds.) (in press). Computers and Their Impact on State Assessment: Recent History and Predictions for the Future. Charlotte: Information Age Publishing, Inc.
Lissitz, R. W. (Ed.) (2010). The Concept of Validity: Revisions, New Directions and Applications. Charlotte: Information Age Publishing, Inc.
Li, Y., & Lissitz, R. W. (2010). Applications of the Analytically Derived Asymptotic Standard Errors of IRT Item Parameter Estimates. Journal of Educational Measurement.
Lissitz, R. W., & Huynh, H. (2003). Vertical Equating for State Assessments: Issues and Solutions in Determination of Adequate Yearly Progress and School Accountability. Practical Assessment, Research and Evaluation.
Lissitz, R. W., & Schafer, W. D. (2002). Assessment in Educational Reform: Both Means and Ends. Allyn and Bacon.
Walston, J., & Lissitz, R. W. (2000). Computer-Mediated Focus Groups. Evaluation Review, 24, 457-483.
Li, Y., & Lissitz, R. W. (2000). An Evaluation of Multidimensional IRT Equating Methods by Assessing the Accuracy of Transforming Parameters onto a Target Test Metric. Applied Psychological Measurement, 24(2), 115-138.
Von Secker, C., & Lissitz, R. W. (1999). Estimating the impact of instructional practices on student achievement in science. Journal of Research in Science Teaching, 36(10), 1110-1126.
Lissitz, R. W. (1997). Statewide Performance Assessment: Continuity, Context and Concerns. Contemporary Education, 69(1), 15-19.
Lissitz, R. W. & Wainer-Yaffe, M. (1996). On the MARC: The efforts, obstacles, and successes of the Maryland Assessment Resource Center. Assessment Update, 8(3), 10-11.
Lissitz, R. W. & Bourque, M. L. (1995). Reporting NAEP results using standards. Educational Measurement: Issues and Practice, 14(2), 14-23, 31.
Lissitz, R. W. & Schafer, W. D. (1993). Policy-driven assessment: An old phenomenon with new wrinkles. Measurement and Evaluation in Counseling and Development, 16(1), 3-5.
Skaggs, G. & Lissitz, R. W. (1992). The consistency of detecting item bias across independent samples: Implications of another failure. Journal of Educational Measurement, 29, 227-242.