I received my doctoral degree from the University of Rochester in 1969 and joined the faculty of the University of Maryland, College Park, that same year. Over the 30-plus years I was with the Department of Measurement, Statistics, and Evaluation (EDMS) at the University of Maryland, I typically taught applied statistics courses through the levels of experimental design and multiple regression, along with classroom assessment and advanced research methods courses.

My research in the 1970s was mostly oriented toward applied statistics, particularly simultaneous inference and, within that, variations on the Bonferroni technique. I also worked on measurement of dispersion in categorical data, estimation of parameters for normal mixture models, and estimation of reliability for explicitly composite tests. In the 1980s, I became interested in assessment education for classroom teachers and other school professionals. My work also included developing a statistics package for the Apple II series of personal computers (at one time the package was heavily used and distributed nationally along with a textbook, but its use has, of course, declined along with the Apple II itself). In the 1990s, I became interested in meta-analysis, but most of my attention continues to be focused on applied assessment and accountability in educational contexts. I am a former editor of Measurement and Evaluation in Counseling and Development and am currently on the editorial board of Applied Measurement in Education. I am also currently co-editor of Practical Assessment, Research & Evaluation, an electronic journal available at http://pareonline.net/.
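For readers unfamiliar with the Bonferroni technique mentioned above, here is a minimal sketch of the classical correction (not of any published variation): with m simultaneous tests, comparing each p-value to alpha/m holds the familywise Type I error rate at or below alpha. The function name and p-values below are illustrative only.

```python
# A minimal sketch of the classical Bonferroni correction (illustrative
# only; not a reproduction of any published variation). With m tests,
# comparing each p-value to alpha / m keeps the probability of at least
# one false rejection at or below alpha.

def bonferroni_reject(p_values, alpha=0.05):
    """Return True for each test that remains significant after the
    Bonferroni adjustment."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

# Five tests at a familywise alpha of .05: each is judged against
# .05 / 5 = .01.
print(bonferroni_reject([0.003, 0.02, 0.04, 0.009, 0.30]))
# -> [True, False, False, True, False]
```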

Believing that much of what we know becomes translated into effective practice through our professional associations, I became active in service at both the national and local levels. I have been president of the Association for Assessment in Counseling and Education (AACE) and of the Educational Statisticians Special Interest Group (SIG) of the American Educational Research Association (AERA). I was founding chair of AERA's Classroom Assessment SIG and have been chair of AERA's Professors of Educational Research SIG. I was a representative of the American Counseling Association (ACA) to the Joint Committee on Testing Practices (JCTP) and was co-chair of the JCTP working group that prepared its statement, Test-Taker Rights and Responsibilities. I chaired the AACE committee that developed its Statement on Legislation Affecting Testing for Selection in Educational and Occupational Programs, chaired the joint ACA and AACE committee that revised their statement, Responsibilities of Users of Standardized Tests, and was a member of the National Council on Measurement in Education (NCME) committee that wrote the Code of Professional Responsibilities in Educational Measurement. In the area of assessment education, I chaired a committee of AACE and American School Counselor Association members that developed their joint statement, Standards for Assessment Education for School Counselors, and chaired an NCME committee that made recommendations to NCME about assessment education for teachers. Locally, I was president of the Maryland Personnel and Guidance Association and served several years as treasurer of the Maryland Assessment Group.

From September 1997 to August 1999, I served as Director of Student Assessment with the Maryland State Department of Education, a full-time appointment under a contractual arrangement with the University of Maryland. The Assessment Branch was responsible for administering the Maryland School Performance Assessment Program, a yearly performance assessment given to all Maryland public-school third, fifth, and eighth graders in the areas of reading, writing, language usage, social studies, science, and mathematics. The Branch also administered the Maryland Functional Testing Program in citizenship, mathematics, reading, and writing, along with its required nationally normed commercial test, and made progress on developing the Maryland High School Assessment program, a series of end-of-course tests that eventually replaced the functional assessments as requirements for high school graduation.

I retired on July 1, 2000, but continue to work on educational issues as an Affiliated Professor (Emeritus) in EDMS. Among other things, I participate in applied research efforts with the EDMS Department's Maryland Assessment Research Center for Education Success (MARCES); contribute, typically as a Technical Advisory Committee member, to testing programs in Maryland and several other states; and frequently consult with the U.S. Department of Education and other national groups in the areas of assessment and accountability. Although I am affiliated with the University, I no longer live in Maryland; the best way to contact me is through e-mail.

In 2001 I received a Citation for Research Merit from AERA's Classroom Assessment SIG. In 2006 I was co-recipient of the American Educational Research Association's Measurement and Research Methodology Division Award for Significant Contribution to Educational Measurement and Research Methodology for work published in a special issue of Applied Measurement in Education on vertically moderated standards. In 2008 I received a Distinguished Reviewer Award from the Buros Institute of Mental Measurements. In 2010 I was named a Fellow of the American Counseling Association, having been nominated by one of its divisions, the Association for Assessment in Counseling and Education.

Schafer, W. D., Lissitz, R. W., Zhu, X., Zhang, Y., Hou, X., & Li, Y. (2012). Evaluating teachers and schools using student growth models. Practical Assessment, Research & Evaluation, 17(17). [Available online: http://pareonline.net/getvn.asp?v=17&n=17].

Schafer, W. D. (2011). Developing instructionally sensitive formative assessments. Eleventh Annual Maryland Assessment Conference: Informing the Practice of Teaching Using Formative and Interim Assessment: A Systems Approach, October 20.

Schafer, W. D., Coverdale, B. J., Luxenberg, H., & Jin, Y. (2011). Quality control charts in large-scale assessment programs. Practical Assessment, Research & Evaluation, 16(15). [Available online: http://pareonline.net/getvn.asp?v=16&n=15].

Schafer, W. D. & Hou, X. (2011). Test score reporting referenced to doubly-moderated cut scores using splines. Practical Assessment, Research & Evaluation, 16(13). [Available online: http://pareonline.net/getvn.asp?v=16&n=13].

Schafer, W. D. (2011). Aligned by design: A process for systematic alignment of assessments to educational domains. In G. Schraw & D. R. Robinson (Eds.), Assessment of higher order thinking skills (pp. 395-418). New York, NY: Information Age Publishers.

Schafer, W. D., Coverdale, B., Luxenberg, H., & Jin, Y. (2011). Quality control charts in large-scale assessment programs. National Council on Measurement in Education Convention, New Orleans, April 10.

Schafer, W. D. (2010). Review [of The California Measure of Mental Motivation]. In R. A. Spies, J. F. Carlson, & K. F. Geisinger (Eds.), The eighteenth mental measurements yearbook (pp. 89-90). Lincoln, NE: Buros Institute of Mental Measurements.

Schafer, W. D. (2010). Review [of The Wide Range Achievement Test Fourth Edition Progress Monitoring Version]. In R. A. Spies, J. F. Carlson, & K. F. Geisinger (Eds.), The eighteenth mental measurements yearbook (pp. 722-724). Lincoln, NE: Buros Institute of Mental Measurements.

Schafer, W. D. (2010). Remarks on evolving testing programs. National Council on Measurement in Education Conference, Denver, May 2.

Schafer, W. D., Wang, J., & Wang, V. (2009). Validity in action: State assessment validity evidence for compliance with NCLB. In R. W. Lissitz (Ed.), The concept of validity (pp. 173-193). Charlotte, NC: Information Age.

Schafer, W. D. & Lissitz, R. W. (Eds.). (2009). Alternate assessments based on alternate achievement standards. Baltimore: Brookes Publishing.

Hall, S. E., Kehe, M. D., & Schafer, W. D. (2009). The alternate Maryland school assessment. In W. D. Schafer & R. W. Lissitz (Eds.), Alternate assessments based on alternate achievement standards (pp. 189-220). Baltimore: Brookes Publishing.

Schafer, W. D. (2009). Principles unique to alternate assessments. In W. D. Schafer & R. W. Lissitz (Eds.), Alternate assessments based on alternate achievement standards (pp. 365-368). Baltimore: Brookes Publishing.

Schafer, W. D. (2009). Studying impacts of assessment and accountability programs. Large-Scale Assessment Conference of the Council of Chief State School Officers, Los Angeles, June 24.

Knapp, T. R. & Schafer, W. D. (2009). From gain score t to ANCOVA F (and vice versa). Practical Assessment, Research & Evaluation, 14(6). [Available online: http://pareonline.net/getvn.asp?v=14&n=6].

Schafer, W. D. (2009). Review [of Employability Competency System Appraisal Test (ECS Appraisal) of the CASAS – Comprehensive Adult Student Assessment Systems]. In E. A. Whitfield, R. W. Feller, & C. Wood (Eds.), A counselor’s guide to career assessment instruments (5th ed.) (pp. 137-143). Broken Arrow, OK: National Career Development Association.

Schafer, W. D., Wang, J., & Wang, V. (2008). Validity in action: State assessment validity evidence for compliance with NCLB. Ninth Annual Maryland Assessment Conference: The Concept of Validity: Revisions, New Directions and Applications, College Park, October 9.

Schafer, W. D. (2008). Setting state standards for school change: A commentary on Arkansas’s experience. National Council on Measurement in Education Convention, New York City.

Schafer, W. D. (2008). Replicated field study design. In J. W. Osborne (Ed.), Best practices in quantitative methods (pp. 147-154). Thousand Oaks, CA: Sage.

Moody, M., Schafer, W. D., & Seikaly, L. (2007). Implementing cognition-based learning goals in the classroom: The state role. In R. W. Lissitz (Ed.), Assessing and modeling cognitive development in school (pp. 205-216). Maple Grove, MN: JAM Press.

Schafer, W. D. (2007). Review [of The Test of Everyday Attention]. In K. F. Geisinger, R. A. Spies, J. F. Carlson, & B. S. Plake (Eds.), The seventeenth mental measurements yearbook (pp. 788-790). Lincoln, NE: Buros Institute of Mental Measurements.

Schafer, W. D. (2007). Principles unique to alternate assessments. Eighth Annual Maryland Assessment Conference: Alternate Assessment, College Park, October 12.

Hall, S., Kehe, M., & Schafer, W. D. (2007). State exemplar: Maryland's alternate assessment using alternate achievement standards. Eighth Annual Maryland Assessment Conference: Alternate Assessment, College Park, October 11.

Schafer, W. D., Liu, M., & Wang, H. (2007). Content and grade trends in state assessments and NAEP. Practical Assessment, Research & Evaluation, 12(9). [Available online: http://pareonline.net/getvn.asp?v=12&n=9].

Schafer, W. D. (2007). Multiple measures: The sources matter. Washington, DC: National Education Association, August 13.

Schafer, W. D. (2007). The need for assessment limits in describing learning domains. Measurement, 4, 258-261.

Schafer, W. D., Liu, M., & Wang, H. (2007). Cross-grade comparisons among statewide assessments and NAEP. American Educational Research Association Convention, Chicago. [Available online: http://marces.org/presentation/State_Comparisons_Paper07_05_18.pdf].

Schafer, W. D. (2007). Comments on setting performance standards for schools in accountability programs: Policy, technical, and operational issues. National Council on Measurement in Education Convention, Chicago.

Moody, M. & Schafer, W. D. (2006). Implementing cognition-based learning goals in classrooms: The state role. Conference on Assessing and Modeling Cognitive Development in School: Intellectual Growth and Standard Setting, University of Maryland, College Park, MD, October 19.

Schafer, W. D. (2006). Vertical and growth scales. Council of Chief State School Officers Conference on Large-Scale Assessment, San Francisco.

Schafer, W. D. (2006). Growth scales as an alternative to vertical scales. Practical Assessment, Research & Evaluation, 11(4). [Available online: http://pareonline.net/getvn.asp?v=11&n=4].

Schafer, W. D. & Twing, J. S. (2006). Growth scales and pathways. In R. W. Lissitz (Ed.), Longitudinal and value-added models of student performance (pp. 321-345). Maple Grove, MN: JAM Press.

Lissitz, R. W., Doran, H., Schafer, W. D., & Wilhoft, J. (2006). Growth modeling, value added modeling, and linking: An introduction. In R. W. Lissitz (Ed.), Longitudinal and value-added models of student performance (pp. 1-46). Maple Grove, MN: JAM Press.

Schafer, W. D. (2005). Comments on school-level database design and use. National Research Council Symposium on Use of School-Level Data to Evaluate Federal Education Programs, Washington, DC, December 9. [Available online: http://www7.nationalacademies.org/bota/School-Level%20Data_%20Schafer-Remarks.pdf].

Schafer, W. D. & Twing, J. S. (2005). Growth scales and pathways. Conference on Longitudinal Modeling of Student Achievement, University of Maryland, College Park, MD, November 8.

Schafer, W. D. (2005). Review [of the Kaufman Iowa Tests of Educational Development, Forms A and B]. In R. A. Spies & B. S. Plake (Eds.), The sixteenth mental measurements yearbook (pp. 488-491). Lincoln, NE: Buros Institute of Mental Measurements.

Schafer, W. D. (2005). Review [of the Science Assessment Series 1 and 2]. In R. A. Spies & B. S. Plake (Eds.), The sixteenth mental measurements yearbook (pp. 918-920). Lincoln, NE: Buros Institute of Mental Measurements.

Schafer, W. D. (2005). Technical documentation for alternate assessments. Practical Assessment, Research & Evaluation, 10(10). [Available online: http://pareonline.net/getvn.asp?v=10&n=10].

Li, Y. H. & Schafer, W. D. (2005). Increasing the homogeneity of CAT's item-exposure rates by minimizing or maximizing varied target functions while assembling shadow tests. Journal of Educational Measurement, 42(3), 245-269.

Schafer, W. D., Gagne, P., & Lissitz, R. W. (2005). Resistance to confounding style and content in scoring constructed-response items. Educational Measurement: Issues and Practice, 24(2), 22-28.

Schafer, W. D., Papapolydorou, M., Rahman, T., & Parker, L. (2005). Effects of test administrator characteristics on achievement test scores. National Council on Measurement in Education Convention, Montreal. ERIC Document ED510145. [Available online: http://marces.org/files/NCMESchaferPaperFNL3%5B1%5D%5B1%5D.8.05.doc].

Schafer, W. D. (2005). Criteria for standard setting from the sponsor's perspective. Applied Measurement in Education, 18(1), 61-81.

Li, Y. H. & Schafer, W. D. (2005). Trait parameter recovery using multidimensional computerized adaptive testing in reading and mathematics. Applied Psychological Measurement, 29(1), 3-25.

Schafer, W. D. (2004). Review [of Tindal, G. & Haladyna, T. M. (Eds.), Large-Scale Assessment Programs for All Students: Validity, Technical Adequacy, and Implementation. Mahwah, NJ: Lawrence Erlbaum Associates]. Contemporary Psychology, 49, 622-625.

Schafer, W. D. (2004). Informing test takers. In J. E. Wall & G. R. Walz (Eds.), Measuring up: Assessment issues for teachers, counselors, and administrators (pp. 65-78). Greensboro, NC: CAPS Press.

Schafer, W. D. (2004). Accountability and school improvement: Helping the public understand the statistics behind one state's formula. In C. Boston, L. M. Rudner, L. J. Walker, & L. Crouch (Eds.), What reporters need to know about test scores. Washington, DC: Education Writers Association.

Li, Y. H. & Schafer, W. D. (2004). The context effects of multidimensional CAT on the accuracy of multidimensional abilities and item exposure rates. American Educational Research Association Convention, San Diego.

Schafer, W. D. (2004). Standard setting from a state perspective. National Council on Measurement in Education Convention, San Diego.

Ekstrom, R. B., Elmore, P. B., Schafer, W. D., Trotter, T. V., & Webster, B. (2004). A survey of assessment and evaluation activities of school counselors. Professional School Counseling, 8(1), 24-30.

Schafer, W. D. (2004). Standard setting from the sponsor's perspective. Council of Chief State School Officers Conference on Large-Scale Assessment, Boston.

Schafer, W. D. & Moody, M. (2004). Designing accountability assessments for teaching. Practical Assessment, Research & Evaluation, 9(14). [Available online: http://pareonline.net/getvn.asp?v=9&n=14].

Schafer, W. D. (2004). Bonferroni technique. In M. S. Lewis-Beck, A. Bryman, & T. F. Liao (Eds.), The Sage encyclopedia of social science research methods: Vol. 1. Thousand Oaks, CA: Sage Publications.

Li, Y. H. & Schafer, W. D. (2003). The effect of item selection methods on the variability of CAT's ability estimates when item parameters are contaminated with measurement errors. National Council on Measurement in Education Convention, Chicago.

Li, Y. H. & Schafer, W. D. (2003). Accuracy of reading and mathematics ability estimates under the shadow-test constraint MCAT. American Educational Research Association Convention, Chicago.

Li, Y. H. & Schafer, W. D. (2003). Increasing the homogeneity of CAT's item-exposure rates by minimizing or maximizing varied target functions while assembling shadow tests. American Educational Research Association Convention, Chicago.

Schafer, W. D. & Moody, M. (2003). Designing accountability assessments for teaching. National Council on Measurement in Education Convention, Chicago.

Lissitz, R. W., Schafer, W. D., & Gagne, P. (2003). The effect of confounding writing style with writing content in constructed-response items. National Council on Measurement in Education Convention, Chicago.

Schafer, W. D. (2003). A state perspective on multiple measures in school accountability. Educational Measurement: Issues and Practice, 22(2), 27-31.

Schafer, W. D. & Moody, M. (2003). Designing accountability assessments for teaching. Council of Chief State School Officers Conference on Large-Scale Assessment, San Antonio.