Misconception: The PVAAS methodology is too complex; a simpler approach to measuring LEA/district, school, and teacher effectiveness would provide better information to educators.

Although the concept of measuring growth is simple, providing precise and reliable growth measures requires statistical rigor: several important analytical problems must be addressed when analyzing longitudinal student data. This is critically important in any reporting used for educator evaluations.

In short, a simple gain calculation does not provide a reliable estimate of growth for students linked to an educator. Value-added estimates based on simple calculations are often correlated with the characteristics of the students served (prior achievement, demographics, or socioeconomic status) rather than with the educator's effectiveness with those students. Such models often unfairly disadvantage educators serving students with a history of lower achievement and unfairly advantage educators serving students with a history of higher achievement.
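
To see why, consider a brief simulation (a hypothetical sketch in Python, not part of PVAAS): every student below has identical true growth, yet the simple gain scores come out correlated with observed prior achievement purely because of measurement error.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Every student has the same true growth (10 points), so a fair growth
    # measure should be uncorrelated with prior achievement.
    true_ability = rng.normal(500, 50, n)
    true_growth = 10.0

    # Observed scores include measurement error, as all assessments do.
    pre = true_ability + rng.normal(0, 30, n)
    post = true_ability + true_growth + rng.normal(0, 30, n)

    gain = post - pre

    # The simple gain is spuriously related to the observed pretest: students
    # who happened to score low appear to "grow" more, and vice versa.
    print(np.corrcoef(pre, gain)[0, 1])                    # noticeably negative
    print(gain[pre < 470].mean(), gain[pre > 530].mean())  # low scorers "gain" more

Here the distortion happens to favor educators whose students scored low on the pretest; depending on a test's scale and its floor or ceiling effects, the distortion can just as easily run the other way. Either way, the simple gain reflects who the students are rather than what the educator contributed.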

However, it is not necessary to be a statistician to understand the educational implications of PVAAS reporting. With the PVAAS web application, educators have a wealth of reports that go beyond a single estimate of student growth and assist in identifying accelerants and impediments to student learning.

PVAAS in Theory

Any student growth or value-added model must address the following considerations in a statistically robust and reliable approach:

  • How to accommodate team teaching or other scenarios where more than one instructor has responsibility for a student's learning.
  • How to dampen the effects of measurement error, which is inherent in all student assessments because the tests themselves are estimates of student knowledge, not exact measurements (see the sketch after this list).
  • How to accommodate students with missing test scores without introducing major biases by eliminating the data for students with missing scores, using overly simplistic imputation procedures, or using very few test scores for each student.
  • How to use all the longitudinal data for each student when all of the historical data are not on the same scale.
  • How to use historical data when testing regimes have changed over time, providing educational policymakers flexibility.
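
As an illustration of the measurement-error point above, the following sketch (invented values; not PVAAS code) shows why drawing on many scores per student helps: the uncertainty in a student's estimated achievement shrinks as more noisy scores are combined.

    import numpy as np

    rng = np.random.default_rng(1)
    true_score = 500.0
    sem = 30.0       # hypothetical standard error of measurement for one test
    trials = 20_000

    # Estimate a student's achievement by averaging k noisy scores.
    for k in (1, 2, 4, 8):
        estimates = rng.normal(true_score, sem, (trials, k)).mean(axis=1)
        # The empirical spread tracks the theoretical sem / sqrt(k).
        print(k, round(estimates.std(), 2), round(sem / np.sqrt(k), 2))

Averaging is a deliberate oversimplification here; PVAAS uses longitudinal mixed models rather than simple averages, but the same principle, that more data per student means less influence from any single noisy score, underlies both.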

PVAAS modeling approaches address all of these concerns to provide reliable estimates of student growth, and more details are provided below.

  • PVAAS value-added measures are based on multiple years of performance data across assessed content areas (rather than one prior test score) to determine students' academic growth in LEAs/districts, schools, and classrooms. The inclusion of multiple years of data from multiple subjects for each individual student helps protect an educational entity from misclassification in the value-added analysis. More specifically, using so much data at the individual student level can dampen the effect of measurement error, which is inherent in any test score and in all value-added or growth models.
  • PVAAS value-added measures are sophisticated and robust enough to include students with missing data. Since students with a history of lower achievement are more likely to miss tests than students with a history of higher achievement, the exclusion of students with missing test scores can introduce selection bias, which would disproportionately affect educators serving those students.
  • PVAAS value-added measures provide estimates of whether, on average, the students fell below, met, or exceeded the established expectation for improvement in a particular grade/subject. Assessing the impact at the group level, rather than on individual students, is a more statistically reliable approach given the issues with measurement error.
  • PVAAS value-added measures account for the amount of evidence (standard error) when determining whether an educational entity is decidedly above or below the growth standard as defined by the model. Any model based on assessment data relies on estimates of student learning, so it is important that any value-added measure account for the amount of evidence behind the growth measure (see the sketch after this list).
  • PVAAS value-added models are sophisticated enough to accommodate different tests or changes in testing regimes. This provides educators with additional flexibility: they can use more tests, even if they are on differing scales, and they can continue to receive reporting when the tests change.
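
To make the role of the standard error concrete, here is a minimal sketch of such a decision rule; the two-standard-error threshold and the function name are illustrative assumptions, not the actual PVAAS business rules.

    def classify(growth_estimate, standard_error, threshold=2.0):
        """Classify a growth estimate against a growth standard of zero.

        Hypothetical rule: flag an entity as above or below expectation only
        when its estimate is at least `threshold` standard errors from zero.
        """
        index = growth_estimate / standard_error
        if index >= threshold:
            return "evidence of exceeding the growth standard"
        if index <= -threshold:
            return "evidence of falling below the growth standard"
        return "no significant evidence of a difference"

    # Two entities with the same point estimate but different amounts of
    # evidence: only the one with the smaller standard error is flagged.
    print(classify(3.0, 2.5))   # no significant evidence of a difference
    print(classify(3.0, 1.0))   # evidence of exceeding the growth standard

This is why a small school with few tested students can have the same point estimate as a large school yet receive a more cautious classification.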

The statistical models underlying PVAAS have been validated and vetted by a variety of value-added experts. The references below include recent studies by statisticians from the RAND Corporation, a nonprofit research organization:

  • On the choice of a complex value-added model: McCaffrey, Daniel F. and J.R. Lockwood. 2008. "Value-Added Models: Analytic Issues." Prepared for the National Research Council and the National Academy of Education, Board on Testing and Accountability Workshop on Value-Added Modeling, Nov. 13-14, 2008, Washington, DC.
  • On the advantages of the longitudinal, mixed model approach: Lockwood, J.R. and Daniel F. McCaffrey. 2007. "Controlling for Individual Heterogeneity in Longitudinal Models, with Applications to Student Achievement." Electronic Journal of Statistics 1: 223-52.
  • On the insufficiency of simple value-added models: McCaffrey, Daniel F., B. Han, and J.R. Lockwood. 2008. "From Data to Bonuses: A Case Study of the Issues Related to Awarding Teachers Pay on the Basis of the Students' Progress." Presented at Performance Incentives: Their Growing Impact on American K-12 Education, Feb. 28-29, 2008, National Center on Performance Incentives at Vanderbilt University.

It is not just the models that are sophisticated and robust; from the moment that PSSA and Keystone assessment data arrive at SAS, they are subjected to a rigorous review to verify that these data are appropriate for value-added analyses. PVAAS uses a sophisticated process to track students over time, accommodating many common data problems at the individual student level, such as missing test scores, duplicate scores, or changing student data.
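
The specific checks SAS applies to PSSA and Keystone data are not spelled out here, but a simplified sketch of this kind of student-level cleanup could look like the following; the field names and rules are hypothetical.

    import pandas as pd

    scores = pd.DataFrame({
        "student_id": [101, 101, 102, 103, 103],
        "year":       [2023, 2023, 2023, 2023, 2023],
        "subject":    ["Math", "Math", "Math", "Math", "Math"],
        "score":      [1450, 1450, None, 1380, 1395],
    })

    # Drop exact duplicate records for the same student, year, and subject.
    scores = scores.drop_duplicates()

    # Flag conflicting records (same student/year/subject, different scores)
    # for review rather than silently keeping one of them.
    key = ["student_id", "year", "subject"]
    conflicts = scores[scores.duplicated(subset=key, keep=False)]
    print(conflicts)

    # Missing scores stay in the file; the model itself, not the cleanup
    # step, determines how students with incomplete histories contribute.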

PVAAS in Practice

Although the statistical approach is robust and complex, the reports in the PVAAS web application are easy to understand. Provided by subject, grade, and year, the value-added estimates are color-coded for quick understanding:

  • Dark blue or light blue indicates that students in an LEA/district, school, or teacher's classroom made more than the expected growth.
  • Green indicates that students made about the expected growth.
  • Yellow or red indicates that students made less than the expected growth.

Educators and administrators can identify their strengths and opportunities for improvement at a glance. The reporting is wide-ranging: authorized users can also access Diagnostic reports for students by achievement level, individual student-level projections to achievement, and other reports. Educators have a comprehensive view of past practices as well as tools for current and future students. Thus, educators benefit from the rigor of the PVAAS models by gaining insight in an accessible, non-technical format. PVAAS Value-Added reports are customized for Pennsylvania reporting and preferences.
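
For illustration only, a reporting layer might map a growth measure to these color bands along the following lines; the cut points and the index definition are assumptions for this sketch, not the published PVAAS rules.

    def color_band(growth_index):
        """Map a growth index (estimate divided by its standard error)
        to a display color. Cut points are hypothetical."""
        if growth_index >= 2:
            return "dark blue"    # well above the expected growth
        if growth_index >= 1:
            return "light blue"   # above the expected growth
        if growth_index > -1:
            return "green"        # about the expected growth
        if growth_index > -2:
            return "yellow"       # below the expected growth
        return "red"              # well below the expected growth

    for gi in (2.4, 1.3, 0.0, -1.5, -2.8):
        print(gi, "->", color_band(gi))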

SAMPLE PVAAS SCHOOL VALUE-ADDED REPORT