Misconception: PVAAS reporting is not reliable or valid since it is based only on the PA state assessments.

Educators might be concerned that value-added reporting relies on standardized tests, which have limitations of their own. Perhaps they feel that the test does not correlate well with the curriculum or that there isn't sufficient stretch to measure the progress of very low- or high-achieving students. However, PVAAS estimates use a sophisticated modeling approach that addresses many of the concerns about using standardized tests, and SAS reviews the test scores annually to ensure that they are appropriate for use in PVAAS value-added reporting.

PVAAS in Theory

Student test scores are the basic ingredient of all PVAAS analyses. PVAAS is not involved in and has no control over test construction. Pennsylvania's assessment system provides a universal assessment of Pennsylvania standards, and the assessments are aligned to those standards well enough to support longitudinal modeling and prediction. Regardless, before any tests are used in PVAAS modeling, rigorous data processing and analyses verify that they meet the following three criteria:

  • They demonstrate sufficient stretch so that both low-achieving and high-achieving students can show growth.
  • They are aligned to state curriculum standards.
  • The scales are sufficiently reliable from year to year.

To date, Pennsylvania's state assessments have met these criteria. More specifically, PVAAS analyses verify that there are enough different scaled scores at the top and bottom of the scales to differentiate student achievement. This processing also analyzes the percentage of students scoring at the top and bottom scores to ensure there are no ceilings or floors. After all analyses are completed and PVAAS growth measures are available, SAS verifies that LEAs/districts, schools, and teachers serving both high- and low-achieving students can show both high and low growth. This process is repeated every year.
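The ceiling/floor check described above can be illustrated with a short sketch. This is not SAS's actual procedure; the function name, the 5% threshold, and the sample scores are hypothetical, chosen only to show the idea of flagging a test scale where too many students pile up at the extreme scaled scores.

```python
def check_stretch(scaled_scores, threshold=0.05):
    """Flag a possible ceiling or floor effect when the share of students
    at the highest or lowest scaled score exceeds a threshold.
    Illustrative sketch only; the threshold is a hypothetical example."""
    n = len(scaled_scores)
    lowest, highest = min(scaled_scores), max(scaled_scores)
    floor_pct = sum(1 for s in scaled_scores if s == lowest) / n
    ceiling_pct = sum(1 for s in scaled_scores if s == highest) / n
    return {
        "floor_pct": floor_pct,
        "ceiling_pct": ceiling_pct,
        "possible_floor": floor_pct > threshold,
        "possible_ceiling": ceiling_pct > threshold,
    }

# Hypothetical scaled scores for a small group of students.
result = check_stretch([1200, 1250, 1250, 1300, 1350,
                        1400, 1400, 1400, 1450, 1500])
```

A scale with sufficient stretch would show only a small fraction of students at either extreme, leaving room for both very low- and very high-achieving students to demonstrate growth.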

Another common concern of educators is that they might be held accountable for how students did on a single test on a given day. PVAAS understands this concern and agrees that any single score just represents a snapshot of student performance at a particular point in time. However, the use of many test scores across subjects, grades, and years in PVAAS can provide a more complete picture of student learning and how students' achievement has changed over time.

PVAAS in Practice

Actual data may be the most direct way to show that there is sufficient stretch in Pennsylvania's state assessments to measure the growth of low-, middle-, and high-achieving students. The figure below plots the average achievement of each teacher's students in Pennsylvania against the growth index (the value-added growth measure divided by its standard error) for PSSA Mathematics in grade 5 in 2019. Each dot represents one teacher, and verified rosters were used where available. The figure demonstrates that teachers serving both high- and low-achieving students can show both high and low growth, as measured by PVAAS. LEA/district and school value-added plots are similar to the teacher plot shown below.
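The growth index defined above is simple arithmetic: the value-added growth measure divided by its standard error. A minimal sketch, with hypothetical numbers (the values 2.4 and 1.2 are illustrative, not from any actual PVAAS report):

```python
def growth_index(growth_measure, standard_error):
    """Growth index = value-added growth measure / its standard error,
    as described in the text. Sketch only; inputs are hypothetical."""
    if standard_error <= 0:
        raise ValueError("standard error must be positive")
    return growth_measure / standard_error

# A growth measure of 2.4 with a standard error of 1.2 gives an index of 2.0.
print(growth_index(2.4, 1.2))  # 2.0
```

Dividing by the standard error puts growth measures from groups of different sizes on a comparable footing, which is why the figure plots the index rather than the raw growth measure.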

PENNSYLVANIA GROWTH INDEX VERSUS AVERAGE ACHIEVEMENT BY TEACHER