Teacher Value-Added

Technical Details

How PVAAS Measures Growth

Each year, the academic performance of students is evaluated using a variety of assessments. LEAs/Districts, schools, and teachers receive results from these assessments, which provide important information about the achievement level of their students in tested grades and subjects or Keystone content areas. This information includes the number and percentage of students who performed in each of the state's academic performance ranges—Advanced, Proficient, Basic, and Below Basic. Achievement data from previous years is also included for comparison.

But because the achievement data is based on different groups of students each year, direct comparisons of data across years are often not meaningful or useful. For example, comparing the performance of last year's fifth graders to the performance of this year's fifth graders does not tell us how much academic growth either group of fifth graders made.

We offer a different set of measures. The growth of each group of students is measured as they move from one grade to the next or enter and complete a Keystone course. This approach yields growth measures that are fair, reliable, and useful to educators.

The process begins by generating measures of the average entering achievement level of the group of students served by each teacher, school, and LEA/district. Then a similar measure is generated for the group's average achievement level at the end of the subject and grade or course. To ensure that the measures are precise and reliable, PVAAS incorporates state assessment data across years, grades, and subjects for each student.

The difference between these two achievement measures is calculated and then compared to a standard expectation of growth called the growth standard. Growth color indicators are then assigned to indicate how strong the evidence is that the group of students exceeded, met, or fell short of the growth standard.

Simply put, the expectation is that regardless of their entering achievement level, students should not lose ground academically, relative to their peers in the same grade and subject or course in the reference group. This standard is reasonable and attainable regardless of the entering achievement of the students served.

With this approach, it's possible for a group of students to demonstrate high growth, even if all of them remain in the same state performance level from one year to the next. Each performance level includes a range of scores, so it's possible for a group's average achievement to rise or fall within a single state academic performance level.

There are no individual student measures of growth in PVAAS. This is because there are far fewer data points for an individual student, so the error around that measure would be larger, making it difficult to know whether a student made growth or not. However, growth measures based on groups of students include much more data, so the impact of measurement error associated with each individual test score is minimized. PVAAS uses many years, grades, and subjects in the analyses. Because so much data is included, the academic growth of the group can be measured with much greater precision and reliability than the growth of a single student. In PVAAS Value-Added reports, a group of 11 students is needed to calculate a measure of growth. In PVAAS Diagnostic reports, a group of five students in any achievement category is needed to calculate a growth measure.
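The precision argument above can be illustrated with a standard statistical fact: if each student's score carries some measurement error, the error around a group average shrinks with the square root of the group size. The sketch below is illustrative only; the per-student error value is hypothetical, not an actual PVAAS figure.

```python
import math

def standard_error_of_mean(individual_se: float, n: int) -> float:
    """Standard error of a group mean, assuming independent,
    equally sized measurement errors for each student's score."""
    return individual_se / math.sqrt(n)

# Hypothetical per-student measurement error of 20 scale score points:
se_one_student = standard_error_of_mean(20.0, 1)    # 20.0
se_group_of_11 = standard_error_of_mean(20.0, 11)   # roughly 6.0
```

This is why a group of 11 students supports a usable growth measure while a single student's two scores do not: the group average is several times more precise than any individual measurement.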

To calculate the growth measures, PVAAS uses two different analytic models, depending on the sequence of assessments administered. The Growth Standard Methodology is used when students are tested with the same assessment in the same subject in consecutive grades. The Predictive Methodology is used for subjects in which students are not tested in consecutive grades and for tests such as end-of-course assessments that students might take in different grades.

Predictive Methodology

Assessments analyzed with the Predictive Methodology are not given in the same subject with the same type of assessment in consecutive grades.

This model generates a predicted score for each student. These values are labeled Predicted Score because they reflect students' achievement before the current school year, when they entered the grade and subject or Keystone content area.

A predicted score is the score the student would be expected to earn on the selected assessment if the student made average, or typical, growth. To generate each student's predicted score, we build a robust statistical model of all students who took the selected assessment in the most recent year. The model includes the scores of all students in the reference group, along with their testing histories across years, grades, and subjects.

By considering how all other students performed on the assessment in relation to their testing histories, the model calculates a predicted score for each student based on their individual testing history.

To ensure precision in the predicted scores, for most subjects, a student must have at least three prior assessment scores. This does not mean three years of scores or three scores in the same subject, but simply three prior scores on state assessments across grades and subjects. There is one exception. To generate a predicted score for fourth-grade science, only two prior scores are required: third-grade math and ELA.
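The core idea of the prediction step can be sketched with an ordinary least-squares regression: fit the relationship between the reference group's prior scores and their exiting scores, then apply that relationship to each student's own testing history. This is a simplified illustration only; the actual PVAAS model is considerably more sophisticated, and all data and names below are hypothetical.

```python
import numpy as np

def fit_prediction_model(prior_scores: np.ndarray, exit_scores: np.ndarray) -> np.ndarray:
    """Least-squares fit of exit_score ~ intercept + prior scores,
    across all students in the reference group."""
    X = np.column_stack([np.ones(len(prior_scores)), prior_scores])
    coef, *_ = np.linalg.lstsq(X, exit_scores, rcond=None)
    return coef

def predict(coef: np.ndarray, student_priors: np.ndarray) -> float:
    """Predicted score for one student from their own testing history."""
    return float(coef[0] + student_priors @ coef[1:])

# Hypothetical reference group: 200 students, three prior scores each.
rng = np.random.default_rng(0)
priors = rng.normal(500, 50, size=(200, 3))
exits = priors.mean(axis=1) + rng.normal(0, 10, size=200)

coef = fit_prediction_model(priors, exits)
predicted = predict(coef, np.array([560.0, 540.0, 555.0]))
```

The requirement of at least three prior scores ensures the student's testing history gives the model enough information to locate them reliably within these fitted relationships.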

Let's consider an example. Zachary is a high-achieving student who has scored well on state assessments for the past few years, especially in math. To predict Zachary's score on the Keystone assessment, we:

  1. Determine the relationships between the testing histories of all students and their exiting achievement on this assessment in the same year.
  2. Use these relationships to determine what the expected score would be for Zachary, given his own personal testing history.

Based on Zachary's testing history, a score at the 83rd percentile would be a reasonable expectation for him.

In contrast, Adam is a low-achieving student who has struggled in math. His prior scores on state assessments are low. Just as with Zachary, we use the relationships between the testing histories of all students and their exiting achievement on the assessment statewide to determine a predicted score for Adam. Based upon Adam's own personal testing history, a score at the 26th percentile would be a reasonable expectation for him.

Once a predicted score has been generated for each student in the group, the predicted scores are averaged. Because this average predicted score is based on the students' prior test scores, it represents the entering achievement in this subject for the group of students.

Next, we compare the students' exiting achievement on the assessment to their entering achievement. If a group of students scores what they were predicted to score, on average, we can say that the group made average, or typical, growth. In other words, their growth was similar to the growth of students at the same achievement level across the reference group. This is the definition of meeting the growth standard in the predictive methodology.

If a group of students scores significantly higher than predicted, we can conclude that the group made more growth than their peers across the reference group. If a group scores significantly lower than predicted, the group did not grow as much as their peers.

The growth measure is a function of the difference between the students' predicted score and their exiting achievement. This value is expressed in scale score points and indicates how much higher or lower the group scored, on average, compared to what they were expected to score given their individual testing histories. For example, a growth measure of 9.3 indicates that, on average, this group of students scored 9.3 scale score points higher than expected. When generating growth measures for Teacher Value-Added reports, students are weighted for each teacher based on the proportion of instructional responsibility claimed during roster verification.
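The growth measure described above can be sketched as a weighted average of each student's difference between exiting and predicted scores, with weights reflecting the teacher's claimed share of instructional responsibility. The function and all values below are hypothetical illustrations, not the actual PVAAS computation.

```python
def growth_measure(exit_scores, predicted_scores, weights):
    """Weighted average of (exit - predicted), in scale score points.
    Weights represent each teacher's proportion of instructional
    responsibility claimed during roster verification."""
    total_weight = sum(weights)
    diffs = (e - p for e, p in zip(exit_scores, predicted_scores))
    return sum(w * d for w, d in zip(weights, diffs)) / total_weight

# Three students; the third is claimed at 50% responsibility:
gm = growth_measure([620, 580, 605], [610, 585, 600], [1.0, 1.0, 0.5])
# A positive value means the group, on average, scored above expectation.
```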

Calculating the Growth Index

The standard error is used in conjunction with the growth measure to calculate the growth index. Specifically, the growth index is the growth measure divided by its standard error. This calculation yields a robust measure of growth for the group of students that reflects both the growth and the amount of evidence. All index values are on the same scale and can be compared fairly across years, grades, and subjects throughout the reference group.

Each Growth Index is color-coded to indicate how strong the evidence is that students exceeded, met, or fell short of the growth standard. The colors should be interpreted as follows:

Growth Color Indicator | Growth Index Compared to the Growth Standard | Interpretation

Well Above | At least 2 standard errors above | Significant evidence that the teacher's group of students exceeded the growth standard

Above | Between 1 and 2 standard errors above | Moderate evidence that the teacher's group of students exceeded the growth standard

Meets | Between 1 standard error above and 1 standard error below | Evidence that the teacher's group of students met the growth standard

Below | Between 1 and 2 standard errors below | Moderate evidence that the teacher's group of students did not meet the growth standard

Well Below | More than 2 standard errors below | Significant evidence that the teacher's group of students did not meet the growth standard

When a growth index falls exactly on the boundary between two colors, the higher growth color indicator is assigned.
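The index calculation and color assignment can be sketched as follows: divide the growth measure by its standard error, then map the resulting index to a color band, assigning the higher indicator at exact boundaries. This is an illustrative sketch of the rules in the table above, not PVAAS source code.

```python
def growth_color(growth_measure: float, standard_error: float) -> str:
    """Map a growth index (measure / standard error) to its color band.
    Using >= at each cutoff assigns the higher indicator when the
    index falls exactly on a boundary."""
    index = growth_measure / standard_error
    if index >= 2.0:
        return "Well Above"   # significant evidence of exceeding the standard
    if index >= 1.0:
        return "Above"        # moderate evidence of exceeding the standard
    if index >= -1.0:
        return "Meets"        # evidence of meeting the standard
    if index >= -2.0:
        return "Below"        # moderate evidence of falling short
    return "Well Below"       # significant evidence of falling short

growth_color(9.3, 4.0)   # index 2.325 -> "Well Above"
growth_color(-4.0, 2.0)  # index exactly -2.0 -> "Below" (higher color wins)
```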

More Information

Self-Reflection Guide for PVAAS Teacher Reporting

Making Sense of NCEs and Standard Errors