- Using PVAAS for a Purpose
- Key Concepts
- PEERS
- About PEERS
- Understanding the PEERS pages
- Evaluation List
- Evaluation Summary
- Evaluation Forms
- Add Educator
- Add Evaluator
- Manage Access
- Add a school-level Educator to PEERS
- Add a district-level Educator to PEERS
- Add the Evaluator permission to a user's account
- Remove the Evaluator permission from a district user's account
- Add the Evaluator or Administrative Evaluator permission to a district user's account
- Remove the Administrative Evaluator permission from a district user's account
- Remove an Educator from PEERS
- Restore a removed Educator
- Assign an Educator to a district-level Evaluator
- Assign an Educator to an Evaluator
- Unassign an Educator from an Evaluator
- Assign an Educator to a school
- Unassign an Educator from a school
- Link a PVAAS account to an Educator
- Working with Evaluations
- Switch between Educator and Evaluator
- View an evaluation
- Use filters to display only certain evaluations
- Print the Summary section of an evaluation
- Understanding evaluation statuses
- Determine whether other evaluators have access to an evaluation
- Lock or unlock an evaluation
- Save your changes
- Mark an evaluation as Ready for Conference
- Release one or more evaluations
- Download data from released evaluations to XLSX
- Make changes to an evaluation marked Ready for Conference
- Reports
- School Reports
- LEA/District Reports
- Teacher Reports
- Comparison Reports
- Human Capital Retention Dashboard
- Roster Verification (RV)
- Getting Started
- All Actions by Role
- All Actions for Teachers
- All Actions for School Administrators or Roster Approvers
- Manage teachers' access to RV
- Assign other school users the Roster Approver permission
- View a teacher's rosters
- Take control of a teacher's rosters
- Add and remove rosters for a teacher
- Copy a roster
- Apply a percentage of instructional time to every student on a roster
- Batch print overclaimed and underclaimed students
- Remove students from a roster
- Add a student to a roster
- Return a teacher's rosters to the teacher
- Approve a teacher's rosters
- Submit your school's rosters to the district
- All Actions for district admin or district roster approvers
- Assign other LEA/district users the Roster Approver permission
- Take control of a school's rosters
- View a teacher's rosters
- View the history of a teacher's rosters
- Edit a teacher's rosters
- Add and remove rosters for a teacher
- Copy a roster
- Apply a percentage of instructional time to every student on a roster
- Batch print overclaimed and underclaimed students
- Return a school's rosters to the school
- Approve rosters that you have verified
- Submit your district's rosters
- Understanding the RV Pages
- Viewing the History of Actions on Rosters
- Additional Resources
- Admin Help
- General Help
Diagnostic
Technical Details
Placing Students into Achievement Groups
Students are placed into either three or five groups, depending on the graph, based on their achievement.
When the graph shows five groups, group 1 includes students whose achievement falls into the lowest 20% of the reference group distribution, group 2 includes students whose achievement falls between the 20th and 40th percentiles, and so on.
When the graph shows three groups, group 1 includes students whose achievement falls into the lowest third of the reference group distribution, group 2 includes students whose achievement falls in the middle third, and group 3 includes students whose achievement is in the top third.
For all assessments, more than a single test score is used to place students into groups. Using more data minimizes the effect of measurement error and helps ensure that students are placed into achievement groups appropriately.
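The percentile-band placement described above can be sketched as follows. This is an illustrative simplification, not the actual PVAAS implementation; the function name, the toy reference distribution, and the single-score input are all assumptions made for the example (as noted above, the real reports combine more than one score per student).

```python
# Illustrative sketch: place students into achievement groups based on
# where their score falls in a reference group distribution.
# Not the actual PVAAS implementation.

def assign_groups(scores, reference, n_groups=5):
    """Assign each score to group 1..n_groups based on which equal-width
    percentile band of the reference distribution it falls into
    (n_groups=5 gives quintiles; n_groups=3 gives thirds)."""
    ref = sorted(reference)
    groups = []
    for s in scores:
        # Percentile rank: fraction of reference scores below this score
        rank = sum(1 for r in ref if r < s) / len(ref)
        # With 5 groups: [0, 0.2) -> 1, [0.2, 0.4) -> 2, ..., [0.8, 1.0] -> 5
        g = min(int(rank * n_groups) + 1, n_groups)
        groups.append(g)
    return groups

reference = list(range(1, 101))  # toy reference distribution
students = [5, 25, 50, 75, 99]
print(assign_groups(students, reference, n_groups=5))  # -> [1, 2, 3, 4, 5]
```

With `n_groups=3`, the same scores collapse into the lowest, middle, and top thirds, matching the three-group graphs.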
It's important to keep in mind that the following students are not included in these reports:
- Students who have been identified on the state assessment booklet as EL 1st Year
- Students who are administered the Pennsylvania Alternate Assessment (PASA)
The model used to analyze the selected assessment determines how we define achievement. See assessments analyzed with the growth standard methodology and assessments analyzed with the predictive methodology.
| Model | How Achievement is Defined |
| --- | --- |
| Growth Standard Methodology | The average of a student's two most recent scores in the selected subject. For example, in a report for sixth-grade math, students are placed into achievement groups based on the average of their fifth-grade and sixth-grade math scores. If a student's fifth-grade math score is missing, that student is not placed into an achievement group on this report. |
| Predictive Methodology | Where the student's predicted score falls in the reference group distribution for that grade and subject or Keystone content area. Students who lack sufficient data do not have predicted scores and therefore are not included in achievement groups on this report. For eighth-grade science and for Keystone content areas, students must have three prior assessment scores across grades and subjects to have predicted scores. Fourth-grade science uses only the two prior scores from third-grade math and ELA. |
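Under the growth standard methodology, the achievement value used for grouping is simply the average of the student's two most recent scores in the subject, and a student missing either score is left ungrouped. A minimal sketch of that rule, with a hypothetical function name and illustrative scores:

```python
# Illustrative sketch of the growth standard achievement definition:
# the average of a student's two most recent scores in the subject.
# Not the actual PVAAS implementation.

def growth_standard_achievement(prior_score, current_score):
    """Return the average of the two most recent scores, or None if
    either score is missing, since such students are not placed into
    an achievement group on the report."""
    if prior_score is None or current_score is None:
        return None
    return (prior_score + current_score) / 2

print(growth_standard_achievement(52.0, 58.0))  # -> 55.0
print(growth_standard_achievement(None, 58.0))  # -> None (missing prior score)
```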
- Students Not Used in Analysis are not included in the Custom Diagnostic report and are not used in the Value-Added analysis. This can happen for several reasons: for example, the students don't have sufficient past test scores, or their current-year scores had to be excluded for business reasons.
- Students Not Used in Report are not included in the Diagnostic report but are used in the Value-Added analysis. This happens when students have sufficient past test scores for the analysis but either lack scores from the previous year or had their previous year's scores excluded for business reasons.
Generating Growth Measures
Once students are placed into groups, a simple growth measure is generated for each group. A group must have at least five students for a growth measure to be generated.
For all assessments, a growth measure of 0.0 represents meeting the growth standard.
It's important to remember that these simple growth measures do not come from the robust analytic models that generate the growth measures on the value-added reports. As a result, you'll want to exercise some caution when interpreting the data. Specifically, focus on the relative pattern of growth across groups rather than relying too heavily on any one value. Because the growth measures are estimates, consider their associated standard errors as you interpret the values.
The model used to analyze the selected assessment determines how we generate growth measures. See assessments analyzed with the growth standard methodology and assessments analyzed with the predictive methodology.
| Model | How Growth Measures are Generated |
| --- | --- |
| Growth Standard Methodology | The growth measure is the difference between the group's most recent average score in this subject and its prior average score in the same subject. The growth measures for these assessments are expressed in reference group NCEs. Differences in student counts within each year can cause slight shifts in the NCEs for prior years. For more information, see Why Students' NCEs Might Change. |
| Predictive Methodology | The growth measure is the difference between the group's average score and its average predicted score in the selected subject or course. The growth measures for these assessments are expressed in scale score points. |
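Both rows reduce to the same computation: the difference between the group's current average and a baseline average (the prior-year average under the growth standard methodology, or the average predicted score under the predictive methodology), with no measure generated for groups of fewer than five students. A hedged sketch of that rule, with an illustrative function name and toy numbers:

```python
# Illustrative sketch of the simple group growth measure described above.
# Not the actual PVAAS implementation.

def group_growth_measure(current_scores, baseline_scores, min_n=5):
    """Difference between the group's current average score and its
    baseline average (prior-year mean, or mean predicted score,
    depending on the methodology). Returns None when the group has
    fewer than min_n students, since no growth measure is generated
    for such groups; a result of 0.0 represents meeting the growth
    standard."""
    if len(current_scores) < min_n:
        return None
    current_avg = sum(current_scores) / len(current_scores)
    baseline_avg = sum(baseline_scores) / len(baseline_scores)
    return current_avg - baseline_avg

# A group of five students: current average 55, baseline average 52
print(group_growth_measure([50, 55, 60, 52, 58],
                           [50, 52, 54, 51, 53]))  # -> 3.0

# A group of four students: too small, so no measure is generated
print(group_growth_measure([50, 55, 60, 52], [50, 52, 54, 51]))  # -> None
```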