- Using PVAAS for a Purpose
- Key Concepts
- PEERS
- About PEERS
- Understanding the PEERS pages
- Evaluation List
- Evaluation Summary
- Evaluation Forms
- Add Educator
- Add Evaluator
- Manage Access
- Add a school-level Educator to PEERS
- Add a district-level Educator to PEERS
- Add the Evaluator permission to a user's account
- Remove the Evaluator permission from a district user's account
- Add the Evaluator or Administrative Evaluator permission to a district user's account
- Remove the Administrative Evaluator permission from a district user's account
- Remove an Educator from PEERS
- Restore a removed Educator
- Assign an Educator to a district-level Evaluator
- Assign an Educator to an Evaluator
- Unassign an Educator from an Evaluator
- Assign an Educator to a school
- Unassign an Educator from a school
- Link a PVAAS account to an Educator
- Working with Evaluations
- Switch between Educator and Evaluator
- View an evaluation
- Use filters to display only certain evaluations
- Print the Summary section of an evaluation
- Understanding evaluation statuses
- Determine whether other evaluators have access to an evaluation
- Lock or unlock an evaluation
- Save your changes
- Mark an evaluation as Ready for Conference
- Release one or more evaluations
- Download data from released evaluations to XLSX
- Make changes to an evaluation marked Ready for Conference
- Reports
- School Reports
- LEA/District Reports
- Teacher Reports
- Student Reports
- Comparison Reports
- Human Capital Retention Dashboard
- Roster Verification (RV)
- Getting Started
- All Actions by Role
- All Actions for Teachers
- All Actions for School Administrators or Roster Approvers
- Manage teachers' access to RV
- Assign other school users the Roster Approver permission
- View a teacher's rosters
- Take control of a teacher's rosters
- Add and remove rosters for a teacher
- Copy a roster
- Apply a percentage of instructional time to every student on a roster
- Batch print overclaimed and underclaimed students
- Remove students from a roster
- Add a student to a roster
- Return a teacher's rosters to the teacher
- Approve a teacher's rosters
- Submit your school's rosters to the district
- All Actions for district admin or district roster approvers
- Assign other LEA/district users the Roster Approver permission
- Take control of a school's rosters
- View a teacher's rosters
- View the history of a teacher's rosters
- Edit a teacher's rosters
- Add and remove rosters for a teacher
- Copy a roster
- Apply a percentage of instructional time to every student on a roster
- Batch print overclaimed and underclaimed students
- Return a school's rosters to the school
- Approve rosters that you have verified
- Submit your district's rosters
- Understanding the RV Pages
- Viewing the History of Actions on Rosters
- Additional Resources
- Admin Help
- General Help
Growth Measures and Standard Errors
Keep in mind that all growth measures reported in PVAAS are estimates. They are reliable estimates generated from a large amount of data using robust, research-based statistical modeling. But they are estimates, nonetheless.
Because growth measures are estimates, PVAAS reports display the standard error associated with each growth measure. Error is expected with any measurement. The standard error is a mathematical expression of the uncertainty around an estimated value. The standard error on PVAAS reports can be used to establish a confidence band around the growth measure. This confidence band helps us determine whether an increase or decrease in student achievement is statistically significant. In other words, it indicates how strong the evidence is that the group's achievement level increased, decreased, or remained about the same.
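As an illustrative sketch only (not PVAAS's actual reporting rules), a confidence band can be formed by extending the growth measure by a multiple of its standard error; if the band excludes zero, the evidence that the group grew more or less than expected is strong. The multiplier of 2 and the interpretation labels below are assumptions for illustration:

```python
def confidence_band(growth_measure, standard_error, multiplier=2.0):
    """Return a (low, high) confidence band around a growth estimate.

    A multiplier of 2 approximates a 95% band under normality;
    PVAAS's actual reporting conventions may differ.
    """
    return (growth_measure - multiplier * standard_error,
            growth_measure + multiplier * standard_error)

def interpret(growth_measure, standard_error):
    """Classify the evidence based on whether the band excludes 0."""
    low, high = confidence_band(growth_measure, standard_error)
    if low > 0:
        return "evidence of growth above expectations"
    if high < 0:
        return "evidence of growth below expectations"
    return "growth not distinguishable from expectations"

# A growth measure of 3.0 with a standard error of 1.0 gives a band of
# (1.0, 5.0), which excludes 0, so the increase is statistically significant.
print(interpret(3.0, 1.0))
# The same measure with a standard error of 2.0 gives (-1.0, 7.0),
# which includes 0, so the evidence is inconclusive.
print(interpret(3.0, 2.0))
```

Note how the same growth measure leads to different conclusions depending on its standard error; this is why the two values are always reported together.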
More Information about Standard Error
The standard error is specific to each growth measure because it expresses the certainty around that one estimate. The size of the standard error will vary depending on the quantity and quality of the data that was used to generate the growth measure. A smaller standard error indicates more certainty, or confidence, in the growth measure. A number of factors affect the size of the standard error, including:
- The number of students included in the analyses
- The number of assessment scores each student has, across grades and subjects
- Which specific scores are missing from the students' testing histories
To understand why these factors affect the size of the standard error, let's consider a few examples.
Imagine two groups of students. Each group could represent the students in a particular teacher's class, all the students in a grade and subject at a particular school, or even all of the students in an LEA/district. Both groups have the same growth measure in math. The growth measure for the first group is based on 11 students, while the growth measure for the second group is based on 60 students. Because the second group's measure is based on many more students, and therefore much more data, we would be more confident in that estimate of growth. As a result, the standard error would be smaller for the second group.
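PVAAS's value-added models are far more sophisticated than a simple mean, but the principle in this example can be illustrated with the standard error of a mean, which shrinks in proportion to the square root of the number of students. The standard deviation of 10 scale-score points below is a hypothetical value chosen for illustration:

```python
import math

def standard_error_of_mean(std_dev, n):
    """Standard error of a simple mean: std_dev / sqrt(n).

    This is only a simplified stand-in for PVAAS's modeling, but the
    same principle applies: more students means a smaller standard error.
    """
    return std_dev / math.sqrt(n)

# Same hypothetical spread of scores, different group sizes
# from the example above (11 students vs. 60 students).
se_small_group = standard_error_of_mean(10, 11)
se_large_group = standard_error_of_mean(10, 60)
print(round(se_small_group, 2))  # larger standard error for 11 students
print(round(se_large_group, 2))  # smaller standard error for 60 students
```

With identical growth measures, the 60-student group's estimate carries less uncertainty, so its confidence band is narrower.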
Now let's consider two other groups of students with the same growth measure in math. This time both groups are equal in size, each having 60 students. However, in the first group all students have complete testing histories for the past five years. In contrast, quite a few students in the second group are missing prior test scores. The missing test scores create more uncertainty, so the growth measure for that group would have a larger standard error.
In our final example, there are again two groups of students, and again, both groups have the same growth measure in math. The groups are equal in size, with each group having 60 students. Both groups of students have five years of test scores, and in both groups there are five students who are missing one prior score. However, in the first group, the five students with missing scores are missing the previous year's math score. In the second group, the five students with missing scores are missing the math score from three years ago. Because we are generating a growth measure for math, missing math scores from the prior year create more uncertainty than missing math scores from several years ago. As a result, the first group, whose students are missing the previous year's math scores, would have a larger standard error.
The standard error is a critical part of the reporting because it helps to ensure that LEAs/districts, schools, and teachers are not disadvantaged in PVAAS because they serve a small number of students or because they serve students with incomplete testing histories.