The PGES Student Growth Component

This article was submitted by Cora Wigger, a graduate student in public policy at Vanderbilt University’s Peabody College of Education.

Examining TPGES’s Student Growth Component

The 2014-2015 school year was the first in which all schools in the state fully implemented the Kentucky Professional Growth and Effectiveness System (PGES), and beginning in 2015-2016 all schools and districts will be required by the state to use PGES evaluation results in decisions about professional development and retention. Now is a particularly critical time for the state to evaluate both the structure and the rollout of PGES so that any final changes can be made before stakes are officially attached to the system.

PGES Overview: http://kyedreport.com/?p=148

If you follow teacher evaluation systems in other states or in the national conversation, you’ve probably come across the terms “Value-Added Models” (VAMs), calculations of student test scores that attribute to the teacher any student growth beyond what would have been predicted, and “Student Learning Objectives” (SLOs), individualized learning goals developed and assessed for each of a teacher’s students. Kentucky uses both of these in the Student Growth portion of its teacher evaluation system (TPGES) but refrains from using the often politicized terms. Smart, since not all VAMs and SLOs are created equal.

Kentucky’s Student Growth Goals, a take on SLOs, are a strong pedagogical tool, and Kentucky’s push to use this strategy statewide is ambitious and forward-thinking, since such goals are not easy to implement and monitor. Available research generally supports the idea that SLOs have a positive effect on student learning, and the individualized nature of goal development promotes teacher buy-in for the evaluation system. However, there is little evidence that SLOs are a valid or reliable tool for measuring teacher effectiveness for evaluation purposes (see Morgan & Lacireno-Paquet, 2013). While the process of creating and using these student growth goals may be beneficial for both teacher practice and student learning, relying on them in TPGES to determine a teacher effectiveness score, and subsequently a teacher’s development and retention needs, may not turn out to be a responsible or accurate measurement approach.

The second component of a teacher’s student growth score uses a student’s change in test scores relative to that student’s academic peers to determine the teacher’s contribution to the student’s academic growth. Kentucky’s approach here maximizes teacher buy-in by limiting the use of test score data to teachers who actually taught the students being tested in a given year (as compared to some systems that hold all teachers in a school accountable for students’ test scores, even those teaching untested subjects). A Student Growth Percentile (SGP) is calculated for each tested student in at least their second consecutive year of testing: the student’s current-year score is compared to the scores of students statewide who earned the same score on the previous year’s test, and the SGP is the percentile the student falls into within that peer group. A teacher’s Median Student Growth Percentile (MSGP) is the median of all of that teacher’s students’ SGPs. As complicated as that explanation may be, Kentucky’s model is extremely simple compared to other Value-Added Models, some of which take student background or teacher experience into account. And by basing the final score on percentiles instead of raw scores, the TPGES model guarantees that there will always be students with low SGPs and students with high SGPs, even if every student does better (or worse) than would have been predicted.
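To make the mechanics concrete, here is a minimal sketch of an SGP/MSGP calculation of the kind described above. The data structure, function names, and the use of simple percentile ranks within prior-score groups are illustrative assumptions, not Kentucky’s actual implementation.

```python
from collections import defaultdict
from statistics import median

def student_growth_percentiles(records):
    """Assign each student a percentile rank of their current score among
    all students statewide with the same prior-year score.

    `records` is a list of dicts with keys 'student', 'teacher',
    'prior_score', 'current_score' (an illustrative structure only).
    """
    # Group students by prior-year score so each student is compared
    # only to academic peers who started from the same place.
    peer_groups = defaultdict(list)
    for r in records:
        peer_groups[r['prior_score']].append(r['current_score'])

    sgps = {}
    for r in records:
        peers = peer_groups[r['prior_score']]
        # Percentile rank: share of peers scoring below this student.
        below = sum(1 for score in peers if score < r['current_score'])
        sgps[r['student']] = 100 * below / len(peers)
    return sgps

def median_sgp_by_teacher(records, sgps):
    """A teacher's MSGP is the median SGP of the students they taught."""
    by_teacher = defaultdict(list)
    for r in records:
        by_teacher[r['teacher']].append(sgps[r['student']])
    return {t: median(vals) for t, vals in by_teacher.items()}

# Tiny example with fabricated scores, not real data:
records = [
    {'student': 'A', 'teacher': 'T1', 'prior_score': 200, 'current_score': 215},
    {'student': 'B', 'teacher': 'T1', 'prior_score': 200, 'current_score': 230},
    {'student': 'C', 'teacher': 'T2', 'prior_score': 200, 'current_score': 225},
    {'student': 'D', 'teacher': 'T2', 'prior_score': 210, 'current_score': 212},
    {'student': 'E', 'teacher': 'T1', 'prior_score': 210, 'current_score': 240},
]
sgps = student_growth_percentiles(records)
print(median_sgp_by_teacher(records, sgps))
```

Even in this toy example the zero-sum nature of percentiles shows up: within each prior-score peer group, someone must land near the bottom and someone near the top, no matter how much every student actually grew.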

The SGP approach also limits the years and subjects for which an assessment-based growth score can be calculated, because it requires consecutive years of test data in the same subject; this greatly reduces the number of teachers who can receive a score, even among those who teach tested subjects. Scores are also averaged over three years when available, which is statistically sound, but it means that for new teachers each single year carries more weight than it does for more experienced teachers. Overall, the SGP is a potentially invalid and unreliable statistical tool that leaves much of the available test data unused in determining a teacher’s contribution to student growth.

However, it may not matter much. Kentucky allows districts to determine the weight that MSGP scores receive, so this score can theoretically make up as little as 5% of a teacher’s overall student growth score. So while the MSGP may not be as statistically sound or reliable as would be ideal, districts can all but leave it out of teachers’ final effectiveness scores. But doing so places all of the weight (for teachers of untested subjects) or nearly all of the weight (for teachers of tested subjects in districts that place little importance on the MSGP) on student growth goals, which, as argued above, may be a flawed source for teacher evaluation.
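To illustrate just how little the MSGP can matter under district-chosen weights, here is a hedged sketch of combining the two components. The common 0-100 scale, the example scores, and the function name are assumptions for illustration only; the 5% minimum weight is the figure discussed above.

```python
def student_growth_score(goal_score, msgp_score, msgp_weight=0.05):
    """Combine a student growth goal rating with an MSGP-based rating.

    Both inputs are assumed to be on a common 0-100 scale; the scale and
    weighting scheme are illustrative, not Kentucky's actual rubric.
    """
    return (1 - msgp_weight) * goal_score + msgp_weight * msgp_score

# With the MSGP weighted at only 5%, even a very low or very high MSGP
# barely moves the overall student growth score:
print(student_growth_score(goal_score=85, msgp_score=20))  # 81.75
print(student_growth_score(goal_score=85, msgp_score=90))  # 85.25
```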

The theory behind an effective teacher evaluation model is that you improve students’ education by improving the teachers, either by changing which teachers are in the workforce or by identifying areas of weakness and tailoring professional development to those areas and those teachers. But I will not be surprised if we come to find that TPGES is not very good at identifying strong and weak teachers or areas of practice, given its weak measurement tools. If done well, however, the use of student growth goals within TPGES may directly improve the education our students receive by giving teachers a powerful tool for individualizing instruction for every student. And ultimately, that is the purpose of any teacher evaluation system. I would be wary, though, of leaning on TPGES for higher-stakes decisions that affect teachers, like pay scales or dismissals, as the system may not be up to the task of providing that kind of reliable information.

Morgan, C., & Lacireno-Paquet, N. (2013). Overview of Student Learning Objectives (SLOs): Review of the Literature. Regional Educational Laboratory at EDC.

For more on Student Learning Objectives and how they may impact teacher performance and student outcomes, see an analysis of Denver’s ProComp System.

For more on some of the challenges of VAM alluded to in Wigger’s analysis, see Some Inconvenient Facts About VAM.

For a look at some of the challenges posed by Tennessee’s relatively sophisticated VAM model, see The Worst Teachers.


For more on education politics and policy in Kentucky, follow @KYEdReport
