The PGES Student Growth Component

This article was submitted by Cora Wigger, a graduate student in public policy at Vanderbilt University’s Peabody College of Education.

Examining TPGES’s Student Growth Component

The 2014-2015 school year was the first in which all schools in the state fully implemented the Kentucky Professional Growth and Effectiveness System (PGES), and beginning in 2015-2016, all schools and districts will be required by the state to use the results of PGES evaluations in decisions about professional development and retention. Now is a particularly critical time for the state to evaluate both the structure and the rollout of PGES in order to make any final changes before stakes are officially attached to the system.

PGES Overview: http://kyedreport.com/?p=148

If you follow teacher evaluation systems in other states or in the national conversation, you’ve probably come across the terms “Value Added Models” (VAMs) (calculations that attribute to the teacher a student’s test score growth beyond what would have been predicted) and “Student Learning Objectives” (SLOs) (individualized learning goals developed and assessed for each of a teacher’s students). Kentucky uses both of these in the Student Growth portion of its teacher evaluation system (TPGES) but refrains from using the often politicized terms. Smart, since not all VAMs and SLOs are created equal.

Kentucky’s Student Growth Goals, a take on SLOs, are a strong pedagogical tool, and Kentucky’s push to use this strategy statewide is ambitious and forward-thinking precisely because such goals are not easy to implement and monitor. Available research generally supports the idea that SLOs have a positive effect on student learning, and the individualized nature of goal development promotes teacher buy-in for the evaluation system. However, there is little evidence that SLOs are a valid or reliable tool for measuring teacher effectiveness in an evaluation (see Morgan & Lacireno-Paquet, 2013). While the process of creating and using these student growth goals may benefit both teacher practice and student learning, their use in TPGES to determine a teacher effectiveness score, and in turn teacher development and retention decisions, may not be a responsible or accurate measurement approach.

The second component of a teacher’s student growth score uses students’ changes in test scores, compared to those of their academic peers, to estimate the teacher’s contribution to academic growth. Kentucky’s approach here maximizes teacher buy-in by applying test score data only to teachers who actually taught the students tested in a given year (unlike some systems that hold all teachers in a school accountable for students’ test scores, even teachers of untested subjects). A Student Growth Percentile (SGP) is calculated for each tested student in at least their second consecutive year of testing: the student’s current-year score is compared to the scores of all students statewide who earned the same score on the previous year’s test, and the SGP is the percentile the student falls into within that group. A teacher’s Median Student Growth Percentile (MSGP) is the median of all of that teacher’s students’ SGPs. As complicated as that explanation may be, Kentucky’s model is extremely simple compared to other Value Added Models; some VAMs, for instance, take student background or teacher experience into account. And by basing the final score on percentiles instead of raw scores, the TPGES model guarantees that there will always be students with low SGPs and students with high SGPs, even if all students do better (or worse) than would have been predicted.
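To make those mechanics concrete, here is a minimal sketch of the SGP and MSGP calculation as described above. It is illustrative only; the function names, data layout, and use of Python are my assumptions, not the Kentucky Department of Education’s actual implementation.

```python
# Minimal sketch of the SGP/MSGP idea described above. Illustrative only;
# not the Kentucky Department of Education's actual implementation.
from collections import defaultdict
from statistics import median


def student_growth_percentiles(records):
    """records: list of (student_id, prior_year_score, current_year_score).

    Each student is compared only to the statewide group of students who
    earned the same prior-year score; the SGP is the student's percentile
    rank within that group on the current-year test.
    """
    records = list(records)
    peer_groups = defaultdict(list)
    for _, prior, current in records:
        peer_groups[prior].append(current)

    sgps = {}
    for student, prior, current in records:
        peers = peer_groups[prior]
        below = sum(1 for score in peers if score < current)
        sgps[student] = 100 * below / len(peers)  # percentile rank, 0-100
    return sgps


def median_sgp(sgps, roster):
    """A teacher's MSGP is the median SGP of that teacher's own students."""
    return median(sgps[s] for s in roster)
```

Because each SGP is a percentile rank within a peer group, a fixed share of every group necessarily lands near the bottom no matter how much everyone improved, which is the zero-sum property noted above.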

The SGP approach also limits the years and subjects for which an assessment-based growth score can be calculated, because it requires consecutive years of test data in the same subject; this greatly shrinks the pool of teachers who can receive a score, even among those who teach tested subjects. The model also averages scores over three years when available, which improves statistical reliability but means that, for new teachers, each single year carries more weight than it does for more experienced colleagues. Overall, the SGP is a potentially invalid and unreliable statistical tool for measuring student growth, and one that leaves much of the available test data unused in determining a teacher’s contribution.

However, it may not much matter. Kentucky allows districts to determine the weight that MSGP scores receive, so this score can theoretically make up as little as 5% of a teacher’s overall student growth score. So while the MSGP may not be as statistically sound or reliable as would be ideal, districts can all but leave it out of teachers’ final effectiveness scores. That, however, places all of the weight (for teachers of untested subjects) or nearly all of the weight (for tested teachers in districts that give the MSGP little importance) on student growth goals, which, as argued above, may be a flawed source for teacher evaluation.
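For a sense of scale, the small sketch below shows how a district-chosen weight can dilute the MSGP. The linear combination and the 0-100 scale are assumptions for illustration; the source says only that districts set the weight and that it can be as low as 5%.

```python
# Illustrative only: a district-chosen weight shrinking the MSGP's role in a
# teacher's combined student growth score. The linear blend and 0-100 scale
# are assumptions; the text specifies only that districts set the MSGP
# weight, which can be as low as 5 percent.
def student_growth_score(msgp_component, goal_component, msgp_weight=0.05):
    return msgp_weight * msgp_component + (1 - msgp_weight) * goal_component


# At a 5% weight, even a very low MSGP barely moves the combined score:
print(student_growth_score(msgp_component=10, goal_component=80))  # 76.5
```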

The theory behind an effective teacher evaluation model is that improving the teachers will improve students’ education, either by changing which teachers are in the workforce or by identifying areas of weakness and tailoring professional development to those areas and those teachers. Given its weak measurement tools, I will not be surprised if TPGES turns out to be poor at identifying strong and weak teachers and areas of practice. However, if done well, the student growth goals accompanying TPGES may directly improve the education our students receive by giving teachers a powerful tool for individualizing instruction for every student. And ultimately, that is the purpose of any teacher evaluation system. I would be wary, though, of leaning on TPGES for higher-stakes decisions that affect teachers, like pay scales or dismissals, as the system may not be reliable enough to support that kind of decision.

Morgan, C., & Lacireno-Paquet, N. (2013). Overview of Student Learning Objectives (SLOs): Review of the Literature. Regional Education Laboratory at EDC.

For more on Student Learning Objectives and how they may impact teacher performance and student outcomes, see an analysis of Denver’s ProComp System.

For more on some of the challenges of VAM alluded to in Wigger’s analysis, see Some Inconvenient Facts About VAM.

For a look at some of the challenges posed by Tennessee’s relatively sophisticated VAM model, see The Worst Teachers.


For more on education politics and policy in Kentucky, follow @KYEdReport

PGES Skepticism

Gary Houchens expressed skepticism about the ability of Kentucky’s new teacher evaluation system (PGES) to effectively differentiate teacher performance back in 2013, and he has since noted that he remains skeptical.

Houchens cites research suggesting that measurable teacher performance changes little no matter the evaluation tool used. More specifically, he notes that despite spending significant dollars on new systems, many states still weren’t seeing much differentiation among teachers on evaluations.

He writes:

Last Spring I wrote about a New York Times article exploring the results of new teacher evaluations in multiple states, including Florida, Michigan, Tennessee, Connecticut, and Washington, DC.  After investing millions of dollars and thousands of hours in new evaluation systems designed to better distinguish levels of teacher performance, these states found that principals were still rating more than 90 percent of all teachers as effective or highly effective. Only tiny percentages of teachers were identified as “ineffective” or “developing.”

It would seem these efforts were a monumental waste of time and money with only a handful of possible explanations for the results.

Houchens then goes on to note that leadership at the principal level is what makes an impact on teaching practice, regardless of the evaluation model used.

He notes:

Furthermore, Murphy and colleagues identify four larger categories of principal behaviors that make a difference in teaching quality:

…providing actionable feedback to teachers…developing communities of practice in which teachers share goals, work, and responsibility for student outcomes…offering abundant support for the work of teachers…and creating systems in which teachers have the opportunity to routinely develop and refine their skills.

None of these principal activities must rely on the teacher evaluation system for their effectiveness.  In fact, these activities are most likely high-leverage behaviors even under the old, clunky teacher evaluation system.  Perhaps we could save all this time and money we are currently investing in PGES and focus, instead, on leadership behaviors that really make a difference.

I want to zoom in on the actionable feedback piece of the research cited by Houchens. To me, that is the biggest shortcoming in most evaluation systems: even when principals identify areas for improvement for a specific teacher, directing that teacher to ways to improve practice can prove difficult. Content-specific professional development may not be readily available, for example. Access to mentors and coaches is often limited, if it exists at all.

And, as Houchens notes, time constraints placed on principals may prevent them from providing the coaching/guidance teachers most need.

One of the biggest complaints I hear from teachers, regardless of the evaluation model used, is that professional development is not connected in any way to what’s written on the evaluation.

A teacher rated “meets expectations” (a 3 on Tennessee’s 1-5 teacher rating system) has likely earned 1s or 2s in some categories of the rubric. Yet the attendant professional development is simply not offered or available. That’s just one example of actionable feedback going nowhere: teacher X now knows he is struggling in a few areas but doesn’t know quite what to do to improve.

The fix could be something as simple as release time to observe other teachers who are strong where that teacher is weak. So while mentors and coaches are helpful, the solution doesn’t necessarily have to carry a high cost.

Moreover, what is the cost of NOT investing in teachers to help them improve practice? First, it’s disrespectful to teachers as professionals. Professional educators want to improve their practice. An evaluation system that identifies areas for improvement but fails to provide actionable feedback on how to improve is insulting and demoralizing. Second, it’s not fair to students. School leaders know that a certain teacher needs help in specific areas, but that help is not provided. So, students continue to miss out on the best possible instruction.

How we treat teachers says a lot about how much we truly value our students. Treating them like professionals may carry costs in terms of both time and money. But those costs are worth it if we truly want every child to have access to a great education.

And, as Houchens notes, maybe instead of spending on fancy new evaluation systems with tremendous potential, we should spend on leadership development and training as well as provision of the feedback mechanisms that will truly improve instructional practice.


For more on education politics and policy in Kentucky, follow @KYEdReport

An Overview of PGES

In the 2014-15 school year, every Kentucky teacher will be evaluated using the Professional Growth and Effectiveness System (PGES). But what is PGES, and what does it mean for teachers?

This policy brief is designed to provide an overview of PGES — what it means, where it came from, and where teacher evaluation is headed in Kentucky.

The new evaluation system is a component of the “Next-Generation Professionals” pillar of Kentucky’s Unbridled Learning reform, passed in 2009 as Senate Bill 1. The system was field-tested in a limited number of districts from 2010 – 2013, and in the 2013 – 2014 school year, all districts statewide piloted PGES. While all teachers will be measured by PGES in 2014 – 2015, districts will not be required to use PGES evaluations for personnel decisions until the 2015 – 2016 school year.

PGES has been phased-in over time and will continue to be refined throughout the process.

PGES Timeline:

Phase 1: 2010-11
25 districts participated in a Field Test of PGES.

Phase 2: 2011-13
55 districts participated in a Field Test of PGES.

Phase 3: 2013-14
All districts participated in a Pilot of PGES (a minimum of 10 percent of schools per district).

Phase 4: 2014-15
Statewide implementation of PGES. Districts may choose to use PGES for personnel decisions but are not required to do so by the state.

Phase 5: 2015-beyond
Statewide implementation of PGES for personnel decisions. The system moves into the Unbridled Learning accountability model.

What’s in PGES?

PGES includes five domains for evaluating teachers: planning and preparation, classroom environment, instruction, professional responsibility, and student growth.

  • The educator’s overall performance rating is determined by the “professional practice” and “student growth” ratings, producing a final evaluation of exemplary, accomplished, developing, or ineffective.
  • Four domains – planning and preparation, classroom environment, instruction, and professional responsibility – contribute to a professional practice rating of exemplary, accomplished, developing, or ineffective.
  • The local and state student growth metrics contribute to a student growth rating of high, expected, or low.


Table 1: PGES Structure and Sources of Evidence for Each Domain

Overall Performance Rating (Exemplary, Accomplished, Developing, Ineffective), which combines:

  • Professional Practice Rating (Exemplary, Accomplished, Developing, Ineffective), drawn from the Planning and Preparation, Classroom Environment, Instruction, and Professional Responsibility domains
  • Student Growth Rating (High, Expected, Low), drawn from the Student Growth domain

Sources of evidence for each domain:

  • Planning and Preparation: 1) Pre and post conferences, 2) Professional growth plans, 3) Self reflection, 4) Lesson plans
  • Classroom Environment: 1) Observation, 2) Student Voice Survey, 3) Professional growth plans, 4) Self reflection
  • Instruction: 1) Observation, 2) Student Voice Survey, 3) Professional growth plans, 4) Self reflection
  • Professional Responsibility: 1) Pre and post conferences, 2) Professional growth plans, 3) Self reflection, 4) Lesson plans
  • Student Growth: 1) Local student growth goals, 2) State student growth percentiles

Source: Kentucky Department of Education

What do the domains mean?

Student Growth

All Kentucky teachers will have “rigorous, locally-determined student growth goals, developed collaboratively between the teacher and evaluator.” Additionally, 4th – 8th grade English and math teachers will have a state growth measure based on student growth percentiles (change in an individual student’s performance over time) on state K-PREP tests.

Observations

Each district in Kentucky decides how many and what kinds of administrator observations will occur during a teacher’s summative cycle. These observations will be aligned with the Kentucky Framework for Teaching. Administrator observations are part of an educator’s overall professional practice rating. Teachers may also receive formative feedback from peer observations to help improve their practice.

Student Voice Survey

Third through 12th grade students provide formative feedback to teachers through an online survey, reporting on their classroom experiences including teaching practices and learning conditions. Student voice surveys are included in an educator’s overall professional practice rating.

Self Reflection and Professional Growth

Teachers self-reflect on their instructional planning, lesson implementation, content knowledge, beliefs, and dispositions for the purpose of self-improvement. The goal of self-reflection is to improve teaching and learning through ongoing thinking about how professional practices impact student and teacher learning.

After doing a self-evaluation, teachers will decide on a professional growth goal, around which they will develop an action plan. To narrow their goal, teachers will answer three questions:

  1. What do I want to change about my instruction that will effectively impact student learning?
  2. What personal learning is necessary to make the change?
  3. What are the measures of success?


Carol Franks, an effectiveness coach with the Kentucky Department of Education, explained that the first question “really zeroes in about instruction that is going to impact students, the second identifies what teachers need to do to meet the goal, and the third is about what evidence teachers can use to show they have grown professionally.” The professional growth goal also incorporates students’ needs, feedback from observations, and supervisor input.

How will PGES be used?

A teacher’s PGES ratings determine the next steps, including an improvement plan and the process for follow-up evaluation, as the table below shows:

Table 2: Improvement Plans Based on Teacher Student Growth and Professional Practice Ratings

  • Low growth, Ineffective practice: an up-to-12-month improvement plan with goals determined by an evaluator, a focus on low-performance areas, and another summative evaluation at the end of the plan
  • Low growth, Developing practice: a one-year directed plan with goals and activities determined by the evaluator with input from the teacher, goals that focus on the low performance/outcome areas, an annual formative review, and a summative review at the end of the plan
  • Low growth, Accomplished or Exemplary practice: a two-year self-directed plan with goals set by the teacher with evaluator input, one goal that must focus on the low outcome area, and an annual formative review
  • Expected or High growth, Ineffective practice: a one-year directed plan with goals determined by the evaluator and activities determined by the evaluator with input from the teacher, goals that focus on the low performance/outcome areas, an annual formative review, and a summative review at the end of the plan
  • Expected or High growth, Developing practice: a two-year self-directed plan with goals and activities set by the teacher with evaluator input, goals that must focus on the low performance/outcome area, and an annual formative review
  • Expected or High growth, Exemplary practice: a three-year self-directed plan with goals set by the teacher with evaluator approval, activities directed by the teacher and implemented with colleagues, an annual formative review, and a summative review at the end of the third year

Source: KentuckyTeacher.org
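To see the decision logic in Table 2 at a glance, here is a minimal sketch that encodes it as a lookup table. The code and names are hypothetical, not any official PGES tool; note that the source table lists no row pairing Expected or High growth with an Accomplished practice rating, so that combination is deliberately left undefined here.

```python
# Hypothetical encoding of Table 2 as a lookup from (student growth rating,
# professional practice rating) to the matching improvement plan. Not an
# official PGES tool. "Expected" and "High" growth share rows, as in the
# table; the source lists no Expected/High + Accomplished row, so that pair
# raises a KeyError here.
PLANS = {
    ("Low", "Ineffective"):
        "Up-to-12-month plan; evaluator-set goals; focus on low-performance "
        "areas; summative evaluation at the end of the plan.",
    ("Low", "Developing"):
        "One-year directed plan; evaluator-set goals and activities with "
        "teacher input; annual formative review; summative review at the end.",
    ("Low", "Accomplished"):
        "Two-year self-directed plan; teacher-set goals with evaluator input; "
        "one goal on the low outcome area; annual formative review.",
    ("Expected or High", "Ineffective"):
        "One-year directed plan; evaluator-set goals and activities with "
        "teacher input; annual formative review; summative review at the end.",
    ("Expected or High", "Developing"):
        "Two-year self-directed plan; teacher-set goals and activities with "
        "evaluator input; goals on low outcome areas; annual formative review.",
    ("Expected or High", "Exemplary"):
        "Three-year self-directed plan; teacher-set goals with evaluator "
        "approval; teacher-directed activities with colleagues; annual "
        "formative review; summative review at the end of year three.",
}
# Low growth pairs identically with Accomplished and Exemplary practice.
PLANS[("Low", "Exemplary")] = PLANS[("Low", "Accomplished")]


def improvement_plan(growth, practice):
    """Look up the Table 2 improvement plan for a pair of ratings."""
    g = "Low" if growth == "Low" else "Expected or High"
    return PLANS[(g, practice)]


print(improvement_plan("High", "Exemplary"))
```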

By the 2015 – 2016 school year, the new evaluation system is intended to inform all personnel decision-making by schools, districts, and the state, such as support for professional learning, additional compensation, raises, tenure, certification, and release decisions. The State will make approval of local evaluation systems contingent on integration of evaluations into personnel decisions.

What’s next?

This is the first year every teacher will experience PGES. Through field tests, the process has been revised and refined. The next hurdle will be the development and implementation of improvement plans. Then, the mandate that districts use the information to inform personnel decisions in the 2015-16 year takes effect. District adaptation to that mandate could fundamentally change the way teachers are compensated and may inform professional development, hiring practices, and dismissal procedures.

*The research in this report was compiled by Colleen Maleski, a graduate student in education policy. Most of the information was compiled from the Kentucky Department of Education and KentuckyTeacher.org.

PGES and the New Teacher

Todd County Central High School Science Teacher Pennye Rogers, a 2014-15 Hope Street Group Fellow, talks about the new PGES evaluation system and what it means for the beginning teacher.

Here are some highlights of what she has to say over at the Prichard Blog:

 I have heard conversations that stated: “PGES is not good for new teachers.” The explanation was that new teachers don’t have the skills necessary to promote student growth, nor are they competent in the strategies to teach the content. But, it is my understanding that the peer observer is to encourage the observed teacher to reflect upon his/her teaching practices and guide them toward improvement. It is important to note that a single peer observation may not be enough in this situation. However, a new teacher would most likely have a mentor already through the KY Teacher Internship Program. I find it disturbing that new teachers who have the potential to become great teachers may be let go at an increased rate and blamed on PGES because he/she cannot score high enough on the evaluation scale! New teachers simply don’t have the experience and confidence necessary to excel in all areas evaluated.

Here, Rogers is recommending that administrators take note of the potential impact of PGES on a new teacher. Additionally, a new teacher’s KTIP mentor should assist that teacher in advocating for his/her needs as they relate to the evaluation.

The KTIP program is a fairly intense mentorship of first-year teachers that provides support, feedback, and guidance in the critical early phase of teaching. Combining effective mentorship with the new evaluation model is an important element in the future success of PGES.

For more on Kentucky education politics and policy, follow @KYEdReport

On the New Evaluation System for Teachers

Lindsey Childers offers her thoughts on Kentucky’s Professional Growth and Effectiveness System (PGES) for teachers and administrators.

In short, she says that a well-thought-out development process and a measured rollout will strengthen the evaluation instrument when it is fully implemented in the 2014-15 school year.