The PGES Student Growth Component

This article was submitted by Cora Wigger, a graduate student in public policy at Vanderbilt University’s Peabody College of Education.

Examining TPGES’s Student Growth Component

The 2014-2015 school year was the first in which all schools in the state fully implemented the Kentucky Professional Growth and Effectiveness System (PGES), and beginning in 2015-2016 all schools and districts will be required by the state to use the results of PGES evaluations to inform professional development and retention decisions. Now is a particularly critical time for the state to evaluate both the structure and rollout of PGES in order to make any final changes before stakes are officially attached to the system.

PGES Overview: http://kyedreport.com/?p=148

If you follow teacher evaluation systems in other states or in the national conversation, you’ve probably come across the terms “Value Added Models” (VAMs) (calculations of student test scores that attribute to the teacher any student growth beyond what would have been predicted) and “Student Learning Objectives” (SLOs) (individualized learning goals developed and assessed for each of a teacher’s students). Kentucky uses both in the Student Growth portion of its teacher evaluation system (TPGES), but refrains from using the often-politicized terms. Smart, since not all VAMs and SLOs are created equal.

Kentucky’s Student Growth Goals, a take on SLOs, are a strong pedagogical tool, and Kentucky’s push to use this strategy statewide is ambitious and forward-thinking, because such goals are not easy to implement and monitor. Available research generally supports the idea that SLOs have a positive effect on student learning, and the individualized nature of goal development promotes teacher buy-in for the evaluation system. However, there is little evidence that SLOs are a valid or reliable tool for measuring teacher effectiveness (see Morgan & Lacireno-Paquet, 2013). While the process of creating and using these student growth goals may benefit both teacher practice and student learning, their use in TPGES to determine a teacher effectiveness score, and subsequent teacher development and retention decisions, may not be a responsible or accurate measurement approach.

The second component of a teacher’s student growth score uses students’ changes in test scores, compared to those of their academic peers, to estimate the teacher’s contribution to academic growth. Kentucky’s approach here maximizes teacher buy-in by limiting the application of test score data to teachers who actually taught the students being tested in a given year (unlike some systems that hold all teachers in a school accountable for students’ test scores, even those teaching untested subjects). A Student Growth Percentile (SGP) is calculated for each student with at least two consecutive years of testing: the student’s current-year score is compared against the scores of all students statewide who earned the same score on the previous year’s test, and the SGP is the percentile the student falls into within that peer group. A teacher’s Median Student Growth Percentile (MSGP) is the median of all of that teacher’s students’ SGPs. As complicated as that explanation may be, Kentucky’s model is extremely simple compared to other Value Added Models; some VAMs, for instance, take student background or teacher experience into account. And by basing the final score on percentiles instead of raw scores, the TPGES model guarantees that there will always be students with low SGPs and students with high SGPs, even if all students do better (or worse) than would have been predicted.
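The SGP and MSGP idea described above can be sketched in a few lines of Python. The data and record layout here are invented, and Kentucky's actual computation is more involved, but the core logic of "rank a student among peers with the same prior-year score, then take each teacher's median" is the same:

```python
# Illustrative sketch only: made-up scores, not KDE's actual model.
from statistics import median

# Hypothetical records: (student_id, prior_year_score, current_score, teacher)
records = [
    ("s1", 50, 62, "A"), ("s2", 50, 55, "A"), ("s3", 50, 70, "B"),
    ("s4", 50, 58, "B"), ("s5", 50, 66, "A"), ("s6", 50, 61, "B"),
]

def sgp(student, cohort):
    """Percentile of a student's current score among academic peers:
    all students with the same prior-year score."""
    peers = [r for r in cohort if r[1] == student[1]]
    below = sum(1 for r in peers if r[2] < student[2])
    return 100 * below / len(peers)

def msgp(teacher, cohort):
    """Median Student Growth Percentile across a teacher's students."""
    return median(sgp(r, cohort) for r in cohort if r[3] == teacher)
```

Note that every student in this toy cohort gained at least five points, yet half of them necessarily land below the 50th percentile; that is the zero-sum property of a percentile-based model noted above.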

The SGP approach also limits the years and subjects for which an assessment-based growth score can be calculated, because it requires consecutive years of test data in the same subject, greatly shrinking the pool of teachers of tested subjects who can receive a score. It also averages scores over three years when available, which improves statistical stability, but it means that for new teachers each single year carries more weight than it does for their more experienced colleagues. Overall, the SGP is a potentially invalid and unreliable statistical tool that leaves much of the available test data unused in determining teacher contribution to student growth.

However, it may not much matter. Kentucky allows districts to determine the weight that MSGP scores receive, theoretically allowing this score to make up as little as 5% of a teacher’s overall student growth score. So while the MSGP may not be as statistically sound or reliable as is ideal, districts have the ability to nearly leave it out of teachers’ final effectiveness scores. However, this then places all of the weight (for untested teachers) or nearly all of it (for tested teachers in districts that place little importance on MSGP) on student growth goals, which, as I have argued, may be a flawed source for teacher evaluation.

The theory behind having an effective teacher evaluation model is that you will improve students’ education by improving the teachers – either by changing which teachers are in the work force or by identifying areas of weakness and tailoring professional development around those areas and for those teachers. But I will not be surprised if we come to see that TPGES, given its weak measurement tools, isn’t very good at identifying strong and weak teachers and areas of practice. However, if done well, the use of student growth goals within TPGES may directly improve the education our students receive by giving teachers a powerful tool for offering individualized instruction to every student. And ultimately, that’s the purpose of any teacher evaluation system. I would be wary, however, of overusing TPGES in higher-stakes decisions that impact teachers, like pay scales or dismissals, as the system may not be reliable enough to support that kind of decision.

Morgan, C., & Lacireno-Paquet, N. (2013). Overview of Student Learning Objectives (SLOs): Review of the Literature. Regional Educational Laboratory at EDC.

For more on Student Learning Objectives and how they may impact teacher performance and student outcomes, see an analysis of Denver’s ProComp System.

For more on some of the challenges of VAM alluded to in Wigger’s analysis, see Some Inconvenient Facts About VAM.

For a look at some of the challenges posed by Tennessee’s relatively sophisticated VAM model, see The Worst Teachers.

 

For more on education politics and policy in Kentucky, follow @KYEdReport

Why is Kentucky Losing New Teachers?

The Prichard Blog poses this question in light of some startling data from the Kentucky Board of Education:

For every 100 teachers who were new hires of the 2009-10 school year:

  • 18 were no longer teaching in Kentucky by the next year
  • 12 more were gone by the year after that
  • 7 were teaching in a different district by their second year
  • 7 were in the same district, but at a different school
  • 56 were still at their original schools

That means that after two years, 30% of teachers who start teaching in Kentucky no longer teach in Kentucky; perhaps they leave teaching altogether, or perhaps they just move out of state. It also shows that after two years, only 56 of every 100 new hires, barely more than half, are still teaching at their original school.

As Prichard notes, this raises some important questions. Certainly, this type of turnover is both expensive and challenging for school districts.

But, what can be done?

One possible solution is a new teacher mentoring program. Yes, Kentucky has KTIP, but perhaps a program that goes deeper and does more to support new teachers is in order. Investing in early career teaching matters:

It is absolutely imperative that early career teachers receive adequate support and assistance so they develop into excellent teachers.  It’s also critical that those teachers are encouraged to stay in the field.  High teacher turnover costs districts (and taxpayers) money and deprives students of the valuable benefits of strong, stable teachers.  One proven method of retaining new teachers that also results in improved student learning is early career mentoring.  Research at the New Teacher Center suggests that placing a trained mentor with a new teacher in the first two years of teaching both improves teacher retention and shows a positive impact on student learning.

Additionally, adopting a more comprehensive support system, perhaps within the PGES framework, could help. Combining the new evaluation system with a Peer Assistance and Review (PAR) program could also bolster the support new teachers receive in their early career development:

This Harvard Guide looks at seven PAR programs and discusses their impact. The bottom line is that the programs are generally well-received by both teachers and administrators and demonstrate a level of effectiveness at both preparing new teachers and improving veteran teachers.

Here are a few key takeaways:

Districts with PAR programs say that, although the program can be expensive, it has many important benefits. PAR’s mentoring component helps beginning teachers succeed and, thus, increases retention. PAR also makes it possible to help ineffective tenured teachers improve or to dismiss them without undue delay and cost because of the program’s clear assessment process and the labor-management collaboration that underpins it. This process of selective retention can lead to a stronger teaching force and promote an organizational culture focused on sound teaching practice. Union leaders say that the program professionalizes teaching by making teachers responsible for mentoring and evaluating their peers. With its specialized roles for Consulting Teachers (CTs), PAR also has the potential to differentiate the work and career opportunities of teachers.

When nearly one out of every three new teachers hired in Kentucky leaves the profession within two years, something needs to be done. Certainly, no one wants to keep people in a profession for which they are not well-suited. But high turnover is not desirable for districts, for students, or for taxpayers. Certainly, many of those who chose teaching sincerely want to do the job and have the ability to do it well.

Kentucky would do well to find a way to better support early career teachers and improve their development as professionals.

For more on education policy and politics in Kentucky, follow @KYEdReport

 

Professional Development: Accepted and Expected

This article was submitted by Hope Street Group Fellows Kip Hottman and Angela Baker. Baker teaches English/Language Arts and Journalism in Berea Community Schools. Read her full bio. Hottman is a Spanish teacher at Oldham County High School. Read his full bio.

Kentucky Education Report continues to seek submissions from teachers who wish to comment on education policy in Kentucky.

This year Kentucky joined many states throughout the U.S. in implementing a more comprehensive teacher evaluation program. Kentucky teachers have been piloting the new Professional Growth and Effectiveness System (PGES) for the last two years, but this year full implementation is occurring, with full accountability being postponed until the 2015-2016 school year.

Across the nation many teachers are taking part in initiatives that integrate and embed professional learning within teacher evaluation. While professional development has been part of teachers’ ongoing training for years, school administrators and local decision-making councils are now looking at how to improve individual teachers’ skills. PGES will allow individual teachers to tailor their professional learning to their needs rather than enduring school-wide professional development that likely does not match their individual areas for improvement. At the heart of decision-making about a teacher’s effectiveness is data: data about his or her students (such as summative test scores and daily formative academic gains), classroom observations, and teacher reflection. With information from multiple measures, teachers, in collaboration with the administrator, are able to create student-centered goals and increasingly intentional plans to improve their effectiveness.

In October of 2013, Secretary of Education Arne Duncan visited Williamsburg, Kentucky to encourage and acknowledge the state’s efforts in early childhood development. At the town hall convening, Secretary Duncan was asked to provide a specific example of a successful teacher evaluation system in the United States. He immediately responded with Montgomery County, Maryland, and its use of a program called Peer Assistance and Review (PAR: http://www.gse.harvard.edu/~ngt/par/).

The purpose of the PAR program is to help all teachers meet standards for proficient teaching. It is a program instituted to truly help teachers be as successful as possible and to continue to learn and grow as educators. The system was instituted in the early 2000s and uses multiple measures to determine a teacher’s professional development (PD) needs. The multiple measures are as follows:

  • Formal and informal observations by school administration or a consulting teacher
  • Student achievement data
  • Non-evaluative observations by a staff development teacher, reading specialist, math specialist or math content coach
  • Student learning objective data
  • Peer walk-throughs
  • Formative assessment data and marking period data

When Assistant Principal Greg Mullenholz of Maryvale Elementary School in Montgomery County, Maryland was asked about strengths and weaknesses of the PAR program, he said, “The evaluation has an outcome that is rooted in Professional Development. Meaning, the observation of the teacher is used to analyze the effectiveness of their practice. A problem that could arise if the observation isn’t solid because the goal will be misaligned to the actual need of the teacher. The support structure also has to be in place so the Professional Development will be available once a goal is defined.”

In the past, growth was viewed as a common thread among departments in schools, and most teachers focused on the same goal as their peers. The PAR program is groundbreaking because it is teacher-centered: teachers have the opportunity to create their own professional growth goals. Each teacher is held accountable for his or her goal and provides evidence of change in student achievement through adopted changes in practice.

Mr. Mullenholz also discussed his personal opinion of PAR and its effect on growing teachers professionally through collaboration: “Since its implementation over a decade ago, PAR has been a strong model. I love that it was collaboratively developed and that the school system and the union are both architects. The “peer” part is critical as the evaluation or observation must have an expectation for improvement in the teacher’s practice, or there is no set-up for success.”

While Montgomery County School district implemented an evaluation system with an eye toward teacher development, others took this one step further and created incentives for improved performance. One example of this is the Vaughn Next Century Learning Center in San Fernando, California.

The Vaughn Next Century Learning Center has a history of offering high quality professional development integrated with teacher evaluation for performance pay over an interval of several years. They also use the PAR program and, like other teacher evaluation systems, professional development needs are determined by a combination of test scores and areas of need identified through observations by both lead teachers and administrators. As an independent charter school, the curriculum committee looks at the needs of the entire school and plans professional development based on numerous local factors.

Nicole Mohr, teacher and Curriculum Committee Chair to the Board of Directors at the Vaughn Center, stated, “It is an ever growing, ever changing process. Teachers who are on the performance assistance and review team meet regularly, several times a year and each summer to discuss how the program is meeting the needs of the school.” Most schools meet regularly to disaggregate data from state tests, other assessments, and even non-cognitive data to make plans to improve the school.

Teachers receive pay incentives in numerous areas: their skills/knowledge base (Designing Coherent Instruction, Managing Classroom Procedures, Managing Student Behavior, Engaging Students in Learning, Reflecting on Teaching, and Showing Professionalism) evaluated during observations; a contingency base (student attendance); an outcomes base (graduation rate and Average Percentage Increased); an expertise base (department chair, coach, mentor, tutor, etc.); and measurable student growth.

Ms. Mohr cautioned that the downside of incentives or merit pay is that “[teachers] may look for ways to prove [they] are meeting the requirements rather than looking for ways to improve [instructional practices]. Authentically excellent teachers usually do have the evidence to prove they are meeting expectations, which shows the overall importance and benefit of accountability.” While accountability may mean merit pay for some, for most schools evaluation is used to make decisions about retention.

Mella Baxter, an English and reading teacher in Flagler County Schools in Florida, is at a school that does not use PAR but is integrating professional development with teacher evaluation. Ms. Baxter stated, “[Professional Development] is not differentiated by individual teacher needs, but rather each Professional Learning Community (PLC) meeting focuses on how to get highly effective in one of the indicators on the evaluation tool. Then the rest of the PLC teachers work together to create lessons, assessment, etc. based on student data designed to get students to the level they need to be for teachers to get a highly effective rating.”

Aligning professional development to the evaluation tool, which is in turn linked to best practices, seems a simple and effective idea. Ms. Baxter, who is also a Hope Street Group National Teacher Fellow, is designing a space on the Hope Street Group Virtual Engagement Platform that will list the indicators of Florida’s teacher evaluation tool and link each one to resources that help teachers achieve a highly effective rating in that category. Her plan is to allow teachers to “further individually tailor their PD.” Once completed, it will allow features such as uploading videos of teachers as exemplars or obtaining feedback.

Teachers are more than capable of designing evaluative tools that encompass the complexity of the teaching profession. The most effective teachers are life-long learners. Professional development ought not to be a matter of compliance; it ought to be a tool for satisfying a teacher’s quest for daily improvement of practice. Being treated like a professional is a first step toward redesigning a career ladder that will keep the best teachers in the classroom and proud to be there helping American students.

More on Career Pathways for Teachers

More on Peer Assistance and Review (PAR)

For more on education policy and politics in Kentucky, follow @KYEdReport

 

PGES Skepticism

Gary Houchens expressed skepticism about the ability of Kentucky’s new teacher evaluation system (PGES) to effectively differentiate teacher performance back in 2013.  And he has noted since that he remains skeptical.

Houchens cites research that suggests that not much changes in terms of measurable teacher performance no matter the evaluation tool. More specifically, he notes that despite spending significant dollars on new systems, many states still weren’t seeing much differentiation among teachers on evaluations.

He writes:

Last Spring I wrote about a New York Times article exploring the results of new teacher evaluations in multiple states, including Florida, Michigan, Tennessee, Connecticut, and Washington, DC.  After investing millions of dollars and thousands of hours in new evaluation systems designed to better distinguish levels of teacher performance, these states found that principals were still rating more than 90 percent of all teachers as effective or highly effective. Only tiny percentages of teachers were identified as “ineffective” or “developing.”

It would seem these efforts were a monumental waste of time and money with only a handful of possible explanations for the results.

Houchens then goes on to note that leadership at the principal level is what makes an impact on teaching practice, regardless of the evaluation model used.

He notes:

Furthermore, Murphy and colleagues identify four larger categories of principal behaviors that make a difference in teaching quality:

…providing actionable feedback to teachers…developing communities of practice in which teachers share goals, work, and responsibility for student outcomes…offering abundant support for the work of teachers..and creating systems in which teachers have the opportunity to routinely develop and refine their skills.

None of these principal activities must rely on the teacher evaluation system for their effectiveness.  In fact, these activities are most likely high-leverage behaviors even under the old, clunky teacher evaluation system.  Perhaps we could save all this time and money we are currently investing in PGES and focus, instead, on leadership behaviors that really make a difference.

I want to zoom in on the actionable feedback piece of the research cited by Houchens. To me, that is the biggest shortcoming in most evaluation systems. That is, even if principals found areas for improvement for a specific teacher, directing them to ways to improve practice can at times prove difficult. Content-specific professional development may not be readily available, for example. Access to mentors and coaches is often limited, if it exists at all.

And, as Houchens notes, time constraints placed on principals may prevent them from providing the coaching/guidance teachers most need.

One of the biggest complaints I hear from teachers, regardless of the evaluation model used, is that professional development is not connected in any way to what’s written on the evaluation.

A teacher rated “meets expectations” (a 3 on Tennessee’s 1-5 teacher rating system) has likely earned 1s or 2s in some categories of the rubric. Yet the attendant professional development is simply not offered or available. That’s just one example of missing actionable feedback. So, teacher X now knows he is struggling in a few areas but doesn’t know quite what to do to improve.

It could be something as simple as release time to observe other teachers who are strong where that teacher is weak. So, while mentors and coaches are helpful, the solution doesn’t necessarily have to carry a high cost.

Moreover, what is the cost of NOT investing in teachers to help them improve practice? First, it’s disrespectful to teachers as professionals. Professional educators want to improve their practice. An evaluation system that identifies areas for improvement but fails to provide actionable feedback on how to improve is insulting and demoralizing. Second, it’s not fair to students. School leaders know that a certain teacher needs help in specific areas, but that help is not provided. So, students continue to miss out on the best possible instruction.

How we treat teachers says a lot about how much we truly value our students. Treating them like professionals may carry costs in terms of both time and money. But those costs are worth it if we truly want every child to have access to a great education.

And, as Houchens notes, maybe instead of spending on fancy new evaluation systems with tremendous potential, we should spend on leadership development and training as well as provision of the feedback mechanisms that will truly improve instructional practice.

 

For more on education politics and policy in Kentucky, follow @KYEdReport

An Overview of PGES

In the 2014-15 school year, every Kentucky teacher will be evaluated using the Professional Growth and Effectiveness System (PGES). But, what is PGES and what does it mean for teachers?

This policy brief is designed to provide an overview of PGES — what it means, where it came from, and where teacher evaluation is headed in Kentucky.

The new evaluation system is a component of the “Next-Generation Professionals” pillar of Kentucky’s Unbridled Learning reform, passed in 2009 as Senate Bill 1. The system was field tested in limited districts from 2010 to 2013, and in the 2013-2014 school year, all districts statewide piloted PGES. While all teachers will be measured by PGES in 2014-2015, districts will not be required to use PGES evaluations for personnel decisions until the 2015-2016 school year.

PGES has been phased-in over time and will continue to be refined throughout the process.

PGES Timeline:

Phase 1: 2010-11
25 districts participated in a Field Test of PGES.

Phase 2: 2011-13
55 districts participated in a Field Test of PGES.

Phase 3: 2013-14
All districts participated in a Pilot of PGES (a minimum of 10 percent of schools per district).

Phase 4: 2014-15
Statewide implementation of PGES. Districts choose whether or not to use PGES for personnel decisions, but are not required to by the State.

Phase 5: 2015-beyond
Statewide implementation of PGES for personnel decisions. The system moves into the Unbridled Learning accountability model.

What’s in PGES?

PGES includes five domains for evaluating teachers: planning and preparation, classroom environment, instruction, professional responsibility, and student growth.

  • The educator’s overall performance rating is determined by “professional practice” and “student growth” ratings, producing an ultimate evaluation of exemplary, accomplished, developing, or ineffective.
  • Four domains – planning and preparation, classroom environment, instruction, and professional responsibility – contribute to a professional practice rating of exemplary, accomplished, developing, or ineffective.
  • The local and state student growth metrics contribute to a student growth rating of high, expected, or low.

 

Table 1: PGES Structure and Sources of Evidence for Each Domain

The Overall Performance Rating (Exemplary, Accomplished, Developing, Ineffective) combines the Professional Practice Rating (Exemplary, Accomplished, Developing, Ineffective) with the Student Growth Rating (High, Expected, Low). Sources of evidence for each domain:

  • Planning and Preparation: 1) Pre and Post Conferences, 2) Professional Growth Plans, 3) Self Reflection, 4) Lesson Plans
  • Classroom Environment: 1) Observation, 2) Student Voice Survey, 3) Professional Growth Plans, 4) Self Reflection
  • Instruction: 1) Observation, 2) Student Voice Survey, 3) Professional Growth Plans, 4) Self Reflection
  • Professional Responsibility: 1) Pre and Post Conferences, 2) Professional Growth Plans, 3) Self Reflection, 4) Lesson Plans
  • Student Growth: 1) Local student growth goals, 2) State student growth percentiles

Source: Kentucky Department of Education

What do the domains mean?

Student Growth

All Kentucky teachers will have “rigorous, locally-determined student growth goals, developed collaboratively between the teacher and evaluator.” Additionally, 4th – 8th grade English and math teachers will have a state growth measure based on student growth percentiles (change in an individual student’s performance over time) on state K-PREP tests.

Observations

Each district in Kentucky decides how many and what kinds of administrator observations will occur during a teacher’s summative cycle. These observations will be aligned with the Kentucky Framework for Teaching. Administrator observations are part of an educator’s overall professional practice rating. Teachers may also receive formative feedback from peer observations to help improve their practice.

Student Voice Survey

Third through 12th grade students provide formative feedback to teachers through an online survey, reporting on their classroom experiences including teaching practices and learning conditions. Student voice surveys are included in an educator’s overall professional practice rating.

Self Reflection and Professional Growth

Teachers self-reflect on their instructional planning, lesson implementation, content knowledge, beliefs, and dispositions for the purpose of self-improvement. The goal of self-reflection is to improve teaching and learning through ongoing reflection on how professional practices impact student and teacher learning.

After doing a self-evaluation, teachers will decide on a professional growth goal, around which they will develop an action plan. To narrow their goal, teachers will answer three questions:

  1. What do I want to change about my instruction that will effectively impact student learning?
  2. What personal learning is necessary to make the change?
  3. What are the measures of success?

 

Carol Franks, an effectiveness coach with the Kentucky Department of Education, explained that the first question “really zeroes in about instruction that is going to impact students, the second identifies what teachers need to do to meet the goal, and the third is about what evidence teachers can use to show they have grown professionally.” The professional growth goal also incorporates students’ needs, feedback from observations, and supervisor input.

How will PGES be used?

A teacher’s PGES scores determine the next steps, including an improvement plan and the process for follow-up evaluation, as the table below demonstrates:

Table 2: Improvement Plans Based on Teacher Student Growth and Professional Practice Ratings

  • Low growth, Ineffective practice: an up-to-12-month improvement plan with goals determined by an evaluator, a focus on low-performance areas, and another summative evaluation at the end of the plan
  • Low growth, Developing practice: a one-year directed plan with goals and activities determined by the evaluator with input from the teacher, goals that focus on the low performance/outcome areas, an annual formative review, and a summative review at the end of the plan
  • Low growth, Accomplished or Exemplary practice: a two-year self-directed plan with goals set by the teacher with evaluator input; one goal must focus on the low outcome area, with an annual formative review
  • Expected or High growth, Ineffective practice: a one-year directed plan with goals and activities determined by the evaluator with input from the teacher, goals that focus on the low performance/outcome areas, an annual formative review, and a summative review at the end of the plan
  • Expected or High growth, Developing practice: a two-year self-directed plan with goals and activities set by the teacher with evaluator input; goals must focus on the low performance/outcome area, with an annual formative review
  • Expected or High growth, Exemplary practice: a three-year self-directed plan with goals set by the teacher with evaluator approval; activities are directed by the teacher and implemented with colleagues, with an annual formative review and a summative review at the end of the third year

Source: KentuckyTeacher.org
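Table 2 is effectively a small decision table, and the pattern is easy to see when the mapping is written out as code. The sketch below uses shorthand plan labels, not official terms, and note that the source table lists no row for an Expected or High growth rating paired with an Accomplished practice rating:

```python
def improvement_plan(growth, practice):
    """Rough mapping of Table 2: (growth rating, practice rating) pairs
    to improvement-plan types. Labels are shorthand, not official terms."""
    low = growth == "low"
    if practice == "ineffective":
        return "up-to-12-month improvement plan" if low else "one-year directed plan"
    if practice == "developing":
        return "one-year directed plan" if low else "two-year self-directed plan"
    if low:  # accomplished or exemplary practice, with low growth
        return "two-year self-directed plan"
    if practice == "exemplary":
        return "three-year self-directed plan"
    # Expected/High growth with Accomplished practice is not listed in the source table.
    raise ValueError("combination not listed in Table 2")
```

The pattern the table encodes: as ratings improve, plans get longer and more self-directed, and the evaluator's role shrinks from setting goals to merely approving them.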

By the 2015-2016 school year, the new evaluation system is intended to inform all personnel decision-making by schools, districts, and the state, such as support for professional learning, additional compensation, raises, tenure, certification, and release decisions. The State will make approval of local evaluation systems contingent on integration of evaluations into personnel decisions.

What’s next?

This is the first year every teacher will experience PGES. Through field tests, the process has been revised and refined. The next hurdle will be the development and implementation of improvement plans. Then, the mandate that districts use the information to inform personnel decisions in the 2015-16 year takes effect. District adaptation to that mandate could fundamentally change the way teachers are compensated and may inform professional development, hiring practices, and dismissal procedures.

*The research in this report was compiled by Colleen Maleski, a graduate student in education policy. Most of the information was compiled from the Kentucky Department of Education and KentuckyTeacher.org.

PGES and the New Teacher

Todd County Central High School Science Teacher Pennye Rogers, a 2014-15 Hope Street Group Fellow, talks about the new PGES evaluation system and what it means for the beginning teacher.

Here are some highlights of what she has to say over at the Prichard Blog:

 I have heard conversations that stated: “PGES is not good for new teachers.” The explanation was that new teachers don’t have the skills necessary to promote student growth, nor are they competent in the strategies to teach the content. But, it is my understanding that the peer observer is to encourage the observed teacher to reflect upon his/her teaching practices and guide them toward improvement. It is important to note that a single peer observation may not be enough in this situation. However, a new teacher would most likely have a mentor already through the KY Teacher Internship Program. I find it disturbing that new teachers who have the potential to become great teachers may be let go at an increased rate and blamed on PGES because he/she cannot score high enough on the evaluation scale! New teachers simply don’t have the experience and confidence necessary to excel in all areas evaluated.

Here, Rogers is recommending that administrators take note of the potential impact of PGES on a new teacher. Additionally, a new teacher’s KTIP mentor should assist that teacher in advocating for his/her needs as they relate to the evaluation.

The KTIP program is a fairly intense mentorship of first-year teachers that provides support, feedback, and guidance in the critical early phase of teaching. Combining effective mentorship with the new evaluation model is an important element in the future success of PGES.

For more on Kentucky education politics and policy, follow @KYEdReport

Is Kentucky Invested in the Future?

Not yet, according to Brad Clark, a Hope Street Group Fellow and teacher in Woodford County.

He writes passionately about the need to properly invest in Kentucky’s future by investing in its students and teachers.

He notes the need for additional resources in schools:

I am not exaggerating when I say that the fourth grade textbook we use to teach Kentucky History in 2014 is the exact same textbook — with a picture of Daniel Boone standing triumphantly on the front cover — that I used when I was in 4th grade in 1991.

And he notes the lack of investment in meaningful professional development for teachers:

 I have even designed and submitted a “Professional Growth Plan” that sits idle in a folder in an office in my building. Yet, I have no way of implementing my strategies for refining my craft. I do not blame my principal for this because he wants every student and teacher in his building to get better at what they do, but he lacks the necessary resources to make that happen.

His central point is that Kentucky is at a crossroads.  While investment in education increased steadily from 1990 to 2008 following the Kentucky Education Reform Act, that investment has tapered in recent years.  The per-pupil funding provided by SEEK has actually declined.

Governor Beshear has proposed a budget that begins to reverse this trend, in some cases at the expense of other areas of state government.

While Kentucky made historic progress that garnered national attention during the years of investment following KERA, those gains are in danger. With new standards for students and new evaluations for teachers, now more than ever, Kentucky must invest in its schools.

Lawmakers would do well to heed the words of Mr. Clark and begin the process of re-investing in Kentucky schools.  They should also view this year’s investment as a starting point and find ways in the future to continue significant investment in Kentucky’s schools and its future.

For more on Kentucky education politics and policy, follow @KYEdReport

On the New Evaluation System for Teachers

Lindsey Childers offers her thoughts on Kentucky’s Professional Growth and Effectiveness System (PGES) for teachers and administrators.

In short, she says that a well-thought-out development process and a measured rollout will strengthen the evaluation instrument when it is fully implemented in the 2014-15 school year.


Value-Added Caution

Lots of attention in the discussion around teacher quality focuses on value-added data and the ability to determine a teacher’s effectiveness from a single test score.

More recently, a study by researchers at Harvard has received lots of attention because it purports to indicate that replacing a bad teacher with a good one has significant lifetime impact on student earning potential.

Unfortunately, it seems none of the media fawning over this study know how to use a calculator.

So, I break it down here:

This is the study that keeps getting attention around teacher quality and student earning potential. It was even mentioned in President Obama’s State of the Union back in 2012.  It keeps getting cited as further evidence that we need to fire more teachers to improve student achievement.

Here’s the finding that gets all the attention: A top-5-percent teacher (according to value-added modeling, or VAM) can help a classroom of 28 students earn $250,000 more collectively over their lifetimes.

Now, a quarter of a million sounds like a lot of money.

But, in their sample, a classroom was 28 students. So, that equates to $8,928.57 per child over their lifetime. That’s right: NOT $8,928.57 MORE per year, but $8,928.57 more over their whole life.

For more math fun, that’s $297.62 more per year over a student’s thirty-year career for having had a VAM-designated “great” teacher vs. just an average teacher.

Yep, get your kid into a high value-added teacher’s classroom and they could be living in style, making a whole $300 more per year than their friends who had the misfortune of being in an average teacher’s room.

If we go all the way down to what VAM designates as “ineffective” teaching, you’d likely see that number double, or maybe go a little higher. So, let’s say it doubles plus some. Now, your kid has a low VAM teacher and the neighbor’s kid has a high VAM teacher. What’s that do to his or her life?

Well, it looks like this: The neighbor kid gets a starting job offer of $41,000 and your kid gets a starting offer of $40,000.
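The back-of-the-envelope arithmetic above can be checked in a few lines. This is just a sketch of the division, assuming the figures as described: a 28-student classroom, $250,000 in collective lifetime earnings, and a 30-year working career.

```python
# Check the per-student numbers behind the $250,000 headline figure.
# Assumptions (from the study as described above): 28-student classroom,
# $250,000 collective lifetime gain, 30-year working career.

classroom_gain = 250_000   # collective lifetime gain, top-5% vs. average teacher
class_size = 28
career_years = 30

per_student_lifetime = classroom_gain / class_size
per_student_annual = per_student_lifetime / career_years

print(f"Per student, lifetime: ${per_student_lifetime:,.2f}")  # prints $8,928.57
print(f"Per student, per year: ${per_student_annual:,.2f}")    # prints $297.62
```

Roughly $300 a year per student, which is the point: the headline number shrinks dramatically once it is spread across a classroom and a working lifetime.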

Wait, what? You mean VAM does not do anything more than that in terms of predicting teacher effect?

Um, no.

And so perhaps we shouldn’t be using value-added modeling for more than informing teachers about their students and their own performance, as one small tool they use to continuously improve their practice. One might even mention a VAM score on an evaluation, but one certainly wouldn’t base 35-50% of a teacher’s entire evaluation on such data. In light of these numbers from the Harvard researchers, that seems entirely irresponsible.

Perhaps there’s a lot more to teacher quality and teacher effect than a “value-added” score. Perhaps there’s real value added in the teacher who convinces a struggling kid to just stay in school one more year or the teacher who helps a child with the emotional issues surrounding divorce or abuse or drug use or any number of other challenges students (who are humans, not mere data points) face.

Alas, current trends in “education reform” are pushing us toward more widespread use of value-added data — using it to evaluate teachers and even publishing the results.

I can just hear the conversation now: Your kid got a “2” teacher; mine got a “4.” My kid’s gonna make 500 bucks more a year than your kid. Unless, of course, the situation is reversed next year.

Stop the madness. Education is a people business. It’s about teachers (people) putting students (people) first.

I’m glad the researchers released this study. Despite their spurious conclusions, the numbers tell us that we can and should focus less on a single value-added score and more on all the inputs, at all levels, that impact a child’s success in school and life.

As Kentucky considers teacher evaluation “reform,” caution should be used when deciding what (if any) role value-added scores will play in new evaluations.

For more on Kentucky education politics and policy, follow us @KyEdReport

Kentucky Touts ACT Gains

With today’s release of the ACT College and Career Readiness report, the Kentucky Department of Education is touting the fact that the state’s students are making continuous gains in terms of readiness.  The state points to three-year trends that show the number of Kentucky students hitting college/career ready benchmarks steadily (and slowly) heading upward.

The trend data is noteworthy because it establishes that while Kentucky still has work to do, the progress is steady and real.

What’s fascinating is that this progress has been made without any of the trendy reforms oft-touted by today’s education reform crowd.  Kentucky still has no charter schools.  There are no voucher schemes in the state or in any of its school systems.  Kentucky has yet to tie teacher evaluations or licensure to test scores.  In fact, Commissioner Holliday tweeted today that recent polling data on the issue of tying teacher evaluations to test scores was reason to take further pause before considering using scores in the evaluation process.

What that means for Kentucky kids is that they won’t be subject to a barrage of new tests used primarily for creating a number score for a teacher.  Instead, they can expect the same focus on high standards and strong teaching that has been the backbone of Kentucky education policy for more than 20 years now.

What’s even more telling, perhaps, is that in Tennessee, a state that has adopted a liberal charter enrollment policy, radically changed teacher evaluation, and recently passed new standards tying teacher licensure to test scores, there was no release today touting similar gains in college and career readiness.

In fact, if you simply look at head-to-head results, Kentucky students test higher (slightly) than Tennessee’s in 4 out of 5 categories.

What’s the difference? Instead of trying every trendy new reform and developing test-dependent policies, Kentucky has focused on rigor and investment.  The comparison of the two states is an important lesson for those in Kentucky who will call for vouchers or charters or score-based teacher evaluations in the 2014 legislative session.

Kentucky should stay the course, continue investing, and move its schools forward.

For more on Kentucky education policy, follow us @KyEdReport