By DR. ROBERTA LENGER KANG
As we near the end of the school year, our focus naturally begins to shift toward understanding our school’s data and making plans for next year, particularly as we sit down to develop the state-mandated Comprehensive Education Plan (CEP). This is where ESSA policies begin to hit close to home.
We’ve already explored the major differences between NCLB and ESSA, and the ways NYS is breaking down accountability into different components. We know that ESSA primarily measures effectiveness by comparing schools with one another, and that the number of accountability measures has doubled under ESSA. Now, the most important thing to understand is how these new components will be measured, and how those measurements determine accountability.
Statewide rankings and long-term goals
NYSED has developed two rating methods: statewide rankings, and annual and long-term goals.
Statewide rankings measure the following components:
Ranking performance across the state is exactly what it sounds like. For three of the seven components, the state will rank order the performance of every school and then assign a Performance Level on a scale of 1-4. Performance across the state will be ranked without any consideration of enrollment demographics. Large or small, vocational or specialized, urban, suburban, or rural: all schools across the state will be compared and rank ordered by performance.
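As a rough illustration of the rank-ordering idea, the sketch below sorts hypothetical school scores and assigns a Performance Level by quartile of rank. The quartile cut points and the school scores are assumptions for demonstration only; the article does not specify how NYSED maps ranks to Levels.

```python
# Hypothetical sketch: assign Performance Levels 1-4 by statewide rank.
# The quartile cut points here are assumed for illustration; the actual
# NYSED mapping from rank to Level is not specified in this article.

def assign_levels(scores):
    """Map each school's score to a Performance Level 1-4 by rank quartile."""
    ranked = sorted(scores.values())        # ascending: lowest performers first
    n = len(ranked)
    levels = {}
    for school, score in scores.items():
        position = ranked.index(score) / n  # fraction of schools ranked below
        levels[school] = 1 + min(3, int(position * 4))
    return levels

# Four hypothetical schools: levels follow rank order alone,
# not school size, type, or enrollment demographics.
print(assign_levels({"A": 55.0, "B": 72.0, "C": 88.0, "D": 95.0}))
```

Note that nothing in the computation looks at anything but the score itself, which mirrors the article’s point that demographics play no role in the ranking.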
The following components are measured by annual and long-term goals:
In addition to ranking school performance, NYSED has developed a system for statewide growth based on annual and long-term goals.
Here’s how it works:
For each of the four components listed above, NYSED has identified a statewide end goal to represent the ultimate expectations for schools in New York.
Then, using data from 2016 and 2017, the state identified baseline data points for every school. The difference between the Baseline and the End Goal is called the gap.
In order to close the gap between current school performance and the End Goal, the state has identified Long-Term Goals, which require schools to close 20% of the gap within five years.
The state divides the Long-Term Goals evenly across the five years to determine annual goals called Measures of Interim Progress or MIPs.
Measures of Interim Progress
MIPs are preset annual goals, established in five-year sets, that build toward the Long-Term Goal. There are two types of MIPs: an individual school MIP, aligned to the school’s baseline data, and a statewide MIP, aligned to the state’s baseline data. MIPs require schools to close an additional 4% of the gap every year, starting from their baseline (the 20% Long-Term Goal divided evenly across the five years). MIPs do not fluctuate based on year-to-year performance.
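The MIP arithmetic can be sketched briefly. The baseline and end goal below are invented numbers, not actual NYSED data; the sketch simply shows how closing 20% of the gap over five years yields five annual targets, each 4% of the gap above the last.

```python
# Illustrative sketch of the MIP arithmetic described above.
# The baseline and end-goal figures are hypothetical, not NYSED data.

def mips(baseline, end_goal, years=5):
    """Return the annual Measures of Interim Progress.

    The Long-Term Goal closes 20% of the gap between the baseline and
    the end goal; divided evenly across the years, each MIP rises by
    4% of the gap.
    """
    gap = end_goal - baseline
    annual_step = 0.20 * gap / years  # 4% of the gap per year
    return [round(baseline + annual_step * y, 2) for y in range(1, years + 1)]

# Example: a school with a 60% baseline and a 90% end goal.
# gap = 30, so each MIP climbs 1.2 points, reaching 66 in year five.
print(mips(60.0, 90.0))  # → [61.2, 62.4, 63.6, 64.8, 66.0]
```

Because the targets are computed once from the baseline, they stay fixed regardless of how the school actually performs in any given year, which is what the article means by MIPs not fluctuating.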
At the end of each year, every school will be measured by its ability to meet both its individual school MIP and the state MIP. If a school is performing below the statewide average, its school MIP will be the lower of the two goals; if it is performing above the statewide average, its school MIP will be the higher of the two.
Schools will be assigned a Performance Level 1-4 based on their ability to meet or exceed their MIPs.
In 2018, 95% of New York schools were designated to be in good standing for the 2018-19 school year. Performance data at the end of the 2018-19 school year will be used to determine the accountability status under ESSA’s system for the first time.
Every ESSA component, whether measured by statewide ranking or by MIPs, will receive a rating of Level 1 through 4, and those levels will be used to determine the accountability status of the school. The two accountability pathways are Comprehensive Support and Improvement (CSI) and Targeted Support and Improvement (TSI). While the pathway to accountability is the same for both, CSI is reserved for schools that are struggling to meet the needs of the “all students” group, while TSI can be triggered by the performance of individual subgroups (by ethnicity, students with disabilities, English Language Learners, or economically disadvantaged students).
Following the old saying, “three strikes and you’re out,” these are scenarios that trigger CSI or TSI:
So what’s next? With a solid understanding of the methods for evaluation, you can begin looking closely at your school’s data, your goals, and where you might be in danger of triggering TSI or CSI. With advance planning and support, you can use this data to inform your CEP goals, target areas for improvement next year, and highlight areas for increased professional development.
If you’d like help breaking down your ESSA data, contact us for a free consultation, or consider attending one of our upcoming Exploring ESSA: Unpacking State Accountability sessions at Teachers College.
In New York City, one of the most challenging areas for teacher evaluation is Danielson 3b: Questioning and Discussion. This component evaluates a teacher’s ability to facilitate instruction in a way that allows students to ask and answer higher-order questions, and to initiate and maintain peer-to-peer discussions. It also expects that virtually all students are engaged in the discussion.
The use of the word engage is particularly interesting. Most often, we interpret engagement to mean participation, and when we think of participation, we most often interpret this to mean talking. As a result, we spend a lot of time focused on how we can encourage every student to speak during a class discussion — and that’s a good thing. But is speaking the only way that students can engage?
While talking is an essential component of the discussion process, so is listening. If everyone is racing to speak, are students actually listening to each other, or are they quietly composing their comments in their mind and waiting for their turn? If their primary focus is on when they can speak, are they truly engaged? Are they learning anything from the dialogue?
Let’s broaden the definition of engagement to include both speaking and listening. Notice how our questions shift: How can we recognize active listening? How can we encourage active listening? How can we communicate our expectations around active listening to our students and the administrators who are completing the evaluations?
When my son was five years old, his Kindergarten teacher assigned the class 20 minutes of reading for homework every night. We would sit on the couch together, and he would read to me. We wouldn’t get more than two or three pages into the book before his mind would begin to wander; he’d start making silly jokes or pretend to get really sleepy. I tried to be persistent. I’d prop him up on my lap and suggest that we point at each word on the page together, sounding them out one by one. He would just sit silently.
I asked him what was wrong, and after some time in silence, he mustered the courage to whisper, “There’s a word on the page that’s bothering me.” That’s the word he used: bothering. It was as if the word was out on the playground taunting him to jump off the swing, or in the cafeteria ready to steal his lunch money. The word was bothering him.
This was the first time it occurred to me that reading is an emotional experience.
The second time it occurred to me was when I presented a workshop on reading complex texts at Teachers College. The workshop was designed for a group of middle school teachers from New York City who were embarking on a literacy initiative at their school. As part of my workshop, I wanted to explore what makes a text complex, and why. I passed out seven excerpts from seven different fields (legal, medical, literary, mathematical, computer science, crafting, and sports) and asked the teachers to read the texts and rank them from easiest to most difficult. While everyone was engaged in reading, I saw one teacher pick up one of the texts, promptly put it back down, and then push the paper all the way to the edge of the table, where it flew off and fell to the floor.
In debriefing the experience, I asked the teacher to share with the group the strong response he had to this text. He said, “The moment I looked at it, I knew I wasn’t going to be able to understand it, and it made me feel sick to my stomach. I just wanted to get the paper as far away from me as possible.”
Reading is an emotional experience
We’ve all had this happen to us from time to time. For some, it’s reading an old English poem; for others, it’s working through a mathematical proof or the instructions for filling out IRS paperwork. The big a-ha moment for us as educators is that the reaction we might have to reading complex texts may be the same reaction our students are having daily when we assign texts in our content areas.
Here’s something else I learned from this workshop: there isn’t one type of text that’s easy, and another type of text that’s difficult. I’ve conducted this same workshop with hundreds of educators and every time, I find that different people find complexity in different texts. Our experiences with text complexity are typically based on four criteria:
It’s these four criteria that inform the emotions we feel while reading. The more criteria we’re able to match to the text, the easier it seems; the easier the text is to read, the better we feel about ourselves; and the better we feel, the more our confidence grows and our interest in reading increases. Conversely, the fewer criteria we’re able to match, the more difficult the text seems and the worse we feel about ourselves: our confidence decreases, and our interest in reading declines.
Helping students find meaning in texts
It’s possible we’re inadvertently creating spaces in our classrooms where students become less interested, less confident, and less comfortable with reading because of these emotional interactions with “difficult” texts. But there are some simple solutions that we can implement if we carefully consider the four criteria for making meaning:
These four steps are not always easy, but if we’re planning with these essentials in mind, we have the power to rapidly transform resistant or reluctant readers in any content area.