Navigate a four-part cycle that will help you develop effective assessments and determine students' needs.
Adapted by G. Faith Little from CPET’s Handbook: Design Your Own ELA Assessments, by Courtney Brown and Dr. Roberta Lenger Kang.
As educators, we assess our students in multiple ways for many purposes: to evaluate what they know and can do based on the skills and content that we teach them, to make instructional decisions, and to reflect on teacher practice. Quizzes, in-class activities, homework, and writing assignments are all opportunities to find out how our students are achieving. By analyzing and interpreting the results of these tasks, we can determine the needs of students and adjust our teaching to help students succeed.
Periodic assessments are low-stakes assessments designed to provide timely and detailed information on students’ strengths and weaknesses, as well as their progress over time. Teachers use the results from these assessments as data to inform instruction. The results can also be used to start and deepen communication with parents/caregivers, who can support their students’ learning outside of the classroom.
Establishing learning goals
The assessment and rubric development cycle begins by establishing learning goals. Wiggins and McTighe (2005) pose these helpful questions: "What content is worthy of understanding? What enduring understandings are desired?" Addressing these questions will help teachers establish learning goals for the entire year, as well as for individual units. Periodic assessments should be designed to measure students’ progress toward these learning goals and relate directly to the content of teachers’ instruction, i.e., books, plays, stories, etc. In ELA, for example, these goals should relate to the reading process, knowledge of literary conventions, making meaning (analysis), and communicating in writing. When teachers develop learning goals together, there is cohesion and focus in how the school supports students’ learning.
In any content area, learning goals specify the crucial understanding teachers expect students to develop and the key skills or competencies students should demonstrate. Learning goals for periodic assessments can be developed by teachers, often working together within a school, and referencing important resources, including state mandated learning standards and performance indicators, schoolwide learning standards, disciplinary criteria, and teachers’ own expectations for student performance.
Articulate learning goals by answering the following:
Developing assessments and rubrics
Formative assessments are those that focus on the students’ performance as they develop important knowledge and skills — they’re meant as benchmarks or milestones along the journey rather than the final destination. Teachers can use the data from these assessments to determine future instruction.
Periodic assessments should be aligned with student performance standards, learning goals, and the curriculum in each unique learning community. While each assessment (Fall, Winter, and Spring) may look different, designers should carefully consider how the assessments measure students’ growth throughout the entire year. These assessments will provide valuable feedback for teachers to improve instruction, align to standards, and measure student growth. These curriculum-embedded assessments have great potential to inform teachers and schools of the students’ progress toward established standards. Curriculum-embedded means an assessment is designed to reflect the actual curriculum being taught in the class. This is in contrast to most standardized tests, which generally do not pertain to the curriculum taught in individual classrooms. The chosen approach often influences the type of data accumulated.
Each assessment maintains the same format and requirements. Many teachers have used similar persuasive writing prompts with the same requirements for each assessment. This approach allows for consistency and assessment routines.
Since each assessment is very similar, teachers are looking to see students’ scores increase on each assessment. Longitudinal score results are reliable because the assessments are similar and the same rubric is used for each one. Teachers can identify areas where scores do not increase as areas in need of additional instruction.
The assessments become progressively more difficult, to reflect the growing knowledge and abilities of students. The assessments are designed to reflect the latest content unit as well as yearlong learning goals. This approach allows for flexibility and takes a snapshot of student learning.
Since each assessment gets progressively harder, maintaining scores indicates students are learning and growing. Score results directly reflect the most recent instruction and show what information students did not fully grasp. Teachers can make determinations about what should be reviewed or revisited from a new approach.
The assessments are inextricably connected to the classroom units and allow for students to express learning in multiple forms. Project-based assessments are designed to reflect the latest content knowledge through authentic (real-world) projects.
Since many of these authentic assessments include a publication or performance, an early draft of the project may be used as the assessment because it represents the students’ own work without additional assistance. Teachers see scores as reflective of “live knowledge” and immediately use the data to identify important lesson topics and share feedback with students so they can revise.
The assessments are designed to measure growth in specific areas, based on information learned by analyzing the data and student work. Responsive assessments attend to the needs presented by the students and they inform specific areas where instruction should be elaborated.
Since these assessments are designed as a response to the data, the support structures in each assessment may increase or decrease, depending on the time of year. Areas where students are scoring poorly may indicate that increased support is necessary for the next assessment. When analyzing the data, teachers are primarily interested in how students are performing in these specified areas where they’ve made changes in an effort to increase student performance.
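Under the consistent-format approach described above, where similar assessments are scored with the same rubric, the longitudinal analysis can be sketched in a few lines of code. The rubric dimensions and scores below are hypothetical; the point is simply to flag dimensions where the class average does not rise from one administration to the next, marking them as areas in need of additional instruction.

```python
# Hypothetical class-average rubric scores (1-4 scale) across three
# administrations (Fall, Winter, Spring) of a similar assessment
# scored with the same rubric.
scores = {
    "thesis":       [2.1, 2.6, 3.0],
    "evidence":     [2.4, 2.4, 2.3],   # flat/declining trend
    "organization": [2.8, 3.0, 3.3],
}

def flag_stalled_dimensions(scores):
    """Return dimensions whose class average did not increase at every step."""
    return sorted(
        dim for dim, trend in scores.items()
        if any(later <= earlier for earlier, later in zip(trend, trend[1:]))
    )

print(flag_stalled_dimensions(scores))  # ['evidence'] -> revisit this area
```

A spreadsheet can do the same job; the value is in looking dimension by dimension rather than only at overall scores.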
The rubric, a scoring guide that uses descriptors for every performance level, is an essential component of any assessment. While rubrics may come in many shapes and sizes, the most effective rubrics are those that encapsulate the focus dimensions of the assessment and clearly describe the strengths of the performance at every level. Rubrics are most effective when they are assessment-specific because they are created for the precise task and requirements of the assessment, unlike all-purpose rubrics, which are often broad or ambiguous to account for a wide variety of assessments.
The following ten criteria (adapted by Dr. Roberta Lenger Kang from the work of Dennie Palmer Wolf of the Rethinking Accountability Initiative at the Annenberg Institute for School Reform at Brown University) mark the foundations for creating an effective rubric that reliably evaluates student work:
Frequent pitfalls of rubric design
Too much / too little information
With too much information, the rubric bleeds onto too many pages, the expectations go barely read, and the rubric becomes difficult to use. With too little, too many qualities of the assessment are left undefined and therefore difficult to score. Consider aiming for 3-6 dimensions and performance levels, with 3-5 consistent qualifying statements for each.
Vague, ambiguous, confusing or contradictory language
Consider using descriptions that clearly explain what the student did.
Professional or exclusive language vs. inclusive language
Be careful not to exclude students from understanding the rubric by using ultra-sophisticated or academic language. The most popular buzzwords don’t always make the best rubric descriptors. Use student-friendly language that capitalizes on words and phrases commonly used in the classroom.
Expectations for assessment misaligned with the rubric
Be sure to align the rubric with the learning goals and the assessment. It’s crucial that designers pay close attention to the assessment requirements and the rubric language because if something is not on the rubric, it cannot be used as a factor for evaluation.
Scoring is inconsistent with performance levels
If scores seem inconsistent with the performance levels (for example, a student earns “Proficient” but receives a score of only 2 or 65%), it may be necessary to revise how scores are assigned to each level. Consider testing the rubric’s scoring system to be sure numeric scores or percentages are accurate to their performance levels.
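Testing a rubric scoring system can be as simple as two checks: the numeric scores assigned to the levels should rise with performance, and the passing level should actually earn a passing score. The level names, percentage bands, and passing threshold below are invented for illustration.

```python
# Hypothetical mapping from performance levels (in increasing order)
# to percentage scores.
levels = ["Beginning", "Developing", "Proficient", "Advanced"]
percentages = {"Beginning": 55, "Developing": 65, "Proficient": 80, "Advanced": 95}

def check_scoring(levels, percentages, passing_level="Proficient", passing_pct=65):
    """Verify that percentages rise with performance level and that the
    passing level earns at least the passing percentage."""
    ordered = [percentages[lvl] for lvl in levels]
    monotonic = all(a < b for a, b in zip(ordered, ordered[1:]))
    passing_ok = percentages[passing_level] >= passing_pct
    return monotonic and passing_ok

print(check_scoring(levels, percentages))  # True: this mapping is consistent
```

A mapping that sent “Proficient” to 60% would fail the second check, which is exactly the mismatch described above.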
All assessments must be scored in order to produce the data used to inform instruction. When scoring assessments, we recommend that schools use a team approach, which may include both teachers from the specific content area and teachers from other disciplines. This promotes school-wide collaboration, builds community, and supports reading and writing across the disciplines.
Team members should prepare to score by taking part in a “norming” process: reviewing and discussing the task to clarify what students are asked to do and what teachers expect to see in their work, and becoming familiar with the rubric. Next, it is useful to have a “norming discussion”: everybody reads and discusses several samples of student work from the assessment task, each teacher scores the work individually, and then group members share scores and discuss how they used the rubric, citing evidence from the papers to justify their scoring. Once teachers feel comfortable using the rubric, they should score the students’ work from the assessment. We suggest building in a “reliability check” to make sure that students’ work is being scored fairly and consistently, so that the data will be reliable.
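One simple form of the “reliability check” suggested above is to have two teachers score the same set of papers and compare results. The names and scores below are hypothetical, and exact agreement is only one of several possible agreement measures, but it gives a quick sense of whether the rubric is being applied consistently.

```python
# Hypothetical scores (1-4) that two teachers gave the same five papers.
teacher_a = [3, 2, 4, 3, 2]
teacher_b = [3, 2, 3, 3, 2]

def exact_agreement(a, b):
    """Fraction of papers on which the two scorers gave identical scores."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

rate = exact_agreement(teacher_a, teacher_b)
print(f"Exact agreement: {rate:.0%}")  # 80% -> discuss the one mismatch
```

Papers where scores diverge are good candidates for a follow-up norming discussion.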
Periodic assessments are most useful when scored as soon after administration as possible, so the data is relevant and timely. The longer the gap between students taking the assessment and the data report, the less useful the data will seem to teachers and administrators in informing instruction.
SAMPLE: An approach to collaborative scoring
Two important common principles: (5-10 minutes)
Clarifying the task (20 min)
Norming (30-45 min)
Distribution of copies of assessment #1
Scoring (duration depends on task content / length)
Score comparison (duration depends on how scores match up)
Mediation (if necessary)
All assessments with final scores should be returned to facilitators.
Woohoo! We did it!
Data comes in many forms and is used every day by teachers to help plan instruction and adjust their teaching. Quizzes, essays, homework, standardized tests, and attendance records are all data. Teachers’ observations of students at work are also an important form of data, as are students’ responses and behavior. No one form of data will give a complete picture of a student’s achievement. There is no shortage of usable data; what matters is how wisely we collect and analyze it to inform teaching and learning. This step in the cycle specifies a systematic process, one that allows us to look closely at and analyze the data.
A careful analysis of the data will yield important information about our students’ strengths and weaknesses. Based on this information, schools and teachers can develop or revise learning goals, and teachers can plan specific instruction for a class and/or individual students to best address students’ needs. We suggest that data discussions be carried out among specific content area teachers and with colleagues from other disciplines. This will allow teachers across the curriculum to identify and address students’ needs — for example, organizing an argument in writing, using evidence to support a thesis, articulating a rationale for a solution to a problem or a hypothesis, etc.
Data is any form of information collected together for reference and analysis. Another way to understand data is as evidence of desired results. In the case of periodic assessment data, scores from student work provide data for understanding how students are performing relative to specific learning goals. This is where curriculum-embedded assessments can be most powerful. Designers actually have to anticipate the kinds of data that will be useful as part of developing a unit of study. The data may be used to answer questions about whole class performance and individual student performance. The data reports will also help teachers to adjust instructional strategies to address students’ needs — e.g., what might need to be retaught or taught differently?
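The two kinds of questions named above — whole-class performance and individual performance — can be answered from the same score table. Everything in this sketch (student names, learning goals, scores, the reteach threshold) is hypothetical; the point is that one dataset supports both views and can surface goals that may need to be retaught.

```python
# Hypothetical scores (1-4) per student per learning goal.
scores = {
    "Ana":   {"argument": 3, "evidence": 2, "conventions": 3},
    "Ben":   {"argument": 2, "evidence": 2, "conventions": 4},
    "Carla": {"argument": 4, "evidence": 3, "conventions": 3},
}

def class_averages(scores):
    """Average score per learning goal across the whole class."""
    goals = next(iter(scores.values())).keys()
    return {g: sum(s[g] for s in scores.values()) / len(scores) for g in goals}

def reteach_candidates(scores, threshold=2.5):
    """Goals whose class average falls below the threshold."""
    return sorted(g for g, avg in class_averages(scores).items() if avg < threshold)

print(class_averages(scores))
print(reteach_candidates(scores))  # ['evidence'] averages ~2.33 -> revisit it
```

Individual rows answer the individual-student questions (Ben may need support with argument), while the averages answer the whole-class ones.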
Continuing the cycle
Assessing students is most valuable as part of a cycle that begins with establishing student learning goals and involves developing curriculum-embedded assessments and rubrics, administering the assessments, scoring them, and analyzing the data they produce to inform instruction. Keep the cycle going from baseline to end of year assessments!