Improve Your Assessments Through Reflection


If you have been following our blog series on the 10 Elements that Compound Student Success, you know we’ve covered nine of the ten elements that will help you build formative assessments that guide instruction. Click the links below to return to the post about each element.

Element one: Learning Outcomes

Element two: Number of Items

Element three: Item Types

Element four: Arrangement of Items

Element five: Assessment Stretch

Element six: Assessment Rigor

Element seven: Duration of Assessment

Element eight: Clear Directions

Element nine: Deconstructing the Standards

Today we’ll be covering the final element that will help you craft great formative assessments: reflection and revision.

Element Ten: Reflection and Revision

Acknowledging that assessments will never be perfect the first time, no matter how much effort educators put into them, is essential to making them better. They may not be perfect after the second or third versions have been administered, either.

But with practice, assessments will improve and the quality of information gathered will grow. The point is to focus on incremental improvements, because formative assessment is an iterative process, with the opportunity for continual adjustment and improvement.

The reasons to modify assessments before and after each administration are:

1. Educators teach every lesson slightly differently each time.
2. Standards, curricula, and pacing guides change.
3. Educators teach different students each semester or year.
4. Students come to educators with different levels of knowledge and skill.
5. Teaching resources and materials are added and removed from classrooms.

Analyzing assessment results gives teachers insight into not only what the students mastered, but also into how the test items functioned.

The following story is one example of the benefits of analyzing assessments post-administration. It was the end of the nine-week grading period, when the benchmark test was given. On the exam, every first-grade student in a South Carolina classroom failed to correctly answer an item assessing the proper use of calendars, a skill the teacher had been practicing with the class every morning for months.

Upon review of the item, the teacher quickly realized that the way the item assessed the skill was not how she was teaching that skill.

Although the students failed to correctly answer the item, the teacher learned some important lessons after reviewing the assessment:

1. The teacher learned how the calendar skill would likely be assessed on the state summative assessment in second grade.
2. She learned a different way to teach that skill to her students.
3. She added an item to the test that assessed the skill in a way that was more closely aligned to how she taught it.

Get all of the in-depth lessons on creating formative assessments that guide instruction here.

 

Deconstructing the Standards


If you have been following our blog series on the 10 Elements that Compound Student Success, you know we’ve covered the first eight elements that will help you build formative assessments that guide instruction. Click the links below to return to the post about each element.

Element one: Learning Outcomes

Element two: Number of Items

Element three: Item Types

Element four: Arrangement of Items

Element five: Assessment Stretch

Element six: Assessment Rigor

Element seven: Duration of Assessment

Element eight: Clear Directions

Now we’re going to take a look at the next element that will help you craft great formative assessments: deconstructing the standards.

Element Nine: Deconstructing the Standards


As many educators know, standards are often written as complex, overarching statements that can be interpreted in any number of ways.

For example, three educators teaching the same grade level at the same school may interpret the learning outcomes for a standard in three very different ways. Collaborating with other teachers teaching the same standards is a proven way to mitigate this issue and establish a shared understanding of standards and their associated learning outcomes.¹

When teachers have a deep and shared understanding of the standards being assessed, it drives the selection of items that best meet the learning targets described in the standard. It’s also a way to ensure that the assessment is tightly aligned to the standards and the taught curriculum.

Key Ideas and Details:

[Image: example standard from the Key Ideas and Details strand]

The standard above provides a good example of words within a standard that are open to teacher interpretation.

“Explicitly detail the observable evidence that students are expected to demonstrate for each standard to facilitate a shared understanding of the essential and observable characteristics of student work.” (Huff, Steinberg & Matts, 2010)

Once educators identify the skills and standards they will assess, it’s time to determine the learning outcomes: in other words, to determine what the learner will know and be able to do by the end of a lesson, unit, course or program.

For the educator to feel confident that the student has mastered the content, the student will need to be able to demonstrate his or her knowledge in a way that provides the educator with actionable data. This can be accomplished by looking at a sampling of student responses to see if the assessment is extracting the intended information.


When teachers create common, formative assessments, they can easily compare data across classrooms as well as develop a more uniform assessment program.

Let’s take the following example. In a departmentalized fifth grade, the math teacher created an assessment based on his understanding of the learning outcomes in a standard. Unfortunately, his interpretation of the learning outcomes was different from the state’s interpretation of those same learning outcomes. Therefore, students were unsuccessful on the state assessment. This is a tragic example of deconstructing standards in isolation. (At least) two heads are always better than one when it comes to deconstructing the standards.


“Unpack the standards. Carefully analyze the verbs in the standards to determine their meaning for assessment and instruction.” (Wiggins & McTighe, 2012)

Ready to read more? Get the complete eBook: The 10 Elements that Compound Student Success here.

The Importance of Giving Clear Directions


If you have been following our blog series on the 10 Elements that Compound Student Success, you know we’ve covered the first seven elements that will help you build formative assessments that guide instruction. Click the links below to return to the post about each element.

Element one: Learning Outcomes

Element two: Number of Items

Element three: Item Types

Element four: Arrangement of Items

Element five: Assessment Stretch

Element six: Assessment Rigor

Element seven: Duration of Assessment

Now we’re going to take a look at the next element that will help you craft great formative assessments: providing clear directions.

Element Eight: Clear Directions


Directions are an integral component of any assessment.

Directions, like duration of an assessment, can affect the validity of assessment results. If a student is unclear about how to answer questions properly, his or her results may not accurately reflect his or her knowledge.

Here are four guidelines for writing clear directions:

1. Treat directions as an instructional activity by including explanations.
2. Use simple and grade-level-appropriate vocabulary.
3. Provide examples so that the student knows exactly what’s expected.
4. Make sure all students understand the directions before getting started.¹

Examples:

Weak directions

Complete this section. Answer each question completely.

Great directions

This section contains only hot spot items. These items require students to select ALL of the correct responses. The items also require students to select at least two answer options. Some items may require students to select all answer options. See a sample hot spot item requiring the selection of three answer options below:

[Image: sample hot spot item requiring the selection of three answer options]

Ready to read more? Get the complete eBook: The 10 Elements that Compound Student Success here.

How Long Should I Make My Formative Assessment?


If you have been following our blog series on the 10 Elements that Compound Student Success, you know we’ve covered the first half of the elements that will help you build formative assessments that guide instruction. Click the links below to return to the post about each element.

Element one: Learning Outcomes

Element two: Number of Items

Element three: Item Types

Element four: Arrangement of Items

Element five: Assessment Stretch

Element six: Assessment Rigor

Now we’re going to take a look at the next element that will help you craft great formative assessments: the duration.

Element Seven: Duration of Assessments

Teachers teach and students learn within the minutes and hours allotted for the school day. There is rarely enough time in the school day to teach everything the teacher has planned.

However, assessment is an important part of the teaching and learning cycle. Lack of time is a variable that can negatively impact the reliability and validity of an assessment. For example, students who are required to miss part of their lunch or free time to finish an assessment may feel punished and, therefore, may not do their best work.

Determining the duration of formative assessments

Most formative assessments should be able to be started and completed within a single class period or, at the elementary level, within the time allotted for that subject. The time required for formative assessments varies widely and depends on the type of assessment administered. Be aware that results from tests that span days, skip days, or are finished during lunch, recess, or after school are far less reliable. Teachers are most likely to get a true picture of what their students know if students can complete an assessment in less than 50 minutes. Younger students in grades K-2 will likely need shorter assessments taking 20-30 minutes or less. A very brief assessment can often provide a much clearer picture of student learning than a very long assessment.

To formatively assess a considerable amount of content, break it up into several short assessments, and provide opportunities for students to show what they know outside of the traditional written format.

Incorporate a variety of formative assessments that involve a high level of student engagement. Assessment does not need to be a dreary and dreaded experience for students or teachers.

The following are a few examples of formative assessments that engage students: debate, role-play, presentations of student work, drawing graphic representations of a concept or idea, building a model, or writing a song. Another idea to consider is using microassessments, such as exit tickets, where students are asked to answer only one or two essential questions immediately following a lesson.

Preparing students for the duration of summative assessments

When preparing students to take longer, summative assessments, different assessment strategies are required. Many students struggle with staying focused and on task when taking longer assessments.

For example, state summative assessments in reading can be especially challenging. To prepare students to be successful on longer assessments, create formative assessments that require students to read several passages of varying lengths. Follow each passage with three to five questions that require students to go back into the text to find evidence to support their answer. This format is a great way for teachers to assess students’ reading comprehension.

How many passages are needed to prepare students for the reading assessment?

Start at the beginning of the year with two or three passages and build up to assessments with six to eight passages to help students develop the endurance they need to do their best work all the way through to the end. Advance planning for these longer assessments is necessary, as they often require extended class periods to complete. Lengthier assessments may require between 90 and 120 minutes. Administering these longer assessments several times throughout the year is typically sufficient to give teachers adequate information about students’ abilities to focus and stay on task throughout a longer assessment.

Ready to read more? Get the complete eBook: The 10 Elements that Compound Student Success here.

Why Rigor Is A Crucial Element for Assessment Success



If you have been following our blog series on the 10 Elements that Compound Student Success, you know we’ve covered the first half of the elements that will help you build formative assessments that guide instruction. Click the links below to return to the post about each element.

Element one: Learning Outcomes

Element two: Number of Items

Element three: Item Types

Element four: Arrangement of Items

Element five: Assessment Stretch

Now we’re going to take a look at the next element that will help you build great formative assessments: rigor.

Element Six: Assessment Rigor


Assessment rigor is often confused with assessment difficulty.

Difficulty can present itself in several ways. For example, a test or quiz can assess learning outcomes accurately, but if it contains too many items, there’s a chance that students won’t finish because they gave up. The test was too long.

Difficulty can also present itself when assessment items are at a low level of cognitive complexity but require the student to have memorized large quantities of information.

For example, a relatively easy, recall-level question is “Who is the president of the United States?” A much more difficult question, but still at the recall level, is “List all of the past United States presidents.” The second item is not more rigorous; it is more difficult.

Robyn Jackson and Allison Zmuda’s research on the topic of student engagement points out that students engage with assessments that require them to think and solve problems, create solutions, or present compelling answers to complex questions. They disengage from assessments that contain items that ask simple questions that are the same or similar to each other in form or skill assessed.¹

Common examples of these types of assessments are vocabulary, multiplication, and nonfiction fact assessments. Assessments that give students the opportunity to apply these facts to novel, complex, real-life situations are those that students appreciate and work hard on to demonstrate their knowledge.

When building formative assessments that drive instruction, always select item types that measure the learning outcomes in the most rigorous, engaging way possible.

Research from Jenson, McDaniel, Woodard & Kummer (2014) sums it all up:

“studying for and answering higher-level items (application, analysis, and evaluation) require students to master information at lower levels (memory and comprehension) as well. Consistent experience engaging with high-level classroom assessments helps students improve their performance on subsequent higher-level and lower-level items. When students are only exposed to lower-level items on classroom assessments, they will not perform as well on items requiring application, analysis, and evaluation. In addition, easy assessments with lower-level items are less likely to foster critical thinking, application of knowledge, or the acquisition and retention of factual information when compared to higher-level assessments.”

Ready to read more? Get the complete eBook: The 10 Elements that Compound Student Success here.

Assessing All Levels of Student Mastery of the Content


If you’ve been following along in our blog series on the 10 Elements That Compound Student Success, you know we are now midway through the series. To read the lessons from the first four elements, click the links below.

Element One: Learning Outcomes

Element Two: Number of Items

Element Three: Item Types

Element Four: Arrangement of Items

Element Five: Assessment Stretch


You’re probably wondering … what is “stretch”? When it comes to building formative assessments that will guide instruction, the stretch concept describes the need to incorporate items that assess all levels of student mastery of the content. So how do we get here? Let’s take a look into the classroom…

Most classrooms are not filled with homogeneous groups of students. They contain students who have a wide range of knowledge and skills. Assessments must produce learning data for students across the spectrum, from those who have little or emerging knowledge of the content to those who have demonstrated mastery of the learning outcomes. This concept is called assessment “stretch”.

Tests should provide the teacher with content mastery data for struggling and gifted learners alike. It’s no easy task, but the data is clear: students want to be challenged, and they rise to meet those challenges when given the opportunity. Make assessments rigorous, but be sure to include items that will produce information about every student along the mastery continuum.

Example:
In the example below, stretch is the distance between Level 1 (Remember) and Level 6 (Create) on Bloom’s Taxonomy.

Stretch is the span of cognitive complexity across an assessment.

Note that the assessment shown has items that span all levels of complexity to keep all students challenged and engaged.

[Image: stretch visual showing assessment items spanning Bloom’s Taxonomy Level 1 (Remember) through Level 6 (Create)]
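If it helps to see the idea concretely, here is a minimal sketch (the Python below and its list of item levels are purely illustrative, not part of any assessment platform) of stretch as the span between the lowest and highest cognitive levels on an assessment:

```python
# Hypothetical assessment: each item is tagged with its Bloom's Taxonomy level (1-6).
item_bloom_levels = [1, 1, 2, 3, 3, 4, 5, 6]

lowest = min(item_bloom_levels)   # 1 = Remember
highest = max(item_bloom_levels)  # 6 = Create
stretch = highest - lowest        # span of cognitive complexity across the assessment

print(f"This assessment stretches from level {lowest} to level {highest} (a span of {stretch}).")
```

An assessment whose items cluster at only one or two levels would have a much smaller span, and would tell the teacher far less about students at either end of the mastery continuum.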

Ready to read more? Get the complete eBook: The 10 Elements that Compound Student Success here.

How to Arrange the Items on Your Formative Assessment


This is the fourth post in our blog series uncovering the 10 Elements that Compound Student Success, which will teach you how to build formative assessments that guide instruction and produce data that you can use to make effective instructional decisions.

In element one, we focused on the importance of learning outcomes as the first step to building formative assessments. In element two we took a look at the ideal number of items for your assessments, and I showed you a trick for getting it just right. Element three covered how many different types of items to include.

Naturally, that leads us to a question many educators ask when building assessments: “How should I arrange my assessment items in order to get the most reliable data?”

Let’s explore the answer to this question with element four’s “lesson”.

Element Four: Arrangement of Items


When building a formative assessment, it’s easy to overlook the arrangement of items as a crucial part of the equation. But organization of assessment items is actually key to getting reliable results.

Research from Waugh & Gronlund (2013) concludes that “items should be arranged in terms of increasing difficulty. This has a motivational effect on students and will prevent them from becoming overwhelmed by difficult items at the beginning of the assessment.”

Starting out with a couple of easy questions helps students ease into the assessment, get their mental cylinders firing, and build their confidence to tackle the medium items and, eventually, the most challenging ones, which should be located at the end of the assessment. Here’s an example of an assessment that has items arranged in order of increasing difficulty.

[Image: example assessment with items arranged in order of increasing difficulty]

Follow these additional guidelines to ensure that the way the test is laid out doesn’t negatively impact student performance.

Group Together:
1. Items that measure the same learning outcome (knowledge, comprehension, application).

2. All items that require the same directions, making it possible to provide only one set of directions for each item type (see the sketch below).
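As a rough illustration only (the item records and field names below are hypothetical, not drawn from any particular assessment platform), one way to apply both guidelines is to group items by type and then order each group from easiest to hardest:

```python
# Hypothetical item records: each item has a type (which determines its directions)
# and an estimated difficulty (0 = easiest, 1 = hardest).
items = [
    {"id": "Q1", "item_type": "multiple choice", "difficulty": 0.7},
    {"id": "Q2", "item_type": "multiple choice", "difficulty": 0.3},
    {"id": "Q3", "item_type": "hot spot",        "difficulty": 0.9},
    {"id": "Q4", "item_type": "hot spot",        "difficulty": 0.5},
]

# Group items that share the same directions (item type), then arrange each
# group in order of increasing difficulty so students ease into the assessment.
ordered = sorted(items, key=lambda item: (item["item_type"], item["difficulty"]))

for item in ordered:
    print(item["id"], item["item_type"], item["difficulty"])
```

In practice, the difficulty value might come from the teacher’s judgment or from how students performed on the item in a previous administration.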

Ready to read more? Get the complete eBook: The 10 Elements that Compound Student Success here.


Item Types to Include in Your Assessment


In the first blog of this series, we covered how formative assessment data can guide instruction and improve achievement for all students. Element one was specifically devoted to learning outcomes and specifying early and often in the teaching-learning cycle how students will be expected to demonstrate mastery of content. Element two focused on the number of items that should be on a formative assessment.

So now we move to element three: which item types should be included in your assessments.

Element Three: Item Types

Teachers and students are no longer stuck with using only multiple-choice items with four answer options.

Innovative technology solutions allow students the opportunity to perform actions other than simple clicks of a radio button to select the correct answer. Twenty-first-century assessments offer many types of items for teachers to choose from when creating a test. Certain content lends itself to specific item types.

Take a look at some of the most readily available “technology-enhanced” item types:

Enhanced Multiple Choice
Spices up the traditional four-option multiple-choice item by increasing the number of answer options provided.

Hot Spot
Student is required to select all correct answer options. When assessing content that has a group of factors, indicators, or elements, a hot spot item allows students to select more than one answer option as correct.

Drag and Drop
Student drags answers or questions into appropriate area on a graphic. For example, when assessing a concept that requires students to be able to identify and label parts of a whole, the drag-and-drop item type is most appropriate.

Fill in the Blank
Student types in his or her answer. A fill-in-the-blank item type is a great choice when you want students to generate the answer instead of selecting from a list of possible answers. For example, fill-in-the-blank item types are great for assessing measurement and computation skills in math and science.

Constructed Response
Student is required to create his or her answer in the space provided.

Constructed response items that require students to write their response to a prompt are very valuable when you want students to evaluate, analyze, and create extended written responses.

Constructed Response with Equation Editor
Student is required to create his or her answer using an equation editor.

Assessments that provide a variety of item types and allow students to show what they have learned in several formats keep them on task and engaged.

Ready to read more?! Get the complete eBook: The 10 Elements that Compound Student Success here.

This is the Number of Items Your Formative Assessment Should Contain



In the first blog of this series, we covered how formative assessment data can guide instruction and improve achievement for all students. Element one was specifically devoted to learning outcomes and specifying early and often in the teaching-learning cycle how students will be expected to demonstrate mastery of content.

From here, let’s move on to element two out of the ten that compound student success: the optimal number of items that should be on a formative assessment.

Element Two: Number of Items


Teachers often wonder if there is a preferred number of items per standard and assessment that will grant them enough information to determine student mastery.

So, what’s the magic number?

First, it’s helpful to figure out factors such as the volume of information contained in a standard, the number of learning outcomes, and the levels of complexity within the learning outcomes.

Make sure that the number of items gives the teacher enough information to know that the student has mastered the content, without being redundant.

It will be obvious when there are too many questions on the assessment, because the information will start to feel redundant. A multitude of items that essentially ask the same question (in different ways) serve as practice of a skill for the student, rather than as an indicator of skill mastery. Less is more.

For example, a math test that asks a student to solve 30 double-digit addition problems does not give the teacher more information about the student’s ability to grasp the skill than an assessment with five to 10 double-digit addition items would. It serves as practice of the skill. This type of repetitive-skills assessment disengages the student and is better off reserved for homework assignments to practice improving the skill.

Cognitive complexity

Good assessment items typically have a level of cognitive complexity assigned to them. Cognitive complexity is the level at which a question measures a student’s knowledge. Two of the most commonly used assessment taxonomies are Bloom’s Taxonomy and Webb’s Depth of Knowledge.

The higher the question is on the taxonomy scale, ranging from one to six for Bloom’s and one to four for Webb’s, the more thinking is required to answer the item correctly. Assign levels of Bloom’s Taxonomy and Webb’s Depth of Knowledge to assessment items so that there is a wide range of items, varying in complexity, on the assessment.

How many items should be used for each level?

As a rule of thumb, for learning outcomes that require a low level of cognitive complexity, such as recall or comprehension of a fact or concept, one or two items are sufficient. Why? Because these types of questions are generally basic and require simple answers: for example, asking kindergarten students to list the vowels in the alphabet.

However, when using a single item to assess a skill, it is highly recommended that educators select an item type that reduces the likelihood of a student simply guessing the correct answer. For more complex concepts and skills, two to four items are typically required.

For example, when asking a true-false question, the student has a 50/50 chance of guessing the correct response. It is risky to rely on the results of a single item to assess student mastery of the skill. A better choice would be a multiple-choice or technology-enhanced item type, like the hot spot item type, which requires the student to select all correct answers in order to receive credit.
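To put rough numbers on that guessing risk, here is a small worked example (the five-option hot spot format is an assumption for illustration, not a fixed rule):

```python
# Chance of guessing correctly, by item type (option counts are illustrative).
true_false = 1 / 2            # one of two options
multiple_choice = 1 / 4       # one of four options

# Hot spot with 5 answer options where the student must select exactly the
# correct combination, and any non-empty combination could be chosen.
hot_spot = 1 / (2**5 - 1)     # 31 possible non-empty selections

print(f"True/false:      {true_false:.0%}")
print(f"Multiple choice: {multiple_choice:.0%}")
print(f"Hot spot:        {hot_spot:.1%}")
```

Even under these simple assumptions, requiring students to select the exact combination of correct answers drops the odds of a lucky guess from one in two to roughly one in thirty.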

The type of item used to assess a standard also affects the number of items required to determine mastery. For example, a constructed response item may ask a student a multilayered question that requires him or her to demonstrate several learning outcomes in his or her response, whereas a question that requires a student to solve a problem and select all possible correct answers may require an additional question testing the same skill.

Using a Table of Specifications (TOS) is useful as an assessment planning tool because it helps to map out assessments. A TOS allows you to track the level of cognitive complexity for each of your items so that it’s easy to gauge the “stretch” and rigor of the assessment while tracking the standards and skills assessed. Think of a TOS as an “assessment of the assessment.” It helps to ensure that the assessment ultimately serves its intended purpose.

Below is a sample Table of Specifications used at Staunton River Middle School in Moneta, Virginia.

[Image: sample Table of Specifications from Staunton River Middle School]
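For a rough sense of what a TOS captures, here is a minimal sketch (the standard names, levels, and item counts below are invented for illustration) that tallies items by standard and cognitive level:

```python
# Hypothetical Table of Specifications: item counts per standard and Bloom's level.
table_of_specifications = {
    "Standard 5.1": {"Remember": 1, "Understand": 2, "Apply": 2, "Analyze": 1},
    "Standard 5.2": {"Remember": 1, "Understand": 1, "Apply": 3, "Create": 1},
}

# An "assessment of the assessment": how many items sit at each cognitive level?
level_totals = {}
for levels in table_of_specifications.values():
    for level, count in levels.items():
        level_totals[level] = level_totals.get(level, 0) + count

print("Items per cognitive level:", level_totals)
print("Total items:", sum(level_totals.values()))
```

A tally like this makes it easy to spot, at a glance, whether an assessment leans too heavily on lower-level items or leaves a standard under-represented.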

Ready to read more?! Get the complete eBook: The 10 Elements that Compound Student Success here.

Interactive Achievement Joins the PowerSchool Family



PowerSchool Group LLC (“PowerSchool”) announced the acquisition of Interactive Achievement (“IA”), an award-winning provider of standards-based instructional improvement systems for school districts that create positive change in student education.

Founded in 2006 by childhood friends Jon Hagmaier and Matt Muller, Interactive Achievement provides a set of powerful, intuitive assessment tools with unrivaled reporting and data analysis capabilities. Acquiring IA enables PowerSchool to expand its core Student Information System into the classroom, using innovative tools that directly assist in improving student outcomes, teacher effectiveness, and administrator reporting in the digital classroom.

PowerSchool is the #1 provider of K-12 technology solutions, used by more than 40 million users and over 15 million students in 70+ countries. We power school operations for over 6,000 school districts, enabling secure, compliant, best-in-class processes and insights into school data. Additionally, PowerSchool provides industry-leading enrollment solutions for large and small public, charter, and independent school districts. This acquisition enhances PowerSchool’s solutions for teachers, providing innovative digital classroom capabilities and enabling a single user experience for managing attendance, grading, assignments, assessments, and analytics. With IA, PowerSchool will deliver new opportunities for customers to measure achievement, utilize rich content, and improve student learning outcomes in the K-12 educational community.

“PowerSchool has the largest K-12 education community with over 6,200 school districts,” said Hardeep Gulati, CEO of PowerSchool. “Our customers are looking to adopt digital classroom tools that empower teachers to easily create personalized and data-driven assessments that directly and measurably improve student performance. By combining formative assessment with PowerTeacher Gradebook, we are providing a comprehensive tool for teachers to manage all critical aspects of their classroom activities. With our best-in-class solutions and embedded analytics, educators can now spot areas of need for their students, allowing teachers to tailor their approaches to teaching for each individual, maximizing student potential and growth. Customers of IA and PowerSchool will significantly benefit from the investments we are making to provide a superior experience for administrators, teachers, students and parents.”

“Interactive Achievement is excited to become part of the PowerSchool team,” said Jon Hagmaier, CEO of IA. “Combining our intuitive assessment tools, rich standards-based content and robust reporting will augment PowerSchool’s market leading Student Information System so that it can be used to improve teaching, learning and measurement of student outcomes.”

There will be no immediate changes in service and support as we work to fully integrate IA into the PowerSchool team. Specific information about how customers will interact with both organizations will be forthcoming, including answers to questions about billing, implementation, training, and support services.

PowerSchool will host informational webinars in the near future to review the IA acquisition and address specific questions about the go-forward strategy and processes. More information on these webinars will be sent via email soon.
