In the first blog of this series, we covered how formative assessment data can guide instruction and improve achievement for all students. Element one was specifically devoted to learning outcomes and specifying early and often in the teaching-learning cycle how students will be expected to demonstrate mastery of content.
From here, let’s move on to element two out of the ten that compound student success: the optimal number of items that should be on a formative assessment.
Element Two: Number of Items
Teachers often wonder if there is a preferred number of items per standard and assessment that will grant them enough information to determine student mastery.
So, what’s the magic number?
First, it’s helpful to consider factors such as the volume of information contained in a standard, the number of learning outcomes, and the levels of complexity within those learning outcomes.
Make sure that the number of items gives the teacher enough information to know that the student has mastered the content, without being redundant.
It will be obvious when there are too many questions on the assessment, because the information will start to feel redundant. A multitude of items that essentially ask the same question (in different ways) serves as practice of a skill for the student, rather than as an indicator of skill mastery. Less is more.
For example, a math test that asks a student to solve 30 double-digit addition problems gives the teacher no more information about the student’s grasp of the skill than an assessment with five to ten double-digit addition items would; it simply serves as practice. This type of repetitive-skills assessment disengages the student and is better reserved for homework assignments, where practicing the skill is the goal.
Good assessment items typically have a level of cognitive complexity assigned to them. Cognitive complexity is the level at which a question measures a student’s knowledge. Two of the most commonly used assessment taxonomies are Bloom’s Taxonomy and Webb’s Depth of Knowledge.
The higher an item falls on the taxonomy scale (one to six for Bloom’s, one to four for Webb’s), the more thinking is required to answer it correctly. Assign levels of Bloom’s Taxonomy or Webb’s Depth of Knowledge to assessment items so that the assessment includes a wide range of items varying in complexity.
How many items should be used for each level?
As a rule of thumb, for learning outcomes that require a low level of cognitive complexity, such as recall or comprehension of a fact or concept, one or two items are sufficient. Why? Because these types of questions are generally basic and require simple answers: for example, asking kindergarten students to list the vowels in the alphabet.
However, when using a single item to assess a skill, it is highly recommended that educators select an item type that reduces the likelihood of a student simply guessing the correct answer. For example, a true-false question gives the student a 50/50 chance of guessing the correct response, so it is risky to rely on the results of a single true-false item to assess mastery of the skill. A better choice is a multiple-choice item or a technology-enhanced item type, such as a hotspot item, which requires the student to select all correct answers to receive credit. For more complex concepts and skills, two to four items are typically required.
The type of item used to assess a standard also affects the number of items required to determine mastery. For example, a constructed-response item may pose a multilayered question that requires the student to demonstrate several learning outcomes in a single response, whereas a problem-solving item with a select-all-that-apply format may need to be paired with an additional item testing the same skill.
A Table of Specifications (TOS) is a useful planning tool for mapping out assessments. A TOS allows you to track the level of cognitive complexity of each item, making it easy to gauge the “stretch” and rigor of the assessment while tracking the standards and skills assessed. Think of a TOS as an “assessment of the assessment”: it helps to ensure that the assessment ultimately serves its intended purpose.
Below is a sample Table of Specifications used at Staunton River Middle School in Moneta, Virginia.
Ready to read more?! Get the complete eBook: The 10 Elements that Compound Student Success here.