Item Types to Include in Your Assessment


In the first blog of this series, we covered how formative assessment data can guide instruction and improve achievement for all students. Element one was specifically devoted to learning outcomes and specifying early and often in the teaching-learning cycle how students will be expected to demonstrate mastery of content. Element two focused on the number of items that should appear on a formative assessment.

Now we move to element three: which item types to include in your assessments.

Element Three: Item Types

Teachers and students are no longer stuck with using only multiple-choice items with four answer options.

Innovative technology solutions allow students the opportunity to perform actions other than simple clicks of a radio button to select the correct answer. Twenty-first-century assessments offer many types of items for teachers to choose from when creating a test. Certain content lends itself to specific item types.

Take a look at some of the most readily available “technology-enhanced” item types:

Enhanced Multiple Choice
Spices up the traditional four-option multiple-choice item by increasing the number of answer options provided.

Hot Spot
Student is required to select all correct answer options. When assessing content that has a group of factors, indicators, or elements, using a hot spot item allows students to select more than one correct answer as the right choice.

Drag and Drop
Student drags answers or questions into appropriate area on a graphic. For example, when assessing a concept that requires students to be able to identify and label parts of a whole, the drag-and-drop item type is most appropriate.

Fill in the Blank
Student types in his or her answer. A fill-in-the-blank item type is a great choice when you want students to generate the answer instead of selecting from a list of possible answers. For example, fill-in-the-blank item types are great for assessing measurement and computation skills in math and science.

Constructed Response
Student is required to create his or her answer in the space provided.

Constructed response items that require students to write their response to a prompt are very valuable in asking students to evaluate, analyze, and create extended written responses.

Constructed Response with Equation Editor
Student builds his or her answer using an on-screen equation editor, making this item type well suited to math and science content where students must produce an expression or formula.

Assessments that provide a variety of item types and allow students to show what they have learned in several formats keep them on task and engaged.

Ready to read more? Get the complete eBook: The 10 Elements that Compound Student Success here.

This is the Number of Items Your Formative Assessment Should Contain



In the first blog of this series, we covered how formative assessment data can guide instruction and improve achievement for all students. Element one was specifically devoted to learning outcomes and specifying early and often in the teaching-learning cycle how students will be expected to demonstrate mastery of content.

From here, let’s move on to element two out of the ten that compound student success: the optimal number of items that should be on a formative assessment.

Element Two: Number of Items


Teachers often wonder if there is a preferred number of items per standard and assessment that will grant them enough information to determine student mastery.

So, what’s the magic number?

First, weigh factors such as the volume of information contained in a standard, the number of learning outcomes, and the levels of complexity within the learning outcomes.

Make sure that the number of items gives the teacher enough information to know that the student has mastered the content, without being redundant.

It will be obvious when there are too many questions on the assessment, because the information will start to feel redundant. A multitude of items that essentially ask the same question (in different ways) serves as practice of a skill for the student, rather than as an indicator of skill mastery. Less is more.

For example, a math test that asks a student to solve 30 double-digit addition problems does not give the teacher more information about the student’s ability to grasp the skill than an assessment with five to 10 double-digit addition items would. It serves as practice of the skill. This type of repetitive-skills assessment disengages the student and is better off reserved for homework assignments to practice improving the skill.

Cognitive Complexity

Good assessment items typically have a level of cognitive complexity assigned to them. Cognitive complexity is the level at which a question measures a student’s knowledge. Two of the most commonly used assessment taxonomies are Bloom’s Taxonomy and Webb’s Depth of Knowledge.

The higher the question is on the taxonomy scale, ranging from one to six for Bloom’s and one to four for Webb’s, the more thinking is required to answer the item correctly. Assign levels of Bloom’s Taxonomy and Webb’s Depth of Knowledge to assessment items so that there is a wide range of items, varying in complexity, on the assessment.

How many items should be used for each level?

As a rule of thumb, for learning outcomes that require a low level of cognitive complexity, such as recall or comprehension of a fact or concept, one or two items is sufficient. Why? Because these types of questions are generally basic and require simple answers: for example, asking kindergarten students to list the vowels in the alphabet.

However, when using a single item to assess a skill, it is highly recommended that educators select an item type that reduces the likelihood of a student simply guessing the correct answer. For more complex concepts and skills, two to four items are typically required.

For example, when asking a true-false question, the student has a 50/50 chance of guessing the correct response. It is risky to rely on the results of a single item to assess student mastery of the skill. A better choice would be to ask a multiple-choice or technology-enhanced item type, like the hotspot item type, which specifies that the student must select all correct answers in order to score correctly.
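The guessing risk described above can be made concrete with a quick back-of-the-envelope calculation. This is only a sketch; the scoring assumptions, such as requiring the exact correct subset on a hot spot item, are illustrative:

```python
# Probability of guessing an item correctly by blind chance,
# for the item types discussed above (illustrative assumptions).

def p_guess_true_false() -> float:
    return 1 / 2  # two options, one correct

def p_guess_multiple_choice(options: int) -> float:
    return 1 / options  # one correct option among `options`

def p_guess_select_all(options: int) -> float:
    # Hot-spot style: each option is independently marked or not,
    # and only the exact correct subset scores -> 2**options subsets.
    return 1 / 2 ** options

print(p_guess_true_false())        # 0.5
print(p_guess_multiple_choice(4))  # 0.25
print(p_guess_select_all(4))       # 0.0625
```

With four options, a select-all hot spot item drops the blind-guess probability from 25% to about 6%, which is why a single hot spot item tells you more than a single true-false item.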

The type of item used to assess a standard also affects the number of items required to determine mastery. For example, a constructed response item may ask a student a multilayered question that requires him or her to demonstrate several learning outcomes in his or her response, whereas a question that requires a student to solve a problem and select all possible correct answers may require an additional question testing the same skill.

Using a Table of Specifications (TOS) is useful as an assessment planning tool because it helps to map out assessments. A TOS allows you to track the level of cognitive complexity for each of your items so that it’s easy to gauge the “stretch” and rigor of the assessment while tracking the standards and skills assessed. Think of a TOS as an “assessment of the assessment.” It helps to ensure that the assessment ultimately serves its intended purpose.
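As a rough sketch of what a TOS tracks, the tally below tags each item with a standard and a Webb's DOK level and then summarizes the spread; the item data and standard codes are hypothetical:

```python
from collections import Counter

# Hypothetical Table of Specifications: each assessment item is tagged
# with the standard it measures and a Webb's DOK level (1-4).
tos = [
    {"item": 1, "standard": "4.NBT.4", "dok": 1},
    {"item": 2, "standard": "4.NBT.4", "dok": 2},
    {"item": 3, "standard": "4.NF.1",  "dok": 2},
    {"item": 4, "standard": "4.NF.1",  "dok": 3},
    {"item": 5, "standard": "4.MD.3",  "dok": 3},
]

# "Assessment of the assessment": how the items spread across DOK
# levels and standards, so gaps in rigor or coverage stand out.
by_dok = Counter(row["dok"] for row in tos)
by_standard = Counter(row["standard"] for row in tos)

print(dict(by_dok))       # {1: 1, 2: 2, 3: 2}
print(dict(by_standard))  # {'4.NBT.4': 2, '4.NF.1': 2, '4.MD.3': 1}
```

A glance at the DOK tally shows whether the assessment has the intended "stretch" (here, no DOK 4 items), and the standards tally shows whether any standard is under-sampled.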

Below is a sample Table of Specifications used at Staunton River Middle School in Moneta, Virginia.


Ready to read more? Get the complete eBook: The 10 Elements that Compound Student Success here.

Interactive Achievement Joins the PowerSchool Family



PowerSchool Group LLC (“PowerSchool”) announced the acquisition of Interactive Achievement (“IA”), an award-winning provider of standards-based instructional improvement systems that help school districts create positive change in student education.

Founded in 2006 by childhood friends Jon Hagmaier and Matt Muller, Interactive Achievement provides a set of powerful, intuitive assessment tools with unrivaled reporting and data-analysis capabilities. Acquiring IA enables PowerSchool to extend its core Student Information System into the classroom with innovative tools that directly assist in improving student outcomes, teacher effectiveness, and administrator reporting in the digital classroom.

PowerSchool is the #1 provider of K-12 technology solutions, used by more than 40 million users and over 15 million students in 70+ countries. We power school operations for over 6,000 school districts, enabling secure, compliant, best-in-class processes and insights into school data. Additionally, PowerSchool provides industry-leading enrollment solutions for large and small public, charter, and independent school districts. This acquisition enhances PowerSchool’s solutions for teachers, providing innovative digital classroom capabilities and enabling a single user experience for managing attendance, grading, assignments, assessments, and analytics. With IA, PowerSchool will deliver new opportunities for customers to measure achievement, utilize rich content, and improve student learning outcomes in the K-12 educational community.

“PowerSchool has the largest K-12 education community with over 6,200 school districts,” said Hardeep Gulati, CEO of PowerSchool. “Our customers are looking to adopt digital classroom tools that empower teachers to easily create personalized and data-driven assessments that directly and measurably improve student performance. By combining formative assessment with PowerTeacher Gradebook, we are providing a comprehensive tool for teachers to manage all critical aspects of their classroom activities. With our best-in-class solutions and embedded analytics, educators can now spot areas of need for their students, allowing teachers to tailor their approaches to teaching for each individual, maximizing student potential and growth. Customers of IA and PowerSchool will significantly benefit from the investments we are making to provide a superior experience for administrators, teachers, students and parents.”

“Interactive Achievement is excited to become part of the PowerSchool team,” said Jon Hagmaier, CEO of IA. “Combining our intuitive assessment tools, rich standards-based content and robust reporting will augment PowerSchool’s market leading Student Information System so that it can be used to improve teaching, learning and measurement of student outcomes.”

There will be no immediate changes in service and support as we work to fully integrate IA into the PowerSchool team. Specific information about customer interactions with both organizations will be forthcoming, including answers to questions about billing, implementation, training, and support services.

PowerSchool will host informational webinars in the near future to review the IA acquisition and address specific questions about the go-forward strategy and processes. More information on these webinars is coming soon via email.

The 10 Elements that Compound Student Success



Building Formative Assessments that Guide Instruction

Educators’ mission is to help every student learn and grow. They aim to identify misunderstandings and close learning gaps before they turn into deficits that will require extensive remedial instruction. They do all of this with the ultimate goal of preparing students for standards mastery. Where should educators look to begin these tasks?

The answer is in the data. Getting the data requires building great assessments that will extract the data educators need to make informed decisions. Every student can learn and grow when his or her teacher has accurate information about where to focus instruction.

“When implemented well, formative assessment can effectively double the speed of student learning.”

How Can Formative Assessment Data Guide Instruction?

Assessing formatively gives teachers accurate information about where to focus their instruction to help every student grow and achieve. It allows educators to gather the valuable feedback needed about what students know and don’t know during the instructional cycle so they can use the data to modify lesson plans, personalize instruction, and improve achievement for all students.

In our latest eBook, Dr. Sally I’Anson, Director of Professional Development at Interactive Achievement, and Dr. Kendra Boykin, an independent education consultant, provide invaluable insight on:
✓ How to build formative assessments that produce an accurate picture of student learning
✓ Guidelines for formative assessment formatting that will increase the quality of student data
✓ How to ensure that assessments are aligned to learning outcomes
✓ Why it’s necessary to make continual improvements to assessments

Why Assess?

In the broadest sense, the purpose of formative assessment is to let the teacher determine what the student knows and where he or she still needs help. On the smallest scale, teachers are assessing specific skills and standards. That’s why it’s essential to decide the purpose (skills and content) of the assessment before the design process begins.

Creating a great assessment is an iterative process that takes considerable effort, skill, and time. It’s a process that calls for constant and incremental improvements. With that in mind, we’re devoting this blog series to 10 essential elements that will help teachers create assessments that produce formative data to guide instruction and compound student success.

Element One: Learning Outcomes


Specify early and often in the teaching-learning cycle how students will be expected to demonstrate mastery of the content.

Learning outcomes are what the student should be able to demonstrate as a result of the instruction. Making it clear to students how they will be expected to demonstrate their knowledge is a vitally important element of the assessment process.


When students are informed about how their learning will be assessed throughout the teaching-learning cycle, student performance is improved. For example, if students are provided with a rubric containing detailed explanations of student performance for each level, they know exactly what the teacher expects.

Knowing what the teacher expects allows the student to set goals and work toward meeting teacher expectations. Expecting students to guess what will be on the assessment and how they will be assessed yields neither an accurate nor a reliable record of student mastery.

Quick Guidelines for Scoring:

  • Be as specific as possible when sharing assessment scoring procedures with students.
  • Provide students ample opportunity to practice a skill or concept with items that are similar to those that will appear on the test.
  • Provide exemplar answers so that students understand what excellence looks like.
  • If using a rubric to score student work, share it with students early in the teaching-learning cycle.
  • Set students up for success by ensuring they know what the learning targets are early and often.


Not all assessments are used for grading purposes. This is particularly true for many formative assessments. However, schools and parents require teachers to grade student work. Grades give information about how a student is progressing or has progressed through a curriculum.

Example 1: A teacher could explain that a student correctly answered nine questions on a 10-question assessment, all items were weighted equally, and the assessment had a value of 100 points, so the student was given a numerical grade of 90 and a letter grade of A-.

Typically, the school division determines the grading scale.

Example 2: A teacher may have used a rubric with each row and column in the rubric having a numerical value. These values were totaled and the student received a grade based on thresholds of points awarded (e.g., 10-20 pts = D, 21-30 = C, 31-40 = B, 41-50 = A). The critical element of grading is that the teacher issuing the grade must be able to articulate clearly how the grade is specifically linked to student learning target outcomes on an assessment.
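The threshold mapping in Example 2 can be sketched in a few lines of code. The per-row rubric scores and the treatment of totals below 10 points are assumptions for illustration:

```python
# Sketch of the grading logic from Example 2: total the rubric points,
# then map the total onto the division's letter-grade thresholds
# (10-20 pts = D, 21-30 = C, 31-40 = B, 41-50 = A).

def letter_grade(total_points: int) -> str:
    thresholds = [(41, "A"), (31, "B"), (21, "C"), (10, "D")]
    for cutoff, letter in thresholds:
        if total_points >= cutoff:
            return letter
    return "F"  # below the lowest published threshold (an assumption)

rubric_row_scores = [8, 9, 7, 10, 9]  # hypothetical per-row rubric points
total = sum(rubric_row_scores)        # 43
print(total, letter_grade(total))     # 43 A
```

Writing the thresholds out this way also makes the critical element of grading explicit: each point total traces back to a specific rubric row, so the teacher can articulate exactly how the grade links to the learning targets.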

Get the complete eBook: The 10 Elements that Compound Student Success here.

Formative vs. Summative Assessment: What’s the Difference?



The word testing seems to be getting a bad rap lately.

The problem is that some parents and teachers confuse summative testing with formative assessments. So, what’s the difference?

Formative assessments are periodic, continual checks for understanding that are used for diagnostic purposes. Some examples include: exit tickets, partner quizzes, and self-assessment.

Summative tests, by contrast, are high-stakes examinations. These include: end-of-unit tests, end-of-term or semester exams, state assessments, district benchmark or interim assessments, and scores that are used for accountability for schools (AYP) and students (report card grades).

Good teachers have always assessed student work, but with the pressure of SOLs, formative assessments are mistakenly being thought of as tests. If used properly, formatives help guide instruction.

Let’s take a closer look.

Formative assessments are not intended to be graded or used for teacher evaluations.

Teachers should not prep or review students before giving a formative assessment. Formatives simply give teachers an understanding of what their students know before and during instruction, for the purpose of guiding lesson planning and adjusting instruction.

I love the soup-tasting analogy to illustrate the difference between formative and summative assessment. With formative assessment, the chef is constantly tasting the soup to check for under- or over-seasoning, missing ingredients, and so on. The chef makes continual adjustments throughout the preparation process to create a rich and well-balanced soup. With summative assessment, the process of preparing the soup has been completed and it’s ready to be “tested” by the guests.

Image credit: Bryan Mathers

Naturally, when you continually assess students for learning and mastery, they end up becoming better prepared for summative testing (which is a necessary part of education).

Here’s a great recap to quickly gauge the difference:

Image credit: Educational Technology and Mobile Learning

How do you determine the difference between formative and summative assessment? Let us know in the comments section below!

Unpacking and Deconstructing the Standards


Standards are often written as complex, overarching statements that can be interpreted many different ways. In order to teach to the standards effectively and gather accurate data, it’s important for teachers to break the standards apart to form a deeper understanding. Working to unpack the standards can be a fun and meaningful process.

It is important for teachers to collaborate with each other during the unpacking process and discuss how to challenge students to make sure they have met the expectations of the standards.¹

Teacher Collaboration

Deconstructing the standards collaboratively builds the capacity of all teachers involved. Not only does the process increase the ability of teachers to teach a standard effectively, but it also increases their ability to accurately assess a standard. Because each teacher brings his or her understanding of the strand or standard to the discussion, these rich conversations serve to deepen the comprehension of the standard for all teachers.

Furthermore, the process by which teachers create a shared understanding of desired learning outcomes for standards and design assessment items to measure them is a near-perfect example of meaningful, job-embedded, professional learning.

Develop Learning Targets Based on Standards

By developing learning targets that are accurately derived from standards, teachers will have a better understanding of what to teach so they can plan instruction that appropriately addresses the knowledge and skills of the standards.² In addition to identifying and understanding learning targets, deconstructing standards also helps teachers to identify and understand the level of cognitive complexity the standards demand.

Whether teachers use Bloom’s Taxonomy or Webb’s Depth of Knowledge to assign levels of cognitive complexity to assessment items, teachers need to have a shared and thorough understanding of how that level informs how content is taught and assessed. Much of the research on rigor provides teachers with strategies and practices to ensure that standards are taught at their intended level of cognitive complexity.


Do you have any other ways to unpack standards? Share your strategies with us in the comments below!

¹ (Tobiason, Chang, Heritage, & Jones, 2014)
² (Tobiason et al., 2014)

16 Guidelines for Writing High-Quality Assessment Items



Why Does Writing High-Quality Items Matter?

Assessment is an aspect of the rigorous formative assessment cycle that requires precision to be effective. The items, which make up an assessment, are the foundation upon which teachers build inferences about student understanding. Properly written items produce accurate data about student comprehension that guide teachers to make sound instructional decisions to sustain and improve student learning. In addition to providing evidence about the concepts students have difficulty understanding, well-written assessments effectively help teachers evaluate the next steps they should take in the instructional process.*

Drafting Good Assessment Items

There are general guidelines for constructing high-quality items that improve the overall ability of the item to measure what it is intended to measure and consistently gather accurate results. The desired data produced by an item is a crucial factor influencing the selection of item type and format. The recommendations below are accepted guidelines to assist item writers in their quest to create high-quality items. In addition to these guidelines, standardized state-assessment providers typically have assessment-design rules primarily related to item formatting that are unique to their tests.

16 Guidelines for Writing High-Quality Items:

  1. Provide Clear Directions
    The student should know what to do to successfully answer the item. One of the first steps teachers should undertake when reviewing items is to ensure that each item has clear directions that explicitly state what students are expected to do. Clear directions increase item fairness and validity.¹
  2. Assess a Single Idea or Problem
    If an item is written asking more than one question, it is difficult or impossible for the teacher to determine from the results to which question the student was responding.
  3. Include Readable Graphics
    In order to produce intended results, all graphical images must be clear, uncluttered with unnecessary information, and readable. When reviewing the technology-enhanced components of the items, it is important for teachers to ensure that the media and graphics are essential for answering the item.²
  4. Use Clear and Precise Language
    Words that are not pertinent to the item should be eliminated.
  5. Keep Items Independent
    An item’s stem or distracters should not provide clues or answers to other items on the assessment.
  6. Do not Use Difficult or Uncommon Names
    Students can be distracted by the spelling or pronunciation of uncommon names. Names that represent other things, such as April or Forest, can also reduce the reliability of an item.
  7. Avoid Redundancy in the Distracters
    Item writers should include words in the stem that would otherwise be repeated for each response option.
  8. Use as Little Punctuation as Possible
    Specifically, contractions are often misunderstood and should be eliminated.
  9. Use New Language
    Avoid using phrases or words taken directly from standards, lectures, or studied text. This practice increases the level of cognitive complexity of the item and engages students in higher-order thinking.
  10. Place the Blank at the End of a Statement
    Using this method allows students to focus on the question rather than figuring out how to fit the appropriate distracter into the statement.
  11. Make Distracters Grammatically Consistent with the Question Stem
    Checking each distracter to ensure that it is grammatically correct when connected to the item stem avoids clueing students to the correct option.
  12. Employ the Third Person
    When items use “you,” students tend to personalize their response and choose the distracter that represents their opinion or idea versus what the item is asking.
  13. Ensure Distracters are Equally Attractive Answer Choices
    Distracters should be plausible and represent common student misconceptions or misunderstandings.³
  14. Make Distracters Equivalent in Length, Style, and Structure
    This is especially true when distracter choices continue on to the second line of text. Align distracters so that they are all one line of text or all two lines of text.
  15. Avoid Using “All of the Above” or “None of the Above” as Distracters
    Students gravitate toward choosing these selections regardless of the correct distracter choice.
  16. Avoid Using Absolute Terminology
    Some students focus on finding exceptions to statements that include absolute terms rather than on answering the question.

If you want to expand on these strategies, I highly recommend downloading our free eBook about writing high-quality assessment items so that you can begin reaping the benefits of this practice.

Do you have any other strategies you use to write high-quality assessment items? We’d love to hear about them, please share in the comments below!

* (Turner, 2014)
¹ (Lakin, 2014)
² (Scalise & Gifford, 2006)
³ (Downing & Haladyna, 2006; Haladyna et al., 2010)

The Strong Correlation between Interactive Achievement’s SGAs and SOL Results


Interactive Achievement is excited to release a new report and white paper from Advanced Education Measurement (AEM) that demonstrates the correlations between Interactive Achievement’s Virginia Student Growth Assessments (SGAs) and the Virginia Standards of Learning (SOL) assessments taken at the end of the semester or school year.

The Virginia SOLs are extremely important for divisions as they evaluate student achievement and success. Educators need reliable information on the progress of students in the months leading up to the SOL, before students actually sit for the assessments. Interactive Achievement offers SGAs for Math, Reading, History, and Science and recommends schools administer them twice, once in the fall and once in the spring, to identify student strengths as well as gaps in student knowledge.

Recently, Interactive Achievement contracted with AEM to better understand the connection between the SOLs and the corresponding SGAs. AEM first distributed its full results through a white paper, An Analysis of the Relationship between Interactive Achievement’s Student Growth Assessments to Virginia Standards of Learning Assessments in the 2013-14 School Year. To make those findings easier to digest, Interactive Achievement, in partnership with AEM, is releasing a report that summarizes the white paper’s critical findings and gives educators a ready-to-use guide for interpreting likely student performance on the SOL from SGA results.

The key findings are:

  • IA SGA results are highly correlated with Virginia SOL results and predict SOL performance extremely accurately.
  • The report includes a chart that educators can use to gauge the likelihood that individual students will pass the SOL based on their IA SGA results.

We’ve compiled our findings into a simple report and a detailed, data-rich whitepaper for you to explore. Check them out here or access them with the download button below.


The Importance of Writing Quality Items in Assessment



The Importance of Writing Quality Items

Assessment is an aspect of the rigorous formative assessment cycle that requires precision to be effective. The items, which constitute an assessment, are the foundation upon which teachers build inferences about student understanding. Properly written items produce accurate data about student comprehension that guide teachers to make sound instructional decisions to sustain and improve student learning. In addition to providing evidence about the concepts students have difficulty understanding, well-written assessments effectively help teachers evaluate the next steps they should take in the instructional process.¹

Why Write Items?

Writing assessment items is a fantastic way for teachers to study and explore their grade-level content-area standards. The creative process produces immediate results for teachers, informing student learning and teacher instruction grounded in the tested standards. The process also creates a venue for teacher dialogue: teachers can learn from each other as they discuss the intent and nuances of the standards, and they develop a deeper understanding of the standards from these conversations. Writing items also allows teachers to tailor items to identify common learning gaps. Finally, the item-writing process is complete when teachers meet to discuss the results and examine how the items functioned.

How to Improve Your Skills

Writing assessment items is a skill that improves with practice. The process requires some knowledge of national, state, and local standardized assessments and their item types, distracters, and formatting. Additionally, knowledge of the psychology of student behavior related to assessment is very helpful when writing items. Developing a checklist to keep formatting and stylistic elements consistent improves the quality of items and reduces student errors unrelated to content knowledge. We’ve created an awesome infographic to help you take your item-writing skills to the next level with specific practices to follow (and some to avoid!). Download the Nuts & Bolts of Good Item Writing Infographic here.
¹ (Turner, 2014)

IA’s Pledge: Safeguarding Student Privacy


Over the past decade, classroom technology has improved the learning environment for every stakeholder in student education. With technology comes additional challenges including protecting student data and privacy.

At Interactive Achievement, we take protecting student data as seriously as seeing the children of our future succeed. Educators and families have trusted IA to support their educational needs and safeguard student privacy for years, and for that we are honored. To reinforce our commitment, we recently signed the Student Privacy Pledge to protect student information.

As signatories of the pledge, we promise to:
• Use data for authorized education purposes only.
• Not sell student personal information.
• Not use or disclose student information for behavioral targeting of advertisements.
• Not change privacy policies without notice and choice.
• Enforce data retention best practices.
• Support parental access to, and correction of, their children’s information.
• Maintain a security program designed to protect data against risks.
• Be transparent about collection and use of student data.

The commitments stated above are not a change in the way we do business at Interactive Achievement. The Student Privacy Pledge is our public commitment to responsible data practices, and we invite stakeholders to learn more at http://studentprivacypledge.org/.
