Q: How does ESSA change the game for assessment?

A: Under NCLB, a single score represented student achievement. While there is certainly value in summative assessment, a more holistic picture of student accomplishment reflects both on-grade achievement and overall growth during the year, as well as non-traditional factors. ESSA encourages this broader view of educational success.

Q: What kinds of assessment can now be considered?

A: Summative assessment still plays a role, but measuring student achievement is no longer limited to a single test at the end of the school year that defines a school's success and determines its funding. ESSA expands accountability measures to include both achievement and growth, which can help show that even students who are not yet on grade level have improved, sometimes dramatically, from where they started the school year.

Q: What does this mean for traditional, interim measures?

A: ESSA encourages multiple, comprehensive assessments, allowing for interim measures throughout the year. This approach not only provides a more holistic, accurate picture of student ability but also delivers important data to educators during the year, rather than only after it ends. Interim assessments can also be aligned more readily to a state’s pacing guides so they naturally assess the content being taught, which helps avoid the “teach to the test” mentality. In addition, interim testing can provide a comprehensive, summative result that makes it easier to adjust instruction to better serve student needs during the year. Further, ESSA supports non-traditional assessments such as student perception or school climate surveys, which provide a much broader view of the additional factors that lead to school success.

Q: Can’t you just take all the scores you have and create an average?

A: Intuitively you would think so, but it’s a bit more complicated than that. The key is to make sure the resulting summative statistic is psychometrically sound and educationally defensible. You need to carefully consider a variety of factors, such as how test scaled scores are combined, how growth is weighted against on-grade performance, how non-traditional measures such as student surveys fit in, and how all of that comes together in a holistic set of accountability data. In addition, you need to be able to disaggregate your data to demonstrate success across different student populations. It’s not going to be easy, but we’re going to get a much better picture of student and school achievement.
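To make the "more complicated than an average" point concrete, here is a minimal sketch of a weighted composite index with subgroup disaggregation. The measure names, weights, and subgroup labels are purely illustrative assumptions for this example; a real accountability statistic would require psychometric scaling and validation, not the simple weighting shown here.

```python
# Hypothetical sketch only: measures, weights, and groups are assumptions,
# not an actual accountability formula or Scantron's methodology.
from statistics import mean

# Each record carries three measures, each already rescaled to 0-100:
# an achievement scaled score, a growth percentile, and a climate-survey index.
students = [
    {"group": "ELL",     "achievement": 62, "growth": 81, "survey": 70},
    {"group": "ELL",     "achievement": 55, "growth": 90, "survey": 75},
    {"group": "non-ELL", "achievement": 78, "growth": 45, "survey": 68},
    {"group": "non-ELL", "achievement": 85, "growth": 60, "survey": 80},
]

# Illustrative weights: a plain average would weight all measures equally,
# which is rarely defensible. Real weights need psychometric review.
WEIGHTS = {"achievement": 0.50, "growth": 0.35, "survey": 0.15}

def composite(record):
    """Weighted composite of the three measures for one student."""
    return sum(WEIGHTS[key] * record[key] for key in WEIGHTS)

# Disaggregate: report the composite separately for each student population,
# so success (or gaps) across subgroups stays visible.
by_group = {}
for record in students:
    by_group.setdefault(record["group"], []).append(composite(record))

for group, scores in sorted(by_group.items()):
    print(f"{group}: mean composite {mean(scores):.1f}")
```

Note how the growth weight lets the ELL group, whose achievement scores are lower but whose growth is high, show meaningful progress that a raw achievement average would hide.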

Q: How is Scantron addressing this issue?

A: Carefully. We want to ensure we have a solution that is sound, defensible, and useful. Just as when we launched our computer-adaptive assessment many years ago, we’re making sure our new approaches are valid, fair, and serve the needs of students and educators at all levels. We’ve built our reputation on sound, well-researched assessments that provide real value, and we’re not going to change that now.