The Next Generation of Assessment: Building the Future of Measuring Student Growth

One of the most reliable constants in the 21st century is change. The field of assessment is no different—educational standards change, cognitive demand measurement theories shift, and technology continues to evolve.

Methods of measuring student knowledge must grow and transform, as well. Adapting to high-stakes summative assessments is one thing, but don’t forget that your interim and formative assessments need to evolve, too. These assessments provide an invaluable opportunity to adjust instruction and support student achievement.

An evolved formative assessment solution can serve as effective practice for the summative tests, even beyond the core summative subject coverage. You need an assessment solution that enables students to practice test-taking skills regardless of the subject area. Simply using the summative consortia’s formative tests will not deliver the breadth of coverage needed for a holistic evaluation of student knowledge. As educators, you know that you get what you measure.

Helping students master the knowledge they need to succeed in the college or career world they will enter is a crucial outcome of education, particularly in the face of increased demands for data-driven decision-making. Having an assessment system that helps you address the next generation of assessment is more important than ever.

But how can you do this when change comes so quickly? Simply put, you must have a flexible system capable of accommodating educational shifts. Let’s explore three main building blocks of next-generation assessments.

1.    Building the Foundation: Quality, Rigor, and Scope

The key to any program is a solid foundation, and assessment is no different. Educators know that instructional time is at a premium, as is computer lab time. You can’t afford to spend time testing if the results don’t truly enable instructional decision-making that improves student performance throughout the school year. As you consider your benchmark/formative testing footprint (how many tests, which tests, how often, and more), make sure your assessments have quality built in from the beginning. Individual assessments should:

  • Support your overall assessment strategy
  • Measure the skills critical to your programs
  • Align to test blueprints that define what you want to measure, including complete coverage of subjects and grade levels (scope) and Depth of Knowledge (DOK) levels

What makes a strong foundation?

It’s tempting to throw technology at your next-generation assessments and assume you’re finished. After all, you incorporated the latest technology types and are testing online, right? The problem is that no amount of technology replaces detailed attention to the fundamentals of a good assessment: the quality, rigor, and scope of the test items themselves. Technology will not magically transform a bad item (e.g., an item that does not truly measure what you need) into a good one.

We have found the following elements to be critical to a strong assessment foundation:

  • A process to capture and analyze item and test statistics, then use that analysis to improve the performance of individual items and the overall measurement value of your assessments (a sketch of this kind of item analysis follows this list).
  • Items used in assessments have a well-designed stimulus, stem, and response options, all edited to address cultural, gender, or other bias.
  • Item and assessment creators have a deep understanding of the instructional shifts determined by standards (such as Common Core, College & Career Readiness, or your state’s standards).
  • Assessments have a comprehensive and balanced coverage of DOK levels 1, 2, and 3 using intentional scaffolding in step with the design of your standards.
  • Assessment portfolio includes an option for performance-based tasks (PBTs) with rubrics to assess DOK 4.
  • Reading passages include authentic texts.
  • Reading passages cover informational texts and literary texts, all of which provide opportunities for comparing, contrasting, or integrating information.
  • For fixed-form formative assessment, appropriate tools and high-quality item banks that enable you to create tests in all subjects and grade levels within the district’s assessment strategy.
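
To make the first element above concrete, here is a minimal sketch of classical item analysis in Python: it computes each item’s difficulty (the proportion of students answering correctly) and a rough discrimination index (the item’s correlation with the rest of the test). The data, thresholds, and names are illustrative assumptions, not Scantron functionality.

```python
# A minimal sketch of classical item analysis, assuming dichotomously
# scored (0/1) responses. Names are illustrative, not a product API.
from statistics import mean, stdev

def pearson(x, y):
    """Pearson correlation of two equal-length numeric lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    sx, sy = stdev(x), stdev(y)
    return cov / (sx * sy) if sx and sy else 0.0

def item_statistics(responses):
    """responses: one list of 0/1 item scores per student."""
    totals = [sum(student) for student in responses]
    stats = []
    for i in range(len(responses[0])):
        scores = [student[i] for student in responses]
        # Difficulty (p-value): proportion of students answering correctly.
        difficulty = mean(scores)
        # Discrimination: correlation of the item with the rest of the test.
        rest = [t - s for t, s in zip(totals, scores)]
        stats.append({"item": i, "difficulty": difficulty,
                      "discrimination": pearson(scores, rest)})
    return stats

# Flag items that are too easy, too hard, or poorly discriminating.
data = [[1, 1, 0, 1], [1, 0, 0, 1], [0, 1, 1, 1], [1, 1, 0, 0]]
for s in item_statistics(data):
    if not 0.2 <= s["difficulty"] <= 0.9 or s["discrimination"] < 0.2:
        print("review item", s)
```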

A word about performance-based tasks (PBTs)

PBTs are typically required to fully measure a DOK 4 level of understanding. Although it is possible to construct selected-response questions to measure synthesis, there are other aspects of DOK 4 that are more difficult to capture with traditional test questions.

The biggest caveat about PBTs, and indeed about any DOK 4 measurement, is that harder is not necessarily better. The perception that a higher DOK number means a better or harder item is erroneous; it ignores the more accurate measure of item difficulty provided by empirical evidence.

Without also assessing DOKs 1–3 that lead to DOK 4‑level understanding, you may not know why a particular student was unable to complete the task acceptably. Understanding the foundation of their knowledge, measured as they grow through the earlier DOK levels, provides critical context to student performance on longer-term DOK 4-level tasks.

2.    Leveraging Emerging Technologies: Item Types and Implementation

“81% of K–12 leaders view standards-based online assessments as a district priority.”[1]
Center for Digital Education

Look for opportunities to leverage technological advancements in assessment, such as technology-enabled items that use audio and video components, as well as technology-enhanced items.

Sounds like a great idea, right? And it is, but you and your school have to be ready.

“63% of schools do not have enough bandwidth to meet the current needs for digital learning and 99% do not have the bandwidth necessary over the next five years.”[2]
Education Superhighway

One of the reasons technology-enabled or -enhanced items are not widely used is that schools need significant available bandwidth to support assessments that include them. These items cannot be used for equitable student assessment until schools can guarantee parity (i.e., all students can take the test the same way).
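
One rough way to gauge readiness is simple arithmetic on peak demand, as in the sketch below; every figure in it is an assumption you would replace with your own numbers.

```python
# Back-of-the-envelope check of whether a building can support media-rich
# items. Every number here is an assumption, not a measured value.
concurrent_students = 120      # e.g., one grade level testing at once
video_bitrate_mbps = 1.5       # per-student stream for a video stem
overhead_factor = 1.3          # protocol overhead plus other school traffic

peak_mbps = concurrent_students * video_bitrate_mbps * overhead_factor
print(f"Estimated peak demand: {peak_mbps:.0f} Mbps")  # 234 Mbps
# A typical 100 Mbps building connection falls far short of this peak,
# which is one reason these item types are not yet widely used.
```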

Emerging technology to support assessments encompasses a number of different concepts. Some of the terms are used loosely and cover too many potential options to be helpful. In addition, some of these options do not have paper equivalents, leading to assessment inequity when you must also support schools that may not have a robust technology infrastructure yet. Leaping into technology without considering its effect on all of your schools may make it significantly more difficult, if not impossible, for these schools to test.

The idea of leveraging emerging technology to support learning is attractive. Unfortunately, there is a lot of confusion over what it means to be ready to support digital learning, including exactly what schools intend to support, particularly around using technology in assessment.

Let’s start by defining some terms we have found useful when discussing the use of emerging technology in assessment:

Technology-delivered items

  • Typically delivered online, but can be delivered on paper without affecting item parity (i.e., the online and paper versions are identical).
  • Students click or touch an answer choice on the screen or fill in a blank using their keyboards.
  • Scored automatically.
  • Examples are:
    • Multiple choice
    • True/False
    • Short answer
    • Constructed response (essay)[3]

Technology-enabled items

  • Rely on technology to deliver audio or video question stems and/or response choices, although paper equivalents may be available.[4]
  • Students view the video or listen to the audio, then select their answer accordingly.
  • Scored automatically.
  • Examples are:
    • Multiple choice
    • True/False
    • Short answer
    • Constructed response (essay)[3]

Technology-enhanced items

  • Students perform a technology-based action to answer questions. For example:
    • Drag provided answer choices to the correct list
    • Color in image segments
    • Highlight text or images
    • Select a hotspot on an illustration

  • These items have no easily gradable paper equivalent.

For details on the kinds of issues you need to consider when evaluating the use of technology-enriched assessments, see our checklist: Are You Ready for Technology-Enriched Assessments?

3.    Providing Value-Added Features and Functions: Accessories and Accommodations

Sometimes, one enhancement to an assessment program can make a big difference in ease of implementation and effective, ongoing usage. For example, a strong online assessment employs smart algorithms that look for testing irregularities (such as a student simply selecting a, a, a, a) and notifies educators when student engagement levels may not be optimal for the most accurate assessment of student performance.
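
As a concrete (and deliberately simplified) illustration, the sketch below flags the a, a, a, a pattern just described; the threshold and names are assumptions, not how any particular product works.

```python
# A minimal sketch of one irregularity check: flag students whose longest
# run of identical consecutive answers meets a threshold.
from itertools import groupby

def longest_run(answers):
    """Length of the longest run of identical consecutive answers."""
    return max(len(list(group)) for _, group in groupby(answers))

def flag_disengaged(student_answers, threshold=8):
    """Return {student_id: run_length} for suspicious response patterns."""
    return {sid: longest_run(ans)
            for sid, ans in student_answers.items()
            if longest_run(ans) >= threshold}

responses = {
    "s01": list("ABCADBCADBAC"),   # varied pattern: looks engaged
    "s02": list("AAAAAAAAABCD"),   # nine straight A's: likely disengaged
}
print(flag_disengaged(responses))  # {'s02': 9}
```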

Simple features, such as controlling how passages and items display (split screen right/left versus the passage on top and item below) or the ability to digitally mask answer options to eliminate them, make a significant difference.

In addition, consider whether you need a secure test client that restricts keyboard functions and web browsing to ensure students are focusing on the test and are not using the computer for other purposes.

Beyond these enhancements, core functionality is needed to support special populations, such as screen reader and Braille support. This is another area where even simple solutions, such as the ability to adjust the size of the displayed text, can improve student performance.

In particular, adaptive tests should allow you to adjust each student’s starting grade level for that subject’s test (while retaining the student’s designated grade level) to more closely match their initial abilities. Meeting students where they are, rather than forcing them to a particular starting line, saves time, reduces frustration, and increases engagement—and ultimately success.
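
The sketch below illustrates the idea with an invented item bank: the adaptive test simply begins with the item closest to the configured starting level rather than the student’s enrolled grade. The scale and items are assumptions for illustration, not Performance Series logic.

```python
# A minimal sketch of why an adjustable starting point matters in an
# adaptive test. The vertical scale and item bank are invented.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    difficulty: float  # position on an arbitrary vertical scale

BANK = [Item(f"q{i}", d) for i, d in enumerate([2.0, 3.1, 4.0, 5.2, 6.1, 7.0])]

def first_item(starting_level):
    """Pick the bank item closest to the student's starting level."""
    return min(BANK, key=lambda item: abs(item.difficulty - starting_level))

# A 6th grader reading at roughly a 4th-grade level starts near 4.0 on
# the scale (while still being reported against grade 6), so the first
# item is neither frustratingly hard nor a waste of testing time.
print(first_item(starting_level=4.0).item_id)  # q2
```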

See the provided checklists for details about how Scantron Achievement Series® and Performance Series® support value-added features and functions.

As you choose your next-generation assessment partner, consider not only the support offered today but also the support planned for the future. If your district plans to use PARCC or SBAC consortium summative assessments, look at how closely your interim assessment partner’s test functionality and content quality, rigor, and scope (see Building the Foundation: Quality, Rigor, and Scope, discussed earlier) map to these exams.

Although your interim exams should not necessarily be merely “practice opportunities” for the high-stakes summative tests, helping students become familiar with the item types and online testing experience is valuable. You may even want to extend this familiarity through all subjects, not just the ones covered on summative exams, so look for a partner who can provide this experience broadly across subjects. See our checklist for details on how Scantron Assessment Solutions align with PARCC and SBAC support categories.

How can Scantron help?

Scantron has an extensive track record of providing computer-adaptive (Performance Series) and fixed-form formative (Achievement Series) assessment solutions—in addition to high-quality item banks and a world-class assessment services team—to help thousands of customers succeed in developing assessments that measure and accelerate student growth. We’ve delivered billions of assessments since 2010—more than 100 million of them online. We are always exploring new assessment methodologies to help you move the needle forward.

Scantron has led the assessment field for decades. We were one of the first partners to align our existing computer-adaptive test to the Common Core State Standards (CCSS)—and we were one of the first to provide detailed reports to align results to the most granular standards level (not just to the strand). But we didn’t stop there. We were the first in the market to develop brand-new content expressly to support CCSS. We’ve been evolving this content ever since, increasing our understanding of instructional shifts and balanced cognitive rigor, and reflecting that understanding in our item banks and tests.

In addition to Scantron’s bench strength in Common Core assessment, the product infrastructure is flexible enough to support clients using state-specific or College & Career Readiness standards as well as integrated blends of state and Common Core Standards. Our Assessment Development and Psychometric Services team also provides support packages for Career Technical Education assessment. “This is a key requirement for supporting organizations who work with educators in different states with different requirements,” says Jay Whitchurch, Scantron executive vice president.

We’ve helped districts with workshops and consulting for content development processes, and we’ve worked with customers to provide high-quality, standards-aligned content. We’ve validated educator assessment efforts and demonstrated positive impact on student learning with research studies ranging from determining growth targets to predictive validity studies.

Beyond assessment vehicles, Scantron offers Scantron Analytics, powered by Qlik®—one of the foremost analytics pioneers. Scantron Analytics presents up-to-date information through highly visual, easy-to-understand dashboards. By storing all information in memory, Scantron Analytics delivers powerful analytics without the need for a separate data warehouse. Using information you’re already collecting, sourced from a wide variety of educational systems, Scantron Analytics displays easy-to-read, graphical dashboards and data visualizations. Important trends and previously hidden connections jump out so you can spend your time developing creative solutions instead of trying to make sense of rows and columns of numbers.

Whatever assessment assistance you need, Scantron has the products, tools, services, and expertise to help you ensure that you have the right program for your students. Our award-winning web-based software, combined with our comprehensive suite of assessment services, helps you get the most out of your assessments and results. We hope this article helps you identify the considerations important to your assessment requirements and see how Scantron can meet you where you are and help you get to where you want to be.



[1]     "Next-Generation Assessments for K-12." How to Choose Comprehensive Next-Generation Assessments for K-12. Center for Digital Education, 22 Oct. 2014. Web. 23 Feb. 2015. <http://www.centerdigitaled.com/paper/How-to-Choose-Comprehensive-Next-Generation-Assessments-for-K-12.html>.

[2]     "The Connectivity Gap - EducationSuperHighway." EducationSuperHighway. Education Superhighway.org, n.d. Web. 24 Feb. 2015. <http://www.educationsuperhighway.org/the-connectivity-gap/>.

[3]     Most tests provide a text entry area for the student to use. Educators must score the answers manually.

[4]     Paper equivalents are very labor intensive (e.g., a proctor would read items otherwise delivered via computer audio). In addition, if you were creating an item based on a passage, the proctor would not easily be able to “replay” segments as students needed to refer back.