A significant instructional change has occurred within the last decade: educators are focusing on using data to inform instruction, supported by a culture that is increasingly data driven.

In part, this change has been caused by an increasing volume of data, readily available in a digital format. In addition, data is available more quickly than ever before—teachers can access formative assessment results just seconds after testing is finished, enabling them to use test results to guide their instruction.

More and more, teachers are relying on this information as part of their daily practice. According to Teachers Know Best: Making Data Work for Teachers and Students, a report released recently by the Bill & Melinda Gates Foundation:

“Virtually all teachers (93 percent) regularly use some form of digital tool to guide instruction. But more than two-thirds of teachers (67 percent), across a vast range of schools nationwide, say they are not fully satisfied with the effectiveness of the data or the tools for working with data that they have access to on a regular basis.”[1]

This report further details the top challenges facing educators trying to function in a data-driven environment:

  • Data volume is often overwhelming and spread across many disparate sources, making it hard to interpret.
  • Data formats are incompatible with each other, requiring significant manual work to format them appropriately.
  • Data sources do not provide consistent information, leaving educators with difficult decisions on what source to believe.
  • Data arrives too late to make any difference in instruction.

How can educators overcome such issues and get more value from their data? The answer is to move from viewing data in isolation, as a series of static reports, to viewing data that’s connected and dynamic. Modern analytical technology makes this transformation possible: it can quickly aggregate data from a variety of sources, forming interconnections among students, classes, assessments, and grades. You can then explore these relationships through an intuitive interface, discovering insights that were previously hidden.

We will explore five key ways that reporting and analytics differ, but first, it’s important to understand the difference between data, information, and knowledge.

Getting From Data to Knowledge

The terms “data” and “information” tend to be used interchangeably, but there’s an important distinction between them—and a third term, “knowledge,” that’s often overlooked.

Data, at its heart, is simply a collection of unorganized or barely organized facts. Think of a spreadsheet—lots of numbers, lots of data, but not much context or interpretation. Computers are great at producing and interpreting data, but people require something more. Consider the following data point: A student scores 45 on a test. Based on this alone, we have no way to know if that score is good, bad, or indifferent.

Information is the organization and presentation of data so that it becomes meaningful—often achieved visually. Information lets us see trends and make connections. Think of charts and graphs—the pictures tell a story about the data that you can understand more easily. Continuing the example above, if you are told that a student has scored 45 out of a possible 50 points, you now have some context for how they have done. More information may still be helpful—for example, was this an improved result from the last test? However, this is still only context around a single number.

Knowledge is information you can use, which dramatically increases its value. With knowledge, we understand the information we see, absorb it, connect it to other information we have, and make decisions based on it. Using the same example, if the student scored 45 out of 50 on last week’s unit post-test but had scored 35 out of 50 on the unit pre-test and 45 was the top score in the class, now we know something about the student with which we can make decisions—or, in this case, congratulate ourselves on a job well done for this student.
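The progression above can be sketched in a few lines of code. This is a minimal illustration only, using the hypothetical scores from the example; the field names and the decision text are invented:

```python
# A minimal sketch of moving from data to information to knowledge,
# using the hypothetical scores from the example above.

# Data: a bare number with no context.
score = 45

# Information: the same number organized with context.
info = {"score": score, "max_score": 50, "percent": score / 50 * 100}

# Knowledge: information connected to other information,
# supporting a decision.
pre_test, post_test, class_top = 35, 45, 45
improved = post_test > pre_test
top_of_class = post_test == class_top

if improved and top_of_class:
    decision = "celebrate growth; extend with enrichment work"
else:
    decision = "review missed items; plan targeted support"

print(f"{info['percent']:.0f}% on the post-test -> {decision}")
```

Each step adds context the previous one lacked; only the last step supports an instructional decision.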


The progression from data to information to knowledge corresponds to three different levels of technology:

  • Basic reporting

Basic reporting often just gives us data. For example, the table on the right shows the distribution of test scores. It’s not terribly visual, nor is it easy to extract insights from the rows and columns of numbers.

  • Charts and graphs

Charts and graphs help turn data into information, which can lead to simple insights. For example, the chart below presents the same data from the table in visual form.

Here we can easily see a pattern: Reading results show a distribution of scores clustered toward the upper end, with the majority of students scoring between 5 and 8 on the test. But that’s the only pattern we can see. We don’t know how students did last year, and we can’t correlate these scores with students’ grades or other assessments. In fact, we can’t see anything other than the score distribution. While we have an insight, it answers only one question. Any other answers still elude us—although we may still have plenty of questions.
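A distribution like the one just described can be produced with a few lines of code. The scores below are invented for illustration; the point is simply that counting scores turns raw data into information:

```python
from collections import Counter

# Hypothetical reading scores for a class (invented for illustration).
scores = [3, 4, 5, 5, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 9]

# Turn raw data (a list of numbers) into information (a distribution).
distribution = Counter(scores)

# Print a simple text histogram, one bar per score.
for score in sorted(distribution):
    print(f"{score:2d} | {'#' * distribution[score]}")
```

The bars make the clustering between 5 and 8 easy to see, but as noted above, that is the only pattern this view reveals.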

  • Discovery analytics

Discovery analytics transforms information into knowledge. By combining multiple data sources in a single location, you can quickly see associations between academic results and behavioral incidents.

Because discovery analytics is dynamic, you can interact with charts by drilling down and across segments to spot correlations, differences, and patterns, giving you answers as fast as you can ask questions.

Now, let’s explore the five differences between reporting and analytics in more detail.

1. Data Sources

One of the things we hear regularly from districts is the difficulty they face consolidating all of their data in one place. Assessment scores are captured in multiple formats from multiple vendors, which seldom integrate into central district locations such as the Student Information System (SIS). Individualized Education Programs (IEPs) often reside in separate systems—and Learning Management Systems (LMS) can be a third data island.

Most reporting environments support only the data and information needs of educators, often from a single source. As a result, answers from such environments are limited to a simple set of questions. Alternatively, data may be exported to a spreadsheet, where you then have to manipulate it into a meaningful format. Either way, getting information is neither easy nor fast.

Conversely, in a discovery analytics environment, data is seamlessly unified from a variety of data sources, without any manual intervention or use of spreadsheets. An effective discovery analytics solution[2] does this without requiring a data warehouse, thereby minimizing the need for an expensive and time-consuming Information Technology (IT) project.
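The unification described above amounts to joining records from different systems on a shared key. The following is a simplified sketch, not how any particular product works; the student IDs, field names, and records are all invented:

```python
# A minimal sketch of unifying hypothetical data sources on a shared
# student ID (all names and records invented for illustration).

# Three "data islands": SIS, assessment vendor, and LMS extracts.
sis = {101: {"name": "Ana", "grade": 5},
       102: {"name": "Ben", "grade": 5}}

assessments = {101: {"reading_score": 45},
               102: {"reading_score": 38}}

lms = {101: {"assignments_done": 12},
       102: {"assignments_done": 9}}

# Join the three sources into one record per student.
unified = {
    sid: {**sis[sid], **assessments.get(sid, {}), **lms.get(sid, {})}
    for sid in sis
}

print(unified[101])
```

Each student record now carries SIS, assessment, and LMS fields together, so questions can span sources instead of being confined to one.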

2. Static vs. Dynamic

Another complaint we often hear with conventional reporting is that each report is static, which leads to never-ending requests for additional reports. Even in districts with hundreds of reports, there’s still a need for one additional column or one more way of slicing the data. The result is “data rooms,” with walls covered in paper reports where people wander around trying to draw conclusions or insights.

Because reporting solutions provide you with static tables and charts, if you want to ask additional questions, you have to create new reports. Having gathered a pile of reports, you then need to manually compare them side by side, carefully noting differences, along with additional questions that still require answers.

Discovery analytics is different because it is dynamic. While you may start with a dashboard, you decide where to dig deeper. If the answer to your first question generates more questions, you can immediately uncover the answers simply by clicking. You can drill down into further detail, up for a broader view, or across to additional connections. You don’t have to wait for someone in IT to write a query. You just ask the question, and the system guides you to the answer.

3. Ease of Exploration & Discovery

Districts are often resource-poor. Not every district can afford a data warehouse and the IT staff needed to support it. Even if a district office has the resources to build a data warehouse, school administrators and teachers may lack an easy way to access the data it stores.

In most reporting environments, data and information are concentrated in the hands of a few. That data is often locked in separate file formats and locations, requiring multiple complex steps to answer even a simple question. IT experts must extract the data and format it in a way they think makes sense, which can mean that by the time the report reaches the front line, the opportunity to have a positive impact on student learning has already passed.

Further, the data is almost always presented in isolation, so while it may be helpful to have the scores, say, for your state summative test all in one report, it would be even more helpful to compare them against the benchmark assessments from earlier in the year or disaggregate the data by ethnicity, gender, or poverty indicators.

Discovery analytics changes that process. Data is passed directly into the system, where charts and graphs are created automatically. From here, it’s easy to explore the information and see patterns, often with a single click. Through this process of exploring data, deeper insights are gained that lead to knowledge of why things are happening.

A strong discovery analytics solution[3] connects the dots between multiple data sources. You’re no longer looking at data in isolation (e.g., state summative test scores for this year). You are comparing this year’s scores with last year’s and adding a layer that links them to formative testing throughout the year. You can then disaggregate results based on student demographics such as ethnicity or lunch program participation, and much more. Now you have real context for those scores—they tell a complete story you can use to help students succeed.
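The disaggregation described above is, at its core, grouping scores by a demographic field and summarizing each group. A minimal sketch, with all student records invented for illustration:

```python
from collections import defaultdict

# Hypothetical student records (invented for illustration).
students = [
    {"score": 42, "lunch_program": True},
    {"score": 48, "lunch_program": False},
    {"score": 35, "lunch_program": True},
    {"score": 44, "lunch_program": False},
]

# Group scores by lunch-program participation...
groups = defaultdict(list)
for s in students:
    groups[s["lunch_program"]].append(s["score"])

# ...then average each group to compare subpopulations.
averages = {k: sum(v) / len(v) for k, v in groups.items()}
print(averages)  # {True: 38.5, False: 46.0}
```

The same pattern works for ethnicity, gender, or any other field carried in the unified data, which is why connecting sources matters so much.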

4. Speed & Independence

As educational institutions have become more data driven, stakeholder expectations of how data can impact instruction have risen. For example, we met with the assessment director for a large Florida district. It was the day before a school board meeting, and our meeting was continually interrupted by incoming requests for different analyses from the superintendent and various board members. With every request, the assessment director spent 10 to 15 minutes answering a single question.

In a reporting environment, the story above is the norm, not the exception. Educators who need the information are unable to access it quickly and often need to pass their requests to an IT person with knowledge of the underlying database. The process is painful, lengthy, and inefficient.

Also, many tools don’t do a good job of protecting student privacy.[4] Because reports are difficult to create, they are often not customized to an appropriate level of access. There’s typically one report covering the entire data set, rather than tailored reports that expose only the information each recipient should rightfully see. It’s hard to create knowledge independently when you have to manually exclude data.

Discovery analytics solutions epitomize speed and independence. Data from disparate sources is integrated the moment it’s received or updated. Dashboards immediately update to display the most recent information. Anyone with access can instantly begin exploring all their data to uncover patterns and relationships that they then turn into knowledge and decisions.

Such solutions also protect student privacy with role-based access. Teachers are only able to explore connections among data for their classes and students. Principals can only examine data for their schools, and so on. No one has to do anything extra at the point of access—these permissions and protections are built into the system.
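The role-based protection described above can be thought of as a filter applied automatically at access time. This is a simplified sketch only (roles, scopes, and records are invented; a real solution enforces this inside the platform, not in application code):

```python
# A minimal sketch of role-based access: each user sees only the
# student records their role permits (all data invented).

records = [
    {"student": "Ana", "school": "North", "class": "5A", "score": 45},
    {"student": "Ben", "school": "North", "class": "5B", "score": 38},
    {"student": "Cal", "school": "South", "class": "6A", "score": 41},
]

def visible_records(role, scope):
    """Teachers see their class; principals see their school; others see nothing."""
    if role == "teacher":
        return [r for r in records if r["class"] == scope]
    if role == "principal":
        return [r for r in records if r["school"] == scope]
    return []  # unknown roles see nothing by default

print([r["student"] for r in visible_records("teacher", "5A")])
print([r["student"] for r in visible_records("principal", "North")])
```

Because the filter is tied to the role rather than to each report, no one has to remember to exclude data by hand.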

5. Linear vs. Associative Access

Traditional paths to knowledge typically run through a data warehouse or through the painful combination of multiple spreadsheets. They’re usually IT-driven and, by their nature, complex and technical. They are data-centric, requiring intimate knowledge of the data and the relationships within it. They are slow to build and change, taking months or even years. This is a huge problem for districts, which need to be nimble and responsive.

In a reporting environment, information is usually distributed in a linear way. The district is the keeper of the source data. A data expert at the district turns that data into information and sends it to the schools, which provide it to teachers, who provide it to students. Linear distribution is cumbersome and slow. Also, as we discussed earlier, you get what you get: your exploration options are limited to what’s in front of you, which may not answer the questions you want to ask.

By contrast, discovery analytics opens up a world of educator-driven associations. Because the data is available for individual exploration, you’re not limited to the questions the data experts asked. You can discover your own answers and see your own patterns, revealing complete stories about your students. These data stories guide you to decisions based on facts, not guesswork.

[1] “Teachers Know Best: Making Data Work for Teachers and Students.” Impatient Optimists. Bill & Melinda Gates Foundation, 3 June 2015. Web. 20 July 2015. <http://www.impatientoptimists.org/Posts/2015/06/Teachers-Know-Best-Making-Data-Work-for-Teachers-and-Students#.Va1d8flVhBc>.

[2] Such as Scantron Analytics.

[3] Such as Scantron Analytics.

[4] “Teachers Know Best: Making Data Work for Teachers and Students.” Impatient Optimists. Bill & Melinda Gates Foundation, 3 June 2015. Web. 20 July 2015. <https://s3.amazonaws.com/edtech-production/reports/Gates-TeachersKnowBest-MakingDataWork.pdf>.