Assessing Comprehension in the Real World

Our world has transitioned from a pen-and-paper experience to a multimedia one. Instead of getting our information strictly from encyclopedias and books catalogued on note cards at the public library, we now look to the internet, social media, texts, videos, and online courses. What this means for reading is that we no longer need to become fluent in just one form of presentation; we must become proficient across multiple platforms.

According to John Sabatini and Tenaha O’Reilly, though, testing children’s reading ability has remained a primarily linear experience. We present them with a paragraph or two to read and then have them answer multiple-choice questions on a printed page. That may have worked in the past, but it is definitely not the way people read, learn, and process information in this day and age.[1]

These researchers lead a team at ETS (Educational Testing Service), and they believe there is a better way to determine whether a student possesses the skills necessary to comprehend what they read, no matter the format.

“In the real world, when you go to buy a cell phone … you have a goal,” O’Reilly said, and that goal serves as motivation. You want to learn as much as you can about all the different options so you don’t make a wrong choice. “You go on the Internet to compare prices and features, read reviews, and then make a decision,” he added. “That is a different thing than going to a test and answering detailed questions.”

The ETS team is one of six teams funded by the federal Reading for Understanding (RfU) research initiative, which was created to provide effective strategies for improving reading comprehension for students in grades pre-K–12 and to bring reading assessment and intervention into the real world of the 21st century. Some of the RfU teams are looking at how students learn to read; others are looking at ways to support struggling readers. The ETS research team has spent over five years measuring how well students in 28 states read and understand information presented to them in their everyday lives, administering more than 100,000 pilot tests to assess performance. Their conclusion is that assessments can provide an opportunity for learning and discovery, not just appear as a numbered or lettered score on a piece of paper.[1]

The team’s assessments require coordinating different pieces of information to arrive at an appropriate answer, which is usually written as a summary rather than marked as a filled-in bubble on a multiple-choice answer sheet. During some of the trials, students were allowed to search a simulated internet within the testing environment; they then had to determine which of the information they found was relevant to the answer and which was not, and they had to come up with a plan for structuring their written response. The summary they produced demonstrated whether or not they had read and learned the material.

“We are trying to get assessments to be learning experiences,” O’Reilly said. “We want the test to be worthwhile.”

The team also feels that the best learning happens when students are allowed to make mistakes and are then given the opportunity to learn from and correct them. For example, students may mark an answer as correct but then be presented with additional information and asked whether, given this new material, they would like to revise their answer or stay with their original choice. Their ability to make an informed decision demonstrates how well they have understood both the material and the question being asked.

Students who struggle with writing a concise and accurate summary may be paired with a virtual peer — someone who can help guide them during the assessment. Students write their own summary and then look at the virtual student’s work, which shows them what needs to be done by presenting an example of how a correct, completed answer should look. This technique is valuable, according to Sabatini, because “In the real world you collaborate and work with people. We’ve added simulated students who come in at different points to help you out. This reduces stress and makes the testing experience more social.”

“Everyone makes errors,” O’Reilly added. “But how we recover from those errors really matters.”

While the AceReader program does not allow for written responses at this time, it does test students’ ability to infer correct answers by piecing together different facts or statements, and to determine main ideas and patterns of organization, through its General (Inference) Test Set. Students who work through this Test Set are expected to go beyond simple rote memorization, basing their answers on multiple pieces of information in the text and drawing conclusions from them. Teachers can choose this Test Set as the default for their students if they wish to use this alternative form of assessment. In addition, teachers can upload research text into the Read Mode to approximate the type of assessment scenario that the RfU researchers used during their studies.

 

Citation:

[1] Bhombal, Manaal. (2016, November 8). Texts, Emails, Blogs: Assessing reading comprehension in the real world. Retrieved from https://www.pagalguy.com/news/texts-emails-blogs-assessing-reading-comprehension-in-the-real-world-4871396211556352

Author: AceReader Blogger

The AceReader blogging team is made up of specialists in a number of different areas: literacy, general education, content development, and educational software. For questions about posts, please submit them in the form below. For suggestions about blog topics, please email them to blogger@acereader.com.
