Far too many students end up in high school with poor writing skills, setting them up to struggle in college or the workforce.
Milena Keller-Margulis, associate professor in the University of Houston College of Education’s school psychology program, is working to change that. She has received a $1.39 million grant from the federal Institute of Education Sciences to develop a quicker and more reliable way to assess younger students’ writing abilities. College of Education Professor Jorge Gonzalez is co-investigator on the grant, and Sterett Mercer, an associate professor at the University of British Columbia – Vancouver, is co-principal investigator.
Their project will test whether computer-based scoring of short writing samples is effective, which would make it easier for teachers to identify elementary school students who are struggling with basic writing skills and need extra help.
“We are going to make some big steps toward being able to screen students in the area of writing, which we cannot do well right now,” said Keller-Margulis. “This will allow us to ultimately change the trajectory of students who are struggling in writing. The earlier and more accurately we can identify them, the more time we have to change their performance.”
The research comes amid ongoing criticism of lengthy test preparation happening in some schools across the country, with students spending hours on practice exams. Instead, Keller-Margulis focuses on educators using a short but effective approach to check students’ basic academic skills and then track their progress. The method is called curriculum-based measurement, or CBM. While the approach does not yield a comprehensive analysis of a student’s skills, it’s comparable in value to taking a child’s temperature to find out if he or she is sick.
“We don’t need to do a shut-down day and give kids the whole state test,” Keller-Margulis said.
Research suggests, for example, that the curriculum-based measurement for reading – counting the number of words a student reads correctly in a grade-level appropriate passage in one minute – can predict with decent accuracy whether the student will pass the state test.
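The reading measure described above boils down to simple arithmetic. As a rough illustration (not the researchers' actual scoring tool), the calculation might look like this, assuming a teacher records the words attempted and the errors made during the timed reading:

```python
def words_correct_per_minute(words_attempted: int, errors: int, seconds: int = 60) -> float:
    """Score a timed reading probe: words read correctly, scaled to a per-minute rate.

    This is a toy sketch of the general idea behind the one-minute reading
    measure; the names and defaults here are illustrative assumptions.
    """
    correct = words_attempted - errors
    return correct * 60 / seconds

# Example: a student attempts 112 words with 7 errors in one minute.
print(words_correct_per_minute(112, 7))  # 105.0
```

Because the probe takes only a minute per student, a teacher can screen an entire class in a single sitting, which is the efficiency argument the article makes.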
“It’s a screener. It’s not going to be 100% perfect,” Keller-Margulis said. “But the idea is that you’re not wasting a lot of instructional time with testing.”
For the grant-funded project, Keller-Margulis has teamed with Gonzalez and Mercer to develop an improved version of the standard curriculum-based measurement approach to writing for elementary school students. In the typical method, students respond to a prompt, with one minute to think and three minutes to write. However, Keller-Margulis has found that writing for longer – seven minutes or perhaps 15 – may be more effective.
In part, the research team plans to compare the effectiveness of automated scoring of students' writing samples, whether through a free or open-source program or a commercial one, against hand scoring. They also will look at whether the genre of the writing – narrative, informational or persuasive – has an impact. The researchers are not suggesting that automated scoring is better than human scoring. Instead, they will evaluate whether an automated process gives teachers enough information to accurately screen students' skills and flag those who need extra help.
“When you do curriculum-based measurement, it’s like taking someone’s pulse, or tracking where is a child in this particular domain such as writing,” said Gonzalez, whose research focuses on early language and literacy. “And by knowing where a child is, it allows teachers to differentiate instruction when necessary for children who may need more supports. To do it well and timely is critical, especially for young children.”
– Article written by Ericka Mellon, director of communications, College of Education