Every dataset has a story. We usually look only at the data and ignore the story. For example, according to my original findings, and as approved by a committee of esteemed researchers in education and science, I could make this statement:
Pre-service elementary teachers showed a statistically significant gain in their learning about the moon and teaching elementary students about the moon by inquiry.
And this supporting statement:
The study shows that pre-service teachers' average scores increased by 7 points from pre-test to post-test on a 21-item test.
If this were taken as the only finding from my dissertation, these pre-service teachers obviously demonstrated significant learning. All is well.
Another look, from a perspective otherwise easily missed, is this finding:
While there was a significant increase in constructed knowledge among the experimental group, the mean posttest score of 11.17 out of 21 possible is only 53%.
And then there’s this:
At the individual student level, these data are of even more interest… The experimental group, however, had 15 out of 24 pre-service teachers (or 63%) who ended the semester with more misconceptions than they had at the beginning.
At this point, the story needs a disclaimer. I did not teach this class. The study took place in 2002-2004. The instructor of record is no longer teaching at this university. The course has been taught in a very different and, it is hoped, more successful manner for several years now.
This is human subject research, folks, with many uncontrolled variables and many stories not easily measured with numbers. Using data from standardized student tests alone is very, very dangerous. I’ll stand unwaveringly on that position until this whole NCLB/RTTT/evaluation madness goes to the grave.