Tuesday, March 27, 2012

Big and Little

This material is reposted from my other blog, but I think it's worth sharing here, too. I like the idea of visuals that give educators both macro- and micro-level views of information. After all, school is not about making widgets. It's about learning with human beings. What are other ways we can "humanize" the data analysis process?

In the world of grading practices, there is standards-based...and hodgepodge. Hodgepodge grading (a term found throughout the research literature) refers to a score or final grade that represents more than learning. In other words, a teacher who folds student "effort," timeliness, neatness, or other factors into a grade, in addition to what the student learned about the topic/subject/standard, is a hodgepodge grader. The general consensus in the research is that this is not such a hot thing to do; those factors should be reported separately.

And then there is this article, which actually makes chicken salad out of these chicken sh..., er, hodgepodge grades. Here's the abstract:
Historically, teacher-assigned grades have been seen as unreliable subjective measures of academic knowledge, since grades and standardized tests have traditionally correlated at about the 0.5 to 0.6 level, and thus explain about 25–35% of each other. However, emerging literature indicates that grades may be a multidimensional assessment of both student academic knowledge and a student's ability to negotiate the social processes of schooling, such as behavior, participation, and effort. This study analyzed the high school transcript component of the Education Longitudinal Study of 2002 (ELS:2002) using multidimensional scaling (MDS) to describe the relationships between core subject grades, non-core subject grades, and standardized test scores in mathematics and reading. The results indicate that when accounting for the academic knowledge component assessed through standardized tests, teacher-assigned grades may be a useful assessment of a student's ability at the non-cognitive aspects of school. Implications for practice, research, and policy are discussed.

What's this all mean? According to the article, "25% of the variance in grades is attributable to assessing academic knowledge...and the other 75% of teacher-assigned grades appear to assess a student's ability to negotiate the social processes of school." Hodgepodge, indeed. However, "while administrators have indicated that they privilege standardized test scores over other forms of data (Guskey, 2007), little criterion validity has been shown for test scores as they relate to overall student school or life outcomes (Rumberger & Palardy, 2005), whereas teacher-assigned grades have a long history of predicting overall student outcomes, such as graduating or dropping out (Bowers, 2010)." So, if hodgepodge grades are better predictors of whether a student will finish school, why not find a way to use them to identify at-risk students?
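A quick aside on where numbers like that come from: when two measures correlate at r, they share roughly r-squared of their variance. Here's a back-of-the-envelope check (my arithmetic, not the article's):

```python
# Shared variance between two measures is (roughly) the squared correlation.
for r in (0.5, 0.6):
    print(f"correlation r = {r} -> about {r**2:.0%} of variance shared")
# correlation r = 0.5 -> about 25% of variance shared
# correlation r = 0.6 -> about 36% of variance shared
```

Which is essentially the 25-35% range the abstract cites; the remaining roughly 75% of grade variance is what the study attributes to everything else going on in school.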

And so the article's author, Alex Bowers, did just that. And this, my friends, is one of the graphics:

Hierarchical Cluster Analysis by Alex J. Bowers from http://www.pareonline.net/pdf/v15n7.pdf


I won't get into the nitty-gritty here; you can read the article for yourself, if you like. But basically, what you're looking at is every student's grades across their K-12 experience, for an entire district. The students are sorted and clustered according to the patterns their grades make. At the top of the heatmap are the kids who are mostly red, the ones who scored well throughout their academic careers; at the bottom, the ones who struggled. The funky brackets on the left form a dendrogram that groups students with similar performance: the farther out two brackets join, the less alike the groups they connect. On the right, the black boxes include data that are not grade-dependent but were judged worth considering alongside the grades.
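If you're curious how a graphic like this gets built, here's a minimal sketch in Python on made-up data. To be clear, this is not Bowers' code or his data; it's just the same general recipe of hierarchical clustering plus a heatmap, which the seaborn library's clustermap function handles in one call:

```python
# Toy sketch of a clustered grade heatmap, in the spirit of Bowers' figure.
# All data below are invented; the real study used actual district transcripts.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Fake GPA-style marks (0-4) for 60 students over 24 grading periods.
# Half the students trend high and half trend low, so the clustering has
# something to find.
high = np.clip(rng.normal(3.3, 0.4, size=(30, 24)), 0, 4)
low = np.clip(rng.normal(1.8, 0.6, size=(30, 24)), 0, 4)
grades = pd.DataFrame(
    np.vstack([high, low]),
    index=[f"student_{i:02d}" for i in range(60)],
    columns=[f"term_{t + 1:02d}" for t in range(24)],
)

# Cluster the rows (students) by how similar their grade histories are;
# the dendrogram drawn on the left plays the role of the "funky brackets."
g = sns.clustermap(
    grades,
    method="ward",       # agglomerative (hierarchical) clustering
    metric="euclidean",
    col_cluster=False,   # keep grading periods in chronological order
    cmap="RdYlBu_r",     # red = high marks, blue = low
    figsize=(8, 10),
)
g.fig.suptitle("Toy longitudinal grade heatmap with student clustering", y=1.02)
plt.show()
```

The brackets seaborn draws are the same idea as in the figure above: students whose grade histories look alike get joined close in, and the splits farther out separate the consistently strong students from the ones who struggled.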

Now, what the researcher found out from doing this sort of analysis isn't groundbreaking: Kids who struggle in school (gradewise) are more likely to drop out. And even though I can't condone hodgepodge grading, what is important here is that this is the first attempt I've ever seen that gets away from hodgepodge analysis.

I think every piece of research I've seen (up until now) does its best to mash the data into neatly digestible bites---just like we tend to do with student grades. Educational research doesn't represent individuals as individuals, but as populations. We seek to generalize, because we feel we have to, given the amount of data we collect. But I can't help but wonder what we'd see if we looked at all of the educational data we collect at both the micro and macro levels: the trees and the forest, like the graphic above. We are slowly taking steps to move hodgepodge away from the classroom performance level. Will we---Can we---see it disappear from the research, too?
