In between teaching math and where I am today – in the final year of a PhD program in math education – I obtained a master's degree… in atmospheric science. Makes perfect sense, right? I won't bore you with the details, but my reasoning was that I've always been interested in weather and climate, and I wanted to try something different before I settled into a career wholly in education (which, once I graduated, ended up happening). Atmospheric science research, as you can probably imagine, is an entirely quantitative field. My research thesis involved running models of tropical cyclones under different aerosol conditions. The analysis involved visualizing and representing quantitative data.

**Some problems with quantitative educational research, written by a guy with a highly quantitative background**

So I initially approached education research from a highly quantitative-oriented perspective: you try Intervention X on Population A and a non-intervention on Population B and see what happens. If there's a meaningful positive effect, Intervention X is good and should be enshrined in the mathematics pedagogy canon.

The more time I've spent reading and discussing education research, the less enamored I am with quantitative research. For one, populations are not alike. I knew that intuitively going in – that Population A ≠ Population B – but I think I underestimated just *how* different they can be. We've all had classes that, for whatever reason, "work" and others that "don't." Teaching in a large comprehensive high school, my first period and fifth period ostensibly drew from the same general school population but, for some reason, didn't operate the same. Quantitative research tries to control for this by describing the demographic data of the student population, but demographic differences don't explain why my first period and fifth period classes performed so differently. It's a strange alchemy of student personalities, physiological differences, my own energy and biases, and countless other variables that have little to do with my direct instructional decisions. Similar differences hold true for the activators of the interventions (i.e., teachers). Teachers' personalities and teaching approaches are so vastly different that there's no real way to demonstrate that Intervention X is applied the same across classrooms.

One final general critique of quantitative educational research is that the intervention conditions and their measures can be meaningless. Consider an example from a research paper that compared various interventions. The "intervention" in this case was to simply not teach students how to do something.

The conclusion of the paper is – and this is going to shock you, I know – that teaching students results in higher exam scores than not teaching students. It will probably be cited by educators with an axe to grind against constructivist approaches. What the paper demonstrates is that, yes, if you don't tell kids anything, they perform less well on a standardized examination that occurs shortly after the instruction or non-instruction. I'm providing this example not to take a side in the meaningless and endless direct instruction vs. inquiry holy war, but to show how methods and methodology can be murky, particularly when it comes to pedagogy. To be sure, there are probably practitioners of inquiry-based learning who do exactly that: don't instruct kids, but rather just tell them to find another way. So yeah, don't do that.

It’s hard to glean broadly applicable “best practices” from *any* educational research. Students, teachers, and classrooms are highly individualized and complex systems. That’s why one of the tacit undergirding principles of *Necessary Conditions* isn’t “teach this way,” it’s to provide a framework under which countless strategies might apply.

The further you zoom out, increasing the "n," the more you dilute or muddy the intervention itself. There are certainly practices that are better than others out there, but even those might not be appropriate for all students. Be skeptical of any paper that pits one strategy against another (editor's note: this isn't really a criticism of quantitative research, but of educational research in general).

**Becoming a Qual Guy**

Qualitative research allows for rich, thick descriptions of students, teachers, classes, and interventions. A practitioner may read a qualitative study and determine for themselves if it's applicable or something they'd like to try out. Or it describes a phenomenon in such detail that it illuminates understanding more so than a quantitative study could.

One of my favorite qualitative papers is by former NCTM President Dr. Robert Q. Berry III. The paper is titled "Access to Upper-Level Mathematics: The Stories of Successful African American Middle School Boys"; it appeared in the Journal for Research in Mathematics Education (JRME) in 2008. The paper looks at the experiences of eight African American boys who found success in math class. He zooms in on their experiences, and zooms in even further on the experiences of three of them. Dr. Berry yields the floor to the students for much of the paper.

He also gives the reader enough context for the experiences of these students without overwhelming them.

I also enjoy the table Dr. Berry provides, which allows for some high-level understanding of the students' experiences.

The commonalities and differences of the boys' experiences are buttressed by the voices of the students themselves. For example, Berry investigates further into the meaning of Darren's lack of parental academic advocacy.

Berry's paper helped me get unstuck in my research. It showcased how I could continue to keep the student experience front and center while still employing rigorous research methodology. The discussion section of the paper offers actionable items that educators may wish to employ. And because we have a rich understanding of the context and the individuals who participated, we can be discerning. Berry (2008) does an excellent job of engendering trust in the reader by discussing the study's methods and his own positionality, organizing the paper clearly, and diving deeply into a few of the students who participated.

I don't mean to pit quantitative and qualitative research against each other. They both have many merits and demerits, particularly when it comes to education. One could easily become dismayed with qualitative methods after reading a few unconvincing qualitative studies. This post is more a chronicling of my foray into qualitative research, despite once being convinced that it was fuzzy.

The lesson I continue to learn is that all education pedagogy research is fuzzy to *some* degree. Unlike in atmospheric science, we can't rerun our model with slightly different variables. As complicated and dynamic as mesoscale physical weather models are, classrooms are infinitely more complex.

**Reference**

Berry, R. Q., III. (2008). Access to upper-level mathematics: The stories of successful African American middle school boys. *Journal for Research in Mathematics Education*, *39*(5), 464–488.