Research and Theory behind decisions…

So, looking at this week’s meeting agenda, it appears the board held a vote about PBE (I’ve been unable to attend the last few meetings, unfortunately) and voted to continue on its path. Disappointing, given the lack of any research supporting its efficacy, but not surprising. And especially not surprising since tonight’s meeting is billed as a “workshop” — though that certainly appears to be just for show. How can you have a “workshop” after you’ve already voted on your decision?

Below is an email I sent after the last “workshop” — which was primarily just a place for the same select teachers who spoke last April to share their views. I got no response from anyone.

I was handed a post-it and told that Hattie is who they use to guide their thoughts/processes about PBE. What I found about Hattie’s “work” is disheartening, but not surprising. Essentially, he admits that about half of the work in his first book has huge statistical flaws. (Discussion about this can be found here: http://visablelearning.blogspot.com/p/cle.html.) The book is listed on RSU5’s site as the support for their “feedback” model. And, as an aside, am I the only one who got feedback from teachers in school without someone needing to label it as new and innovative and part of PBE?

Regardless, my frustration is nothing new. I ask for unbiased, peer-reviewed research, and I’m met with shoddy approximations and told I just don’t “understand.”

Once again, I contend that I do understand that there’s no real research supporting PBE. I understand the premise of PBE. I just don’t AGREE that its methods have been proven in any way to be the best way forward. Its efficacy is in question. And the board relies on people selling things to tell them that something works. Maybe what we need more of in this district is scientific teaching and learning — the art of QUESTIONING what you’re presented with and only considering it as “fact” IF there’s evidence to support it. And shoddy evidence presented by people making money shouldn’t count. Writing a book doesn’t automatically make something valid, and anyone in education who continues to point to a book that has basic flaws in its statistical research, after it’s been pointed out to them, obviously doesn’t understand — and is also choosing not to understand. And it saddens me that our board seems to fall into this category.

—– Forwarded Message —–

From: Pam

To: gulkoj@rsu5.org

Cc: Becky Foley <foleyb@rsu5.org>; Board <board@rsu5.org>

Sent: Thursday, September 13, 2018, 12:42:10 AM EDT

Subject: quick note…

Jen –

I appreciate you taking the time to share one of the sources you and other teachers have felt has been valuable in directing the district in its revamp of teaching. I’ve only started to dig into Hattie’s work (I think I’ve spent maybe 30 minutes), but I’m already finding distressing information about his meta-analyses and the conclusions he draws based on his interesting approach to statistics.

I’ll be digging into it more to make sure I’m not just getting one side of this, but, if what these articles say is true, I have deep concerns about the validity of his work, and therefore about the conclusions he draws.

https://ollieorange2.wordpress.com/2014/08/25/people-who-think-probabilities-can-be-negative-shouldnt-write-books-on-statistics/  and https://academiccomputing.wordpress.com/2013/08/05/book-review-visible-learning/ — and there are others, discussing fundamental issues (flaws) with his approach to “statistics.”

While I appreciate the effort to try to find and share some valid research to support PBE methods, in my first brush with Hattie’s work and what appear to be valid criticisms of that work, I don’t have the same confidence others do that Hattie’s conclusions are valid, or something that I’d use to support the restructuring of an educational system.

Also, even if I were to trust his interpretation of the results of all the research, “feedback” comes in as only the 32nd most important factor (and he has said student feedback, not just teacher feedback, is vitally important: https://visible-learning.org/glossary/#10_Feedback — so I don’t think feedback on a summative assessment, which a student then uses to study more and retake that assessment, was necessarily what he was referring to), and “mastery learning” (PBE/PBL/competency-based, etc.) comes in at 57.

That list can be found here: https://visible-learning.org/hattie-ranking-influences-effect-sizes-learning-achievement/

Unfortunately, from my quick check, I come back to the same concerns I had before: just because someone sells it, says it’s true, or has it in print doesn’t make it true, valid, or peer-reviewed, quality research. That’s not to say there isn’t some valid research among the studies he looked at, even if his use of their results doesn’t yield anything useful. It appears he bungled (at least in terms of being statistically correct in his conclusions) his attempt to take widely disparate studies and data and show he was comparing apples to apples. Per the criticisms, this just wasn’t accomplished. And I think it’s a pretty common flaw and limitation of meta-analyses and what they can reliably say. But I may be able to dig up some research from this (specifically, I’ll try to track down what he used to assess “mastery learning”) to see if I can find truly valid research and data on PBE and student outcomes.

So, this starts me on a path, but I think I’ll pass on accepting Hattie’s assumptions about what works best to improve student achievement. That’s not to say all his conclusions are wrong. I just have no way to know, because the fundamental flaws in his methodology force me to dismiss all of his conclusions; they’re not drawn from a valid analysis.

Best regards,

Pam
