#87 Jennie Dougherty: When Practice Leads the Research


This entry in #100DistrictConversations with Jennie Dougherty, AI Lead and Consultant at KIPP Public Schools Northern California, was something special. Because we had the chance to sit down in person at SXSW EDU 2026, we got to take our time and really explore all of Jennie's moves, intentions, and learnings.

What emerged is a powerful article that anyone working to pilot AI tools in schools should read...multiple times.

I met Jennie at our first Highlander Institute Blended and Personalized Learning Conference more than 10 years ago. This gathering was hosted in the cafeteria of Highlander Charter School, with approximately 40 attendees. We had some really great speakers like Jin-Soo Huh and Eric Westendorf, among many others that my old-man brain is forgetting.

Jennie was our keynote speaker that year. Her Beta Classroom blog, which she still keeps up with all these years later, was a breath of fresh air in the independent edtech reporting space at the time. Even today, her writing is so easy to read, so direct, and so educational when it comes to implementing edtech in actual schools and classrooms.

If you haven't yet had a chance to read Jennie's work or sit and talk with her about how she's approaching the piloting and continuous improvement processes necessary for strong classroom implementation, then let this article be your starting point. Like me, you'll become a fan for life.

Thank you, Jennie, for sitting down with me for conversation #87. I hope we get to do it again real soon!

#100DistrictConversations with Jennie Dougherty

There are people in this field who think carefully, act deliberately, and share what they're learning with generosity. Jennie Dougherty is one of those people. When I reached out to her as part of my #100DistrictConversations initiative, I knew the conversation would sharpen my thinking.

At Throughline Learning, one of the things we're holding onto more than ever is a key lesson from the blended learning era. From 2011 to 2019, we got overly excited about early outputs without reckoning with the missing evidence base for student and teacher outcomes. Approaching AI in the very same way would be harmful.

However, there are so many well-researched, effective pedagogical strategies that the field has championed for decades. If we can support more teachers to do those things more often, that's a leading indicator of something real. The question we keep asking is whether AI can actually help close the gap between professional learning and what teachers can consistently implement in their classrooms.

At KIPP Public Schools Northern California, Jennie is using her role to rigorously answer this implementation question. She's not waiting for the research to catch up. She's building the systems, running the analysis, protecting student data, and sharing what she's finding honestly, including the parts that didn't go as planned.

What follows is a summary of Jennie’s insights from our wide-ranging conversation.

Any Tool is Only as Good as the Instructional Foundation Beneath It

I've been sitting with a question lately that I think more of us in education need to take seriously: What does it actually mean to implement AI well? Not in theory, but in practice: in classrooms, with real students and real teachers.

Over the past several months, our team at KIPP Northern California has been doing that work with intention and high expectations for what it means to be future-ready. We started our work with Coursemojo, an AI-powered instructional tool, and the first thing we had to reckon with was this: if a teacher doesn't have a solid foundation in core instructional practices, the tool will be a distraction, absorbing all of their attention while those other things fall off the plate.

We're using Coursemojo at a very specific moment in a lesson. If a teacher doesn't get to that point in the curriculum from a timing standpoint, the tool never gets used at all. So before we even thought about rollout, we developed clear criteria for teacher readiness, including when to pause implementation. There are signals that tell you a teacher isn't ready, not as a judgment, but as a diagnostic. The shiny new thing will always compete for attention. Our job was to make sure the foundation was there first.

This is a lesson I think the field keeps having to relearn: technology doesn't fix instructional gaps. It amplifies what's already there, for better or for worse.

Healthy Habits First: Our Approach to Student AI Literacy

Our first step for student AI usage was building awareness and healthy habits. We wanted to make sure we did not repeat the same pattern we fell into with social media, where we essentially practiced abstinence. We said no, and then lost all opportunity to shape how students engaged with it. We cannot do that again with AI.

We created an advisory committee to address student mental health and the specific risks of AI companions, drawing on resources from The Rithm Project. We have an incredible regional leader, Amy Tran, who leveraged their full Sparks toolkit and designed a pre- and post-survey. She used the toolkit resources to design turnkey lesson plans that teachers could deliver during advisory periods. Students felt significantly more informed afterward, with tools and frameworks to help navigate AI on their own.

The vast majority of teachers, however, did not feel comfortable leading this work themselves. A critical learning for us was that teachers needed to experience these activities first before they could lead them with students. That type of modeling is a professional principle that applies here as much as anywhere.

Phased Implementation is a Non-Negotiable

Effective technology rollout is built on phases, and the early phases are about patience more than performance. When teachers first encounter a new platform, friction is real and expected. Eight-minute login times are not failures; they are design realities that require planning. Phase 1 is about the basics: logistics, mechanics, access, familiarity. Nothing more.

The most important thing a leader can do during the rollout period is be present in classrooms alongside coaches, watching teachers work at the ground level and honoring that they are the experts when it comes to their subject area and their students.

The real power of the Coursemojo tool emerged in Phase 3, when teachers began making deliberate instructional decisions about which students would benefit most from independent use and which needed something different. Rather than treating the technology as a universal solution, teachers used the dashboard to identify a small group of 5-7 students and bring them to the table for targeted support.

What happened at that table reframed the entire feedback loop. Teachers could surface misconceptions in real time, mid-activity, and use them as teaching moments before sending students back to finish independently. That shift, from delayed correction to just-in-time instruction, is the core of what made the model work.

Honest Training Over Hype Sessions

We told our Catalyst teachers, "You have free rein here. You are the experts. We want you to figure this out. Here are the guidelines the company gave us, but we already know that these are insufficient. We need you to identify and address where the product and its recommended practices fall short before we scale those shortcomings to anyone else."

The training that summer was very much the opposite of what a blended learning training felt like back in the day. Back then, it felt like a hype session with a lot of excitement and no honest reckoning. This time, we said, "Here is where the tool's instructions are inadequate, and you are going to have to reach beyond them." There was no pushback from the Catalyst teachers at all. They totally agreed with our new framing around exploration. For them, that openness and ambiguity felt like it honored their own experiences with blended learning resources.

Running Short-Cycle Analysis, with Privacy Built In

Our biggest question was whether this was helping us overcome what I call the 5% Problem. Are more than 5% of students actually benefiting from implementation? What we found is that implementation quality matters more than the tool itself.

To run the analysis, we used Claude Code to build an application that lived entirely on our own devices. The identifiable data never went to Claude. From there, we could run short-cycle analysis and comparison studies using DIBELS assessment data and Coursemojo usage data over four months. We chose DIBELS purposefully. Many of our students are significantly behind grade level, and looking at leveled curricular assessment data alone wasn't going to give us the granularity we needed to see real growth. DIBELS let us track whether students were improving in comprehension, fluency, and vocabulary skills in ways that mattered. This spring, we'll be extending the analysis to interim assessments and curricular assessments.

The analysis surfaced something we did not know going in: Phase 3 is where the greatest impact is happening. Students who were well below grade level were benefiting in that small-group Phase 3 setting and were not benefiting anywhere else. That finding came from the data analysis, not from our intuition.

On the privacy side, we were deliberate. Claude Code helped us build the applications we needed to run rigorous impact analysis, but the data itself was never shared with Claude; everything ran on our own devices. Those applications executed the matched-sample methodology: nearest-neighbor matching on five variables (prior SBAC scores, Q1 attendance, MLL status, IEP status, and socioeconomic status).
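To make that concrete, here is a minimal Python sketch of the kind of local, matched-sample analysis Jennie describes. It is illustrative only: the column names, data layout, and outcome measure are hypothetical stand-ins, and a real study would handle matching without replacement, ties, and missing data with more care.

```python
import pandas as pd
from scipy import stats
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Hypothetical column names; an actual district export would differ.
MATCH_VARS = ["prior_sbac", "q1_attendance", "mll_status", "iep_status", "ses_status"]

def matched_sample_comparison(df: pd.DataFrame, used_tool: str, growth: str) -> dict:
    """Pair each tool user with the most similar non-user on the five
    matching variables, then compare growth across the matched pairs."""
    treated = df[df[used_tool] == 1].reset_index(drop=True)
    control = df[df[used_tool] == 0].reset_index(drop=True)

    # Standardize covariates so no single variable dominates the distance metric.
    scaler = StandardScaler().fit(df[MATCH_VARS])
    nn = NearestNeighbors(n_neighbors=1).fit(scaler.transform(control[MATCH_VARS]))
    _, idx = nn.kneighbors(scaler.transform(treated[MATCH_VARS]))
    matched = control.iloc[idx.ravel()]

    # Paired t-test on growth (e.g., change in a DIBELS composite over four months).
    t_stat, p_value = stats.ttest_rel(treated[growth].to_numpy(), matched[growth].to_numpy())
    return {
        "n_pairs": len(treated),
        "treated_mean_growth": float(treated[growth].mean()),
        "matched_mean_growth": float(matched[growth].mean()),
        "t_stat": float(t_stat),
        "p_value": float(p_value),
    }
```

Because a script like this runs entirely on the machine that holds the data, the privacy posture Jennie describes holds: an AI assistant can help write the analysis without ever seeing a student record.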

Even if we could share data with enterprise-level AI tools under current agreements, I would still want to keep analysis localized on our own devices and not shared. This is probably best practice at this moment, and I would recommend it to any district thinking about running this kind of analysis.

The Emotional Reality of Real-Time Feedback

There are two main reasons students struggle with Coursemojo. The first is that they may have difficulty reading or effectively responding to the feedback that Coursemojo gives. Many features built into the tool help, but some students still struggle even after those scaffolds are leveraged. That's why we ensure that our teachers are always working with at least a small group of students rather than assuming that the software will directly address their needs.

The second challenge was one we did not anticipate. When I first heard that students would get feedback in real time and know their score before the lesson ended, I was excited. But our students were overwhelmed by it. They were shutting down.

The tool has ten dots at the top of the dashboard. If a student gets a question wrong and keeps working, those dots change. Instantaneous feedback provided by AI is processed by humans at the speed of emotion. For a student already struggling, that feeling of failure in real time becomes a challenge you have to actively address.

This is not necessarily a design flaw, but it meant we had to actively build or call upon resiliency practices before we introduced the tool. Teachers added specific supports to help students build the persistence and emotional bandwidth to handle real-time feedback. You cannot just hand students the tool. You have to build that capacity alongside it.

Tools Worth Knowing

Using the Playlab platform to create local, custom apps gives us a clearer wireframe and workflow for what we actually need before we bring in contractors to build something more robust. Sometimes the Playlab version is good enough on its own! Playlab also lets us deconstruct apps and show very clearly how different large language models (LLMs), and the variability in the reference resources you upload, can alter what you get as an output. That becomes a baseline way to build AI literacy itself. Teachers can see the back end, understand the components that come together, and start to develop their own intuition for how these tools work.
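Playlab's internals aren't public, so here is a small stand-in sketch of the underlying AI-literacy exercise: hold the question constant, vary the model and the attached reference resource, and compare the outputs. The model names, reference snippets, and use of the OpenAI Python client are assumptions for illustration, not a description of how Playlab works.

```python
from itertools import product

from openai import OpenAI  # illustrative client; swap in whatever provider you use

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = "What should a student do first when the tool marks an answer incorrect?"

# Two hypothetical "uploaded reference resources" an app builder might attach.
REFERENCES = {
    "teacher_guide": "Students should reread the prompt and revise once before asking for help.",
    "advisory_norms": "Students should pause, breathe, and signal for small-group support.",
}

MODELS = ["gpt-4o-mini", "gpt-4o"]  # any two models will make the point

# Same question, different model and reference context; the differences in the
# answers become the discussion material for an AI-literacy session.
for model, (ref_name, ref_text) in product(MODELS, REFERENCES.items()):
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": f"Answer using only this reference: {ref_text}"},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(f"--- {model} + {ref_name} ---")
    print(response.choices[0].message.content)
```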

Enlighten.AI is the focus of another meaningful micropilot we are conducting to support our college counselors in providing feedback on students' personal essays. Unlike tools that require each educator to train the model individually, Enlighten can use a common rubric built from multiple samples, which gets you consistency rather than a patchwork of individually trained tools. The ability to deliver fast, consistent feedback at scale matters when you are trying to build student writing capacity across a whole school or system.

The Roles the Field Still Needs to Build

Districts navigating this moment need new kinds of expertise, plus new models for accessing it. Two roles in particular stand out to me.

The first is what I'd call a Privacy Checker: not someone who validates whether a tool passed an external compliance checklist, but someone who can actually test the work from the inside. There is a real difference between validating that a tool passes a compliance review and actually testing whether the output could be replicated or misused by someone approaching it with bad intent. That kind of internal auditing doesn't really exist yet in most districts.

The second staffing idea I’m watching is having developers embedded on your team part-time, people who understand both what can be built and what needs to be purchased, and who can build the patches in between. That seems like a direction that will matter for individual schools and regional networks, not just large national organizations.

Let Practice Lead the Research

For a long time, we have been begging education researchers to study our practice, but in ways that often aren't actually helpful for the work. Maybe we can flip that relationship and let practice lead the research for once. The tools are there to help us do that. The question is whether we have the discipline to use them well.

We need to figure out what future-ready means right now, seriously and rigorously. I think the answer starts with building the instructional foundation first, being honest about what tools can and cannot do, protecting student data with the same care we'd want for our own, and trusting expert educators.

That's the work. And I'm grateful to be in it alongside so many people who are taking it seriously.


This post is part of the #100DistrictConversations series, a project hosted by Shawn Rubin, Executive Director of Throughline Learning, to capture practitioner wisdom from district leaders navigating professional learning and system-wide improvement. If you'd like to be part of the series or nominate someone else, reach out!