
We Blamed Google. Now We're Blaming AI. We Need to Stop. A Case for Intentional Instructional Design in the Age of AI

  • Writer: Dustin Rimmey
  • 18 hours ago
  • 12 min read

I was a senior in high school in the fall of 2001. The internet's influence was growing. Google was becoming more mainstream. And the adults in charge of education were absolutely losing their minds about it.


Which one of those kids is me, Senior Year, 2002?

The concern, stated with full sincerity by serious people in serious publications, was that students would simply Google everything, copy and paste their way through school, and render the entire enterprise of education pointless. Why learn anything when you could just look it up?


Sound familiar?


Here's what actually happened. We didn't stop needing teachers. We stopped needing teachers who only knew how to deliver information, and started desperately needing teachers who could help us figure out what information was worth trusting, how to curate it, how to think critically about what we found, and how to turn a search result into an actual idea.


The internet didn't make school pointless. It made bad teaching more visible.


We learned that lesson. Or so I thought.


Twenty-something years later, AI showed up. And we forgot everything. Again.


Here's the part that really gets me. We didn't just forget the lesson. We're making the exact same mistake in the exact same way. Something new arrived, it exposed the cracks in how we'd been doing things, and instead of looking at the cracks, we blamed the thing that showed them to us.


We did it with Google. We are doing it with AI. And three things I saw on my feed this week convinced me it's time to say that out loud.


The Expectation Has Already Moved


Dan Fitzpatrick, who writes and speaks as The AI Educator, posted something on LinkedIn this week that stopped my scroll. He cited a 2024 Microsoft study showing that over 60% of employers now expect graduates to be proficient in AI and able to help others use it. His point was sharp and uncomfortable: AI is no longer a competitive advantage for young people entering the workforce. It is a baseline expectation.


We are expecting AI-ready graduates from systems that are not yet AI-ready.


That tension is real, and it is urgent. But here is what I want to add to Dan's argument: the reason our systems are not AI-ready is not that AI arrived too fast. It is that we never finished the work we started when Google arrived. The real message of that Microsoft study is that while we believe we are making our students "college and career ready," or teaching "21st century skills," we may have no idea what those skills look like beyond academia.


The root of the issue that we've failed to acknowledge is that we've never fully made the shift from information-delivery to information-literacy. We patched the hole instead of fixing the roof. And now AI has ripped the patch off entirely, and we are standing in the rain, wondering how this happened.


The goalposts didn't just move. We never actually reached the last ones.


Let me explain.


AI Didn't Break School. It Exposed It.


My fellow member of the Teach Better Family, Dan Thomas, an educational consultant and retired technology teacher, put it more bluntly this week: if a student can complete your assignment without thinking, it is not an AI problem. It is a design problem.

AI didn't break school. It exposed it.


This is the Google lesson, restated for 2025. When students could suddenly copy and paste from the internet, the assignments that fell apart were the ones that were already fragile. The ones that were really just asking students to locate and reproduce information rather than actually think about it. AI has done the same thing, only faster and more completely. If your assignment can be finished in thirty seconds by a chatbot, the chatbot did not ruin your assignment. Your assignment was already asking the wrong question.


The difference this time is the speed. Google exposed weak assignments slowly enough that we could look away. AI is doing it so fast and so completely that we cannot pretend anymore. And so instead of redesigning the assignments, a lot of educators and policymakers are doing what feels most natural when something uncomfortable is exposed.


They are blaming the thing that exposed it.


This is hypocrisy.


Last August, the New York Times highlighted both the increasing use of AI by teachers and educational support staff AND growing concerns over students' use of AI. That's the equivalent of telling me that I can't Google the answer to a question, but when I ask you one, you Google the answer right in front of me.


We need to actively rethink how we assign things to students. Not to prevent all AI usage, but to guide how AI can and cannot be used in your classroom. Previously, I wrote about the concept of the unessay, which is an example of how an assignment can become AI-resistant. You allow students to use AI as a tool: a thought partner, an editor, a consultant...but students still have to do the heavy lifting on the performance task. Alkout and Khalif assert, in a 2024 article in Frontiers in Education, that the final product is the easiest thing to fake with AI. The intellectual journey is nearly impossible.

If you only grade the final product of the unessay, you leave the door wide open for AI generation and nothing else, because an AI can create a digital zine, podcast, or collage in a matter of seconds. Instead of making the final product worth 90% or 100% of the grade, make it worth 40%. The other 60% of the graded value comes from a documented journey.
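For teachers who like to see the arithmetic, here is a minimal sketch of that weighting. The function name, the sample scores, and even the 40/60 split are illustrative assumptions, not a prescribed rubric; adjust the weight to whatever balance fits your classroom.

```python
# Hypothetical weighted-grade calculation for a process-documented unessay.
# "process_scores" could be a proposal, an annotated bibliography, and a
# reflective journal, each scored 0-100.

def unessay_grade(product_score, process_scores, product_weight=0.40):
    """Blend a final-product score with the average of the process scores."""
    process_avg = sum(process_scores) / len(process_scores)
    return product_weight * product_score + (1 - product_weight) * process_avg

# A polished (perhaps AI-assisted) product can't carry a thin process:
grade = unessay_grade(product_score=95, process_scores=[70, 65, 60])
```

Notice the design choice: because the process carries 60% of the weight, a student who outsources the product but skips the documented journey still lands well below what the shiny final artifact alone would have earned.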

In my follow-up post on the unessay, I outline several ways for documenting the process a student goes through: a proposal and how they make changes to it, an annotated bibliography, or a reflective-process paper or journal. Because, while AI can craft a final product, it cannot yet craft a messy, iterative description of a learning process. If an unessay sounds like too big a leap, check out these amazing suggestions from either Packback or Faculty Focus on creating AI-resistant (or AI-resilient) assignments.


The assignments are, and have always been, the problem. Instead of blaming AI, it is time for us to admit it.


Let's Talk About What We Actually Did Wrong


An Instagram post from give.spark that also popped onto my feed made a point I largely agree with: we didn't lose students to devices. Devices exposed the system we built. Kids sitting still, eyes forward, quiet...we called it learning. We lied to both our students and ourselves. It was compliance cosplaying as learning.


But here is where I want to complicate the picture, because I think the conversation stops too early when we reduce the discussion to the black and white of "technology good" vs. "technology bad."


Here is something I will also say out loud: analog instruction has real value. A thoughtful paper-based activity, a Socratic seminar with no screens in sight, a sketchnote exercise, a physical simulation; these are not lesser forms of learning because they lack a device. In fact, research consistently shows that certain tasks, particularly those involving deep reading, retention, and reflection, benefit from analog approaches.


Researchers at the University of Tokyo have published a study showing that people retain more information when writing on paper than when writing digitally (I always tell my students to handwrite notes, and then write summaries digitally as a "study guide"). We've all heard Mueller and Oppenheimer (2014) cited at us to death, arguing that students who took paper notes had 34% higher accuracy in recalling information a week later. I know these citations are in flux, because the research suggests there is more nuance to this debate than we typically give it. (You should read the new meta-analysis in Scientific Reports.)


The problem was never paper. The problem was never the device. The problem was always whether what we asked students to do with either one required them to actually think.


We need to return to exploring instructional design with intentionality. That starts with two frameworks that have been sitting in our professional development binders, largely ignored, for years.


The first is Universal Design for Learning. UDL asks us to design learning experiences that offer multiple means of representation, action, and engagement, not as an accommodation for a few students, but as the default for all of them. Here is what I want you to notice: a classroom built on genuine UDL principles is almost automatically more AI-resistant than a traditional one. When students have real choices about how they demonstrate understanding, when the task requires personal voice, genuine decision-making, and a documented process, AI has nowhere to hide. The reason so many of our assignments are vulnerable to AI completion is not that AI is too powerful. It is that our assignments were never designed with enough flexibility, authenticity, or student agency to begin with. UDL fixes that problem at the root.


The second is SAMR. And this is where I need to get honest about my own practice.


I remember the first time I showed students a Crash Course video. Before widespread device access, they answered comprehension questions on a worksheet. After devices proliferated, they answered the EXACT SAME comprehension questions in a Google Doc. The medium changed. The task did not. I called it technology integration. IT WAS NOT. It was a worksheet wearing a different costume, and I had convinced myself it counted as innovation because a screen was involved.


We can't use SAMR as a simple "should we use a device" checklist anymore. Especially with AI in the mix. We need to think more carefully about when and why any tool earns its place in the lesson.


When reflecting on my Crappy Crash Course lessons, I was on the lowest rung of the ladder. That was a hard S: Substitution. I replaced a traditional tool with no functional change. In his doctoral dissertation, Carlos Jenkins highlights the uncomfortable truth that a massive proportion of what gets called technology integration in schools still lives permanently on that bottom rung. We took our crappy worksheets and made them crappy Google Docs. We took our crappy lectures and made them crappy slide decks. We took our crappy webquests and called them research projects.


The device did not make the instruction bad. It just made bad instruction more expensive.


This is why the discussion of using technology in any form in the classroom cannot be either a reductionist defense of the wide world of the internet or a reflexive clinging to device bans and screen time regulations. We need to carefully navigate the grey zone of instructional design (where there are way more than 50 shades).


The digital vs. analog instruction question is the modern Goldilocks problem (what is our just right?). I'm very intrigued by the notion of edtech minimalism, an idea coined by Paul Emerich France as schools were grappling with pandemic-era tech overload. France's core argument is simple: we need to consider the scale, efficiency, and effectiveness of each tool before we introduce it into the classroom. As we continue to reorient physical learning for students who spent the bulk of two years online, what France suggests is worth revisiting. We shouldn't be exclusively pro-analog or pro-technology. We should be pro-intentionality.


The debate over screen time adds a new hiccup. In February, the American Academy of Pediatrics complicated many of the arguments for strict screen time limits when it released its new policy statement. The AAP found that the problem with screens is not a number. It's not how long a child uses a screen that creates the problem; it is how that screen is being used. They argue that we need to examine the digital environments being constructed for our children (students). Are the digital biomes we craft in our classrooms meeting the developmental needs of our children, or are our ecosystems focused on maximizing engagement and the commercial value children offer?


David Cutler had a piece published in Edutopia this week that I initially thought I would hate. The piece, Why and How I’m Limiting Screen Time in My Classroom, made my inner edtech evangelist cry from the title alone. But much like books and their covers, I shouldn't judge something by its title, and neither should you: read it. While I may disagree with him about when the optimal time to use technology is, we agree more than I expected. And even where I disagree, I don't know what his students need at this moment in time. What I respect about Cutler's approach is his focus on intentional use. This is the ideal approach all educators need. Intentionality.


This is how we resolve education's modern Goldilocks problem. This is how we find our "just right." We shouldn't be focused on too much tech or too little. Our "just right" is asking if we are using the technology with enough intentionality to serve the learners sitting in front of us. The reason we keep getting it wrong is not because the tools are bad. It's because we keep reaching for them before we answer the most basic question of all. What do I want my students to think about today?


Banning devices does not magically restore curiosity or critical thinking (see last Friday's post on curiosity). But handing every student a Chromebook without a clear pedagogical purpose does not restore it either. Both are decisions made without asking the most important question first: what do I actually want students to think about, and does this tool help them think about it more deeply? We've had nearly a quarter-century post-Google to get this right, and we've failed.


Spectacularly.


We need to return to the ground zero of instructional design. Asking what we want our students to think about is the entire ballgame. We then need to decide if technology is a meaningful integration, or if it just makes the crappy worksheet a polished turd covered in Canva's fairy dust.


This is why UDL (or other frameworks) matter. This is why the ISTE technology standards matter. This is why SAMR was constructed as a model. We need to return to the roots of our proper pedagogical practice, figure out what matters, and finally get it right for our students.


The device was never the problem. The plan was.


Why do you think I've spent more time writing about pedagogy and thinking routines than about my love of technology? We need to hard reset our practice. We need to figure out how to actually prepare our students for the world beyond primary and secondary education.


We need to have an honest conversation about what we've been doing wrong.


So Where Do We Go From Here?


Here is the good news, and I mean this sincerely, because this post has been a lot of hard truths, and I owe you at least one.


We have done this before.


In 2001, when Google started reshaping what it meant to find information, the teachers who thrived were not the ones who banned it or surrendered to it. They were the ones who asked a different question entirely. Not "how do we stop students from Googling?" but "given that students can now access any piece of information instantly, what is actually worth my time to teach them?"


The answer then, and the answer now, is the same. Everything a search engine cannot do. Evaluate. Question. Synthesize. Wonder. Connect an idea to a lived experience. Ask a question worth asking in the first place.


AI raises that same question one level higher. Given that students can now generate a convincing essay, solve an equation, summarize a document, and produce a presentation in under a minute, what is actually worth teaching? The answer is still the same thing it has always been. The thing we kept deprioritizing, because it was harder to grade and harder to standardize.


So here is what I'd actually suggest, practically, starting this week.


Start with UDL, but not the binder version. Not the compliance checklist version. The real version, which asks one deceptively simple question: "Does this task require my specific student, with their specific experiences and their specific thinking, to do something that no one else could do for them?" If the answer is no, if any student, anywhere, could complete this task in the same way, then the task needs rethinking. Not because AI exists. Because that was always true. AI just finally made it impossible to ignore.


Run your current assignments through the SAMR lens and be honest about what you find. If most of them live at Substitution, that is not a reason to despair; it is a starting point. Pick one and ask: "What would this look like one rung higher?" "What would it look like if the technology made something possible that wasn't possible before, rather than just replacing something that already existed?"


Redesign one assignment around the process, not the product. The unessay framework, the documented journey, the annotated bibliography, any of these shifts the grade away from what AI can generate and toward what only your student can show. You don't have to overhaul everything. Start with one.


Model the thinking. Out loud. In front of your students. Show them what it looks like to wonder, to not know, to find out. Because Dan Fitzpatrick is right that employers expect AI-ready graduates. And Dan Thomas is right that the assignment is the design problem. And give.spark is right that we built the system that devices exposed.


None of that gets fixed by going full edtech (you should never go full edtech) or by reverting to the one-room schoolhouse and going full analog. It gets fixed by you, designing with your students in mind. You should be asking, "Will this activity get this group of students ready for whatever future they choose?" Not "Will this activity get me through the next 20 years of my career?" Take your expertise and decide what's right for your group of students right now, not through a wholesale doctrinal shift in your pedagogical practice. Make today's lesson better than yesterday's.


That's the full ask.


The Lesson Was Always There


I was a senior in 2001 when everyone panicked about Google. And somewhere between then and now, most of us quietly learned that the panic was misplaced, that the real work was never about controlling the tool. It was about teaching students what to do with it.


We learned that lesson. Imperfectly. Slowly. Incompletely. But we learned it.


AI is asking us to learn it again. Faster this time. At higher stakes. With less patience for the kind of slow institutional change that let us coast on patched holes in our pedagogical roofs for two decades.


However, the lesson itself has not changed. It was never about the device. It was never about the website. It was never about the paper. It was about whether we had the courage and the intentionality to ask students to actually think.


We had that courage once. The Google generation needed us to find it, and eventually, we did.


The AI generation needs us to find it again.


This time, let's not make them wait twenty years.

© 2023 by teacher's plAIground.