When we learn, we construct our own understanding using mental models. Even if that understanding is triggered by attending a lecture or reading a textbook, sense-making is always an active process. Active readers do more than read words on a page; they think about what they read, draw inferences, and make personal and meaningful connections. Reading is an experience. So, when we attempt to evaluate a learning experience, the question isn’t: “Are we constructing mental models?” The question is: “How sophisticated and integrated are the mental models we’re constructing?”
One way to avoid accommodation is to construct a number of fragmented mental models instead of revising a single integrated model. The conventional wisdom is that poor-performing students learn better when skills and concepts are separated into small, easily digestible chunks. Most educators subscribe to this conventional wisdom even though the research literature says otherwise. We ignore the research by concluding that researchers exist in an ivory tower, and that the conditions under which they test do not apply in any way to the real world where we and our students live. We have constructed two entirely separate sets of theories: one to explain the research and one to explain the classroom.
The sixth-grade linear functions unit was an effort to generate cognitive dissonance in students and to enable them to shift into an active learning mindset. But it was also an effort to generate cognitive dissonance in adults. We may be able to discount research results from an ivory tower, but how will we respond when the experiment occurs in our own classrooms and with our own students? What’ll we do when students who are seemingly incapable of and uninterested in learning math are suddenly actively testing their own hypotheses, playing with problems, and outperforming eighth-grade peers taking Algebra I? I shared the impact the linear functions unit had on students in one substantially separate math class, but the same impact was observable in every class.
In an ideal world, this data would have been an inconvenient truth, leading to cognitive dissonance and massive revisions in our understanding of both students and learning. But it didn’t surprise me at all that, for most people, it didn’t. Our ability to avoid accommodation is extraordinary. Instead of testing the conventional wisdom against this new data, most people attempted to preserve their conventional wisdom by using a separate mental model to make sense of these new experiences. They essentially said: “This is how students learn in the linear functions unit, but they won’t learn the same way anywhere else.” There was no attempt to reconcile the two theories.
Now, you may be thinking that this thin facade of two entirely disconnected theories of learning—one for the linear functions unit and one for everything else—cannot possibly stand up under scrutiny. But that’s the point. By constructing a new mental model that’s isolated from the models we use every day, we can simply throw the new mental model into some dusty corner of our brain and forget about it. Because it only applies to one narrow scenario, we never need to access it. And if we never access it, we never have to scrutinize it, and we can hold onto our pre-existing mental models without having to revise them. This is why fragmented models tend to be naive and hard to remember.
By fragmenting our mental models, we avoid accommodation and revision, and we can ignore obvious inconsistencies. This is something a vertical learner is unwilling to do. A vertical learner would feel compelled to reconcile the conventional wisdom with the new data from the linear functions unit, probably by hypothesizing conditions under which active learning might be more effective than the conventional approach, and vice versa. But since this revised mental model would be based on limited data, it would feel incredibly shaky, and the vertical learner would need to test this new hypothesis by conducting more experiments and making more precise observations. Instead of burying evidence that we might be wrong, vertical learners become curious—engaging in inquiry and drilling down until their models are well-tested and solidly grounded.