If you want to know why American schools rank 37th in Mathematics in the most recent international PISA ratings (and are widely predicted to do worse in the 2022 ratings), look no further than our education system’s persistent failure to adopt Mastery Learning, a philosophy first formally proposed by oft-cited and oft-misinterpreted educational psychologist Benjamin Bloom in 1968, but with roots dating back a century. Based on the common-sense notion that students must master prerequisite knowledge before proceeding to subsequent (often more complex) learning and that they must be given ample time and personalized support in order to do so, Mastery Learning has repeatedly been found to have positive effects on overall student achievement, not just in math, but for “all subjects and at all levels” according to Stephen Anderson’s should-have-been-more-influential 1994 meta-analysis *Synthesis of Research on Mastery Learning*. According to the same study, Mastery Learning also consistently produced “positive affective effects for both students and teachers.”

In other words, Mastery Learning worked, kids liked it, and teachers liked it. So what in the world kept it from being implemented in our schools?

The educational establishment and its “recalcitrance to change,” according to many historians. Mastery Learning runs directly counter to the think-pair-share, group-activity-based zeitgeist that has dominated American public education for the last half-century. (I could retire in style if I had a dime for every time I’ve encountered the phrase “We’re just *exposing* them to math, not aiming for mastery” over the course of my 30-year teaching career. This shouldn’t have to be said, but just being exposed to surfing doesn’t make you a surfer - and while surfing may be more fun in groups, surfing itself, like math, is mastered by individuals.)

But there was also a distribution problem: truly individualized instruction couldn’t be delivered in a timely manner prior to personal computers and the internet.

But schools have had computers since the ’70s and internet access since the ’90s! Why hasn’t Mastery Learning been implemented in the interim?

Because we haven’t had computers in each kid’s *hands* until Covid. With the iPads and Chromebooks gifted to us by Covid we can now change the world (starting with America’s dismal test scores).

So what would Mastery Learning look like with iPads and Chromebooks? Well, I can tell you what it *did* look like. Last year (2021-22), I had the opportunity to try a little Mastery Learning experiment of my own with fifty regular ed. 6th graders over a period of 3 months (January through March, during another teacher’s scheduled absence). In anticipation of this temporary teaching assignment, I assembled individualized and self-explanatory “see it, do it, check it” assignments for every Core Content standard from kindergarten through 8th grade (I had created these with Adobe Illustrator during Covid for use with my regular 7th grade math students) and generated “see it, do it, check it” test prep activities related to actual test items from the previous 3 years’ 6th grade state tests (New Jersey). I loaded all of these activities onto Google Classroom so that students could access them on their Chromebooks 24/7. (I point out this last part because, to my surprise and delight, the “see it, do it, check it” nature of the assignments proved to be addictive - so addictive, in fact, that even though I never once assigned them as homework, students frequently completed them on their own on weeknights - and even on weekends. They would even email me from home with math questions over the weekend - and high-level math questions at that! I loved it!)

The students all started at the very beginning (whole numbers and whole number operations) and worked their way through the assignments at their own pace, one assignment at a time. (Why did I start them at the beginning? To ensure that there were no gaps in their learning. Didn’t that take too long? Nah. The ones who were already up to speed with the basics knocked the early activities out quickly.) Very little whole-group direct instruction was provided. (I’m a resolute believer in direct instruction, but I wanted to see if I could make the activities self-explanatory. If a number of students raised a question on the same topic, I would stop the class and provide direct instruction on that topic, of course, and this worked especially well, as the questions arose from the students themselves.) The overwhelming majority of the actual teaching was instead done in response to individual student questions (which always involved very specific math concepts - and were very fun to answer). The students worked by themselves with pencils and paper for 40 minutes each day (with 5 minutes of free time at the end of each period as an incentive).

Each activity consisted of fully completed examples (following up on John Sweller’s should-have-been-seminal research into what he called *worked examples*, research that has inexplicably been ignored by the educational establishment in this country), practice problems to try, and a key containing the fully worked-out solutions to each of the practice problems (following up on the decades of research on the critical importance of *instant feedback*, research that has *also* been inexplicably ignored by the educational establishment). The problems, in most cases, ranged from low to high in difficulty/complexity, and the activities built on each other sequentially. The students completed as many activities per period as they could, working at their own individual rates. (A sample activity involving the Pythagorean Theorem is shown below. To see some of the actual activities for yourself, contact me directly.)

*Each activity involved worked examples, practice problems, and a summative higher-level question to answer (left), and a key containing the fully worked-out solutions, along with an acceptable answer to the summative question (right). This particular activity involves trying out the Pythagorean Theorem on an assortment of both right and non-right triangles; the summative question in this case nudges students toward the discovery that the Pythagorean Theorem appears to apply to right triangles only. (Please note that this activity involves controlled discovery learning. Critics of this format complain that it lends itself to rote, formulaic, or “mechanistic” learning only. On the contrary, many of the activities, like this one, were open-ended in nature, and a good number featured only “possible solutions” on the answer key.)*
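For readers who want to try the triangle check for themselves, here is a minimal Python sketch (my own illustration for this write-up, not part of the student activity) of the test the activity has students perform by hand: does a² + b² = c² hold when c is the longest side?

```python
import math

def satisfies_pythagorean(a, b, c):
    """Check whether a^2 + b^2 = c^2, treating the longest side as the hypotenuse."""
    a, b, c = sorted((a, b, c))  # ensure c is the longest side
    return math.isclose(a**2 + b**2, c**2)

# A 3-4-5 triangle is right, so the relationship holds...
print(satisfies_pythagorean(3, 4, 5))   # True
# ...but a 2-3-4 triangle is not right, and the relationship fails.
print(satisfies_pythagorean(2, 3, 4))   # False
```

Students reach the same conclusion numerically, with pencil and paper: the relationship holds for the right triangles in the assortment and fails for the others.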

In the 1960s, Benjamin Bloom and John B. Carroll claimed that if student aptitude is randomly distributed for a given subject, and students are provided with uniform instruction in that subject (same quality, same amount of learning time), we should expect that student achievement in that subject would also be randomly distributed, as seen below.

Bloom and Carroll went on to claim that if students were instead provided with an optimal level of instruction and unlimited time in which to work (i.e., Mastery Learning), we should expect student achievement to be skewed as shown below, with the majority of students reaching mastery (more than 90% according to Bloom).
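The two predicted shapes are easy to visualize with a toy simulation. The numbers below are entirely made up for illustration - they are not data from my classroom or from Bloom’s studies - but they show the qualitative difference between a normal spread and a mastery-style pile-up near the ceiling:

```python
import random

random.seed(0)
N = 1000

# Uniform instruction: achievement mirrors a roughly normal spread of aptitude.
uniform = [min(100, max(0, random.gauss(50, 15))) for _ in range(N)]

# Mastery conditions: unlimited time and optimal support push most students
# toward the ceiling, modeled here as a left-skewed pile-up just under 100.
mastery = [min(100, 100 - abs(random.gauss(0, 5))) for _ in range(N)]

share_at_mastery = sum(s >= 90 for s in mastery) / N
print(f"Share scoring 90+ under simulated mastery conditions: {share_at_mastery:.0%}")
```

Under this toy model, well over 90% of the simulated students score 90 or above (roughly 95% of a normal distribution falls within two standard deviations of its center), matching the shape - though not, of course, the real-world numbers - of Bloom’s prediction.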

Prior to the 3 months I spent with the 6th graders, they took Form A of a commonly used online standardized assessment containing content typically found on the 6th grade New Jersey state test. After a month of the self-instruction described above, they took Form B of the same assessment; Form C was administered at the conclusion of the 3 months. The results are shown.

As is evident from the histograms, the Form A results were skewed with scores “clumped” to the left, the Form B scores were almost normally distributed (symmetrical), and the Form C scores were skewed with the scores “clumped” to the right. The modes alone tell a tale: the Form A mode was in the 20 to 40 range; the Form B mode was in the 40 to 50 range; and the Form C mode was in the 80 to 90 range. The averages tell a tale as well: the Form A mean was 41.7%; the Form B mean was 50.3%; and the Form C mean was 64.9%, for a total mean increase of 23.2 percentage points.

Things to consider:

*These results were achieved over the course of only 3 months*. Ordinarily students would have 7 months to prepare for this point in the school year.

*The results appear to conform to Bloom and Carroll’s prediction about overall student achievement under “optimal instruction.”* They’re not quite as skewed as Bloom and Carroll anticipated, but they appear to be heading in that direction.

*These results were achieved with little whole-group direct instruction.* Most of the teaching during this time period involved walking individual students through example problems or answering specific math questions - and there was always ample time for this, due to the self-explanatory nature of the activities. Only a very small portion of the teaching involved working with whole groups (mainly because the students very quickly settled into their own, very different, levels, rendering homogeneous grouping largely impossible).

*These results were dependent on students constructing and testing their own knowledge.* The worked examples permitted the students to engage with the material directly and by themselves, and the worked solutions enabled them to check their own understanding. (Research suggests that this strategy is more likely to result in long-term retention and self-reliance.)

*The results show dramatic increases at the low end.* Roughly 70% scored at the 50th percentile or below on Form A; by Form C that number had decreased to 20%. This was not surprising, given the extra time available for one-on-one assistance.

*The results show dramatic increases at the high end.* Only 2% of the students scored at or above the 80th percentile on Form A; by Form C that number had increased to 29%. Several of the students pushed their way into 7th grade material during this period, and one began exploring 8th grade material. (I have always tried to adhere to Seymour Papert’s notion of “low floor, high ceiling” when delivering instruction; this format makes this goal a genuine possibility.)

*The whole curve moved.* Student scores moved from the 10% to 90% range to the 20% to 100% range, and they clumped toward the right over the time interval. In other words, the group as a whole seemed to benefit.

*The results were dependent on an abundance of specific worked examples.* Students had a completed example to refer to for each problem they were asked to solve. Some examined all of the examples first, some skimmed them, and some challenged themselves by skipping right to the practice problems, armed only with the knowledge they had obtained from prior activities. “Use ‘em if you need ‘em” was my advice on the subject. (Most of the activities had no written instructions at all, due to the self-explanatory nature of the worked examples; students with reading difficulties were thereby accommodated, and nobody got hung up due to the age-old problem of confusing or ambiguous directions.)

*The results were dependent on instant feedback.* Students had access to an answer key containing the fully worked-out solution for every practice problem they completed.

*These results were not dependent on homework.* No homework was assigned during this period.

*These results were achieved without pulling teeth and without “classroom management” of any kind.* Students were on-task 100% of the time, and there were zero discipline problems.

*These results were achieved with full student participation.* No child was left behind or denied challenging material to explore at any point during the three months. In other words, differentiation was in effect the entire time. (The content under investigation ranged from multi-digit addition to trigonometry and, in one case, pre-calc.)

*These results were achieved without “educational” games or other strategies designed to make learning more “fun.”* And student complaints were not noted during this period (except for minor grousing due to pencil-related hand-cramping).

*These results were dependent on instant access to all activities via Chromebooks and Google Classroom.* Students were able to advance through the activity sequence at their own pace, and were free to go back and work on pre-requisite skills when they needed to. In other words, *they were in full control of their learning at all times.* (Needless to say, this would not have been possible in the pre-Covid era of photocopies.)

*These results were partially dependent on student mastery of basic math facts.* On Fridays during this period the students played *FactFreaks*, a free, no-ads mobile website my son, my wife, and I have been developing that gets players up to speed with all 400 basic math facts via instant feedback in a race against the clock. (Feel free to try the prototype yourself, on your phone and right now, at FactFreaks.com.)

*Students “showed their work” on every single problem they completed during this time period.* Students learned *how* to show their work from the worked examples, and I monitored them as they worked to make sure they did so. (I learned something too: kids don’t mind showing their work when they can actually *do* the work.)

*Students mastered all types of problems and content during this period.* The problems involved in the activities ranged from purely mathematical (basic operations, solving and graphing equations, measuring angles, discovery, alternate strategies, etc.) to test-prep (multiple choice, word problems, open-ended response, etc.), and the “see it, do it, check it” format permitted the transmission of all of the mathematical content prescribed in the Core Content standards - and then some.

Based on these results, and on the ease with which learning materials can now be both distributed and completed, I contend that Mastery Learning’s time has finally come. (I base this contention on the results from my regular 7th grade math students as well - whom I taught the same way and whom I had all year. Their performance was very similar to that described above, and their state test scores at the end of the year were exemplary. I don’t go into detail about them here as the sample size was too small; there were only 12 of them. Again, contact me directly if you're interested.) Make no mistake: I make no claim to this being a rigorously controlled experiment. I do claim, however, that these results are highly suggestive of what a combination of “sages at the side,” worked examples, instant feedback, and technology might bring.

Imagine what we might achieve as a nation if these types of activities were made available to all students at all grade levels all year long. We have the method, we have the time, we have the teaching talent, and we finally have the technology. All we need now is the will.

At long last, let's turn math from something most people loathe into something most people *learn* (and maybe even grow to love).

Arithmetic for All!