In September of 2012 Irene Fountas and Gay Su Pinnell, creators of the F&P Text Level Gradient and the leading voices in reading intervention as it currently exists, put out a white paper through their publisher, Heinemann, that explained their decision to make “minor adjustments” to the grade-level goals on their text level gradient, which has long been a ubiquitous presence in schools and on literacy websites across the country. In this document, they write that trends in early literacy are, in effect, forcing their hand in regard to revising the gradient, citing (among other factors) the “exposure to rich literacy experiences” that many incoming kindergarten students have had, the increase in full-day kindergarten, and the rising expectations of educators. “Achievement in literacy is trending upward!” they crow, adding with an air of condescension, “and that is good news.”
As a result, the revised F&P Text Level Gradient offers clearer delineations between grade-level expectations, removing any overlaps between grades, and includes higher text-level expectations at the end of kindergarten and grade one. This despite a pointed invitation to readers of their book Guiding Readers and Writers to observe the original gradient and “notice that there is no rigid division between grade levels” (2001). A side-by-side comparison of the original gradient and the current gradient is below:
While at first glance the changes appear to be somewhat benign (after all, the end-of-year expectations for grades four and five have actually decreased, according to the new gradient), they in fact can lead—and have led—to dire consequences for children, particularly those in the elementary grades.
Let’s set aside for the moment that the idea of a child reading “on level x” is in itself a fallacious concept and one that the authors themselves concede in Guiding Readers and Writers when they write, “individual students cannot be categorized as, for example, ‘level M readers.’” According to the original gradient, a student in grade one who was reading “at a level C” in October—this according to the results of a standardized reading assessment Fountas and Pinnell created that is part of their overall Benchmark Assessment System (2008)—was considered to be “on track” with his or her peers. That same student, under the guidelines of the new gradient, appears to be severely below grade level and in dire need of reading intervention.
Similarly, a student who was independently reading “at a level O” at the end of grade three would find herself being inaccurately labeled as a “struggling reader” if her fourth grade teacher were using the new gradient at the start of the following school year. To combat this phenomenon, Fountas and Pinnell warn educators that “the recommended grade-level goals are intended to provide reasonable guidelines for grade-level expectations” and that they should not—their emphasis—use these levels as a basis for grading.
However, grading students using their reading levels as a guide and determining which students are “struggling” versus “proficient” are two very different things; while examples of the former are rare, I have frequently been ambushed by colleagues who breathlessly worry about this or that student “not reading on grade level.”
Like them, until recently I had been essentially brainwashed—unintentionally, of course, but due to the ubiquity of the F&P Text Level Gradient and others like it, brainwashed just the same—to believe that children who do not fall within the established ranges of acceptable reading “proficiency” are in need of intervention. Never mind that Fountas and Pinnell have a vested interest in as many children as possible being labeled as “in need of reading intervention,” considering that they are also the creators of the Fountas and Pinnell Leveled Literacy Intervention System (LLI), the main goal of which is to “bring students to grade level achievement in reading” (2014). And yet, the authors themselves are unsure of what that even really means, as evidenced by their assertion that “[readers’] background knowledge varies widely according to the experiences they have had at home, in the community, and in school….There is probably a range of levels any given student will feel comfortable reading, based on his general understanding of vocabulary, experience in reading texts with different structures, experience in reading different genres, and interests” (2001). Even the folks behind their biggest competition, reading level-wise—the Lexile (and by association, the MetaMetrics) crew—insist that “grade equivalent scores [as measured by Lexile measures] are often misinterpreted as being a grade level standard”:
The grade equivalent does not represent the appropriate grade placement for a student or the level of the material the student should be studying. Grade equivalent scores should never be interpreted literally, but rather as rough estimates of grade level performance (see the full post here).
Thus, it stands to reason that the grade-level ranges associated with the F&P Text Level Gradient—whether one accepts the revised version or instead chooses to remain loyal to the original gradient—are also “rough estimates” and should be regarded as such.
Unfortunately, the precision of the current gradient and the elimination of overlaps between grades and their respective text levels make it that much less likely that these ranges will be treated as rough estimates. The trend in education, as it has been for a while, is to find ways to explicitly measure that which, in real life, is messy, complex, and prone to influences outside the realm of school. The revised version of the F&P Text Level Gradient is just one of hundreds of examples of something that, originally intended to be a useful teacher tool, has been fetishized beyond its original intent—and that, in the process, feeds the monster insisting on labeling not just students but American schools in general as in need of intervention.