Rage Against the Thermometers – Assessment Illiteracy

As we wander about within the fog of education, pursuing all manner of educational diversions in lieu of our main mission of student academic achievement, we find ourselves confounded by ideas that are essential to our work but are often poorly defined or misunderstood. For 30 years I was an administrator in school districts leading efforts to improve two of the most contentious imperatives in education: assessment and accountability. The degree of assessment illiteracy demonstrated by educators never ceased to amaze me. For the most part, teachers perceive assessments as impediments to instruction rather than as powerful supports for both student learning and professional practice. This assessment illiteracy currently fuels what I call a rage against the thermometers.

The word assessment derives from the Latin assidere, which means to sit beside. In workshops I would often use The Banjo Lesson, a painting by the famous African American artist Henry Ossawa Tanner, sharing copies of it as a metaphor to elicit teachers' understanding of the meaning of assessment. Teachers would often brilliantly find elements of the educational idea of assessment within the painting. They would identify how the old man supports the young child on his lap while holding the banjo so that the child can produce the music with expert coaching from the grandfather. Many more elements of quality assessment are embedded within the painting. I will never forget a second-grade teacher from Redwood City who came up to me during a workshop with a sophisticated review of the painting itself, including its use of light and color! Her analysis is just another illustration of how educators can find their own meanings within the essential educational idea of assessment.


Figure 1

Jim Popham, emeritus professor at UCLA, addressed one of the important causes of assessment illiteracy within the education community at a luncheon of very self-important psychometricians in Los Angeles many years ago. As they were eating, Jim scolded the psychometricians, informing them that they most assuredly were going to go to hell for all of the complicated assessment regimens they had foisted upon the education community. The spew of egg salad throughout the room was truly a sight to behold! He recounted many of the obtuse characteristics of Classical Test Theory and Item Response Theory that psychometricians had done almost nothing to explain to educators. He told the psychometricians that they might earn a chance at purgatory if they made a concerted effort to better explain and communicate the purposes and uses of assessments to the K-12 education community. A fine recommendation, but one that was never followed up on.

Educators may have difficulty understanding the meaning of assessment because it has two distinct components. The first component is the collection of data or information from some stimulus or task aligned to a learning target or objective. This is the meaning most often attributed to assessments by educators. The second, and more important, component is evaluation: the interpretation of what the assessment data mean in relation to the learning target or objective. Assessments have applicability to both student and professional practice goals and outcomes. Educators often focus on the data collection component of assessment and find it onerous because they are not engaged in the second component of evaluation for interpretation, diagnosis, intervention, and monitoring. Many educators collect assessment data but never engage in evaluation and interpretation, creating an antipathy toward assessments.

Evidence of educator assessment illiteracy as it relates to the evaluative nature of assessment is abundant. The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) is a screening and progress monitoring assessment system that gauges the degree to which students in grades K-6 are on track for key early literacy skills that are critical to help them become fluent readers. The five essential elements of early reading identified by the National Reading Panel in its 2000 report are phonemic awareness, phonics, fluency, comprehension, and vocabulary. The DIBELS assessments monitor the performance of K-6 students on these skills in comparison with normed peers in order to determine who is, and who is not, on track to become a reader.

One of the early literacy skills critical for later reading success is phonics. The DIBELS system uses an assessment protocol called Nonsense Word Fluency (NWF), in which students are asked to correctly pronounce nonsense words like NAK in order to gauge their ability to associate letter groups with specific sounds. Nonsense words are used to make sure that students can actually produce the sounds represented by specific groupings of letters, because students often have sight recognition of real words and can recall the letter sounds from memory of the word rather than from their phonics ability.

Teachers often do not understand the formative nature of the NWF assessment or what they should do in response to students who demonstrate that they are not on track in phonics. They collect the data from the assessment but lack the ability to use it to identify at-risk students, design and apply appropriate interventions, and then monitor the success of those interventions. Some teachers even inappropriately try to instruct students using the nonsense words from the NWF screening itself. Teacher assessment illiteracy, combined with the inability to evaluate the results of the assessment properly, often leads to a rejection of the DIBELS assessments as a valid and reliable system for monitoring student growth in the key elements of early literacy.
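The screen, classify, intervene, and monitor loop described above can be sketched in a few lines of code. This is a minimal illustration only: the cutoff scores and tier labels below are invented for the example, not actual DIBELS benchmark goals, which vary by grade and time of year.

```python
# Hypothetical sketch of the screen -> classify -> monitor loop.
# Cutoffs are invented for illustration; real DIBELS benchmark goals differ.

def classify_nwf(correct_letter_sounds: int,
                 benchmark: int = 27, cut_point: int = 18) -> str:
    """Place a student's Nonsense Word Fluency score into a support tier."""
    if correct_letter_sounds >= benchmark:
        return "on track"    # core instruction only
    if correct_letter_sounds >= cut_point:
        return "strategic"   # small-group support, monitor monthly
    return "intensive"       # targeted phonics intervention, monitor weekly

scores = {"Ana": 35, "Ben": 22, "Cam": 10}
tiers = {name: classify_nwf(s) for name, s in scores.items()}
print(tiers)  # {'Ana': 'on track', 'Ben': 'strategic', 'Cam': 'intensive'}
```

The point of the evaluative component is everything after the classification: the tier is only useful if it triggers an intervention and a follow-up measurement.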

Teacher assessment illiteracy also manifests itself in the inability of teachers to see any value in assessments beyond their formative use in informing student learning within the learning time frame. In reality, assessments fall on a continuum from most formative to most summative. Assessments can be used to inform student learning from moment to moment, and they can be used summatively to gauge the effectiveness of teacher and administrator professional practices. Figure 2 below shows the Assessment Continuum with sample assessments for each of its key elements.

Assessment Continuum


Figure 2

The effectiveness of formative assessments in supporting student learning has been well documented over the years. Black and Wiliam focused on the importance of the more formative and most formative ends of the continuum in gauging student learning, diagnosing error patterns in thinking, intervening to provide focused feedback, and finally monitoring student learning. Formative assessment opportunities often attempt to elicit student misunderstandings, through questioning or assessment tasks, in order to better address them. It is recommended that there be at least three formative assessment events for every summative assessment event within a given unit of learning (Figure 3). In this sense, we need to promote more failure in our schools, followed by diagnosis, intervention, and monitoring. Hattie has reported an effect size of almost 0.7 for quality formative assessment in improving student achievement. Assuming normally distributed scores, this is analogous to moving a student who scores at the 50th percentile at the beginning of the year to roughly the 76th percentile.
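The conversion from an effect size to a percentile shift is simple arithmetic under the (assumed) normal distribution of scores: a student at the mean who gains d standard deviations lands at the normal CDF of d. A quick check with the standard library:

```python
# Convert an effect size (in standard deviations) to the percentile an
# average (50th-percentile) student would reach, assuming normal scores.
from statistics import NormalDist

def percentile_after_gain(effect_size: float) -> float:
    """Percentile rank of a mean-scoring student after a gain of d SDs."""
    return NormalDist().cdf(effect_size) * 100

print(round(percentile_after_gain(0.7)))  # 76
```

This is why an effect size of 0.7 is often glossed as moving an average student from the 50th to about the 76th percentile.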

Figure 3

There is a growing understanding and use of formative assessments within schools, even though the idea is often clouded by a plethora of naming conventions and definitions, such as assessment for learning and assessment of learning. By contrast, there is a deep and widespread misunderstanding of the role of summative assessments in informing professional practice within schools and school districts. Many educators erroneously think that the only value of assessments is to directly inform student learning. However, school organizations need to understand that summative assessments like state assessments can be used to gauge the effectiveness of curriculum and the systematic application of professional practices to improve student achievement overall and by subgroup.

Summative assessments are indicators of how successful adults are in helping students achieve. They are the thermometers that indicate how well professionals use their practices to grow student academic success. Yet these assessments, in concert with external accountability systems, have fostered widespread pathologies within K-12 education. Rather than using them to inform and improve curricula, professional practices, and assessments, educators inappropriately use them as instructional tools. Under the perceived pressure of the results, educators focus on teaching to the test and then rage against the test as impeding teaching and learning. Used appropriately, summative assessments can go a long way in informing the system of its success, or lack of success, in implementing curricula and professional practices. Let's take summative assessments out of the fog of education and bring them back into the light, where they can appropriately and effectively inform student learning as well as the improvement of professional practices.

Summative assessments can provide student results on improvement (non-cohort), growth (cohort), standards mastery, and equity. These results can then be visualized and used to generate findings and answer questions about the effectiveness of key initiatives and programs within the educational system. These summative results and interpretations play an important role in a well-planned and well-implemented strategic plan that includes specific student outcomes, professional practices, educational strategies and initiatives, professional development and collaboration, and key metrics (Figure 4).


Figure 4
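The distinction between improvement (non-cohort) and growth (cohort) results is easy to blur, so here is a small sketch with invented mean scale scores. Improvement compares successive cohorts at the same grade (different students); growth follows the same students across years.

```python
# Illustrative sketch of the two summative views; all numbers are hypothetical.
# Mean scale scores keyed by (year, grade).
scores = {
    (2023, 3): 420, (2023, 4): 445,
    (2024, 3): 428, (2024, 4): 452,
}

# Improvement (non-cohort): this year's grade 3 vs. last year's grade 3.
improvement = scores[(2024, 3)] - scores[(2023, 3)]   # different students

# Growth (cohort): last year's grade 3 students measured again in grade 4.
growth = scores[(2024, 4)] - scores[(2023, 3)]        # same students

print(improvement, growth)  # 8 32
```

Conflating the two can mislead: a flat improvement number may hide strong cohort growth, and vice versa, which is why a strategic plan should report both.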

A quality curriculum, a few research-based professional practices, and valid and reliable assessments are the recipe for helping all students achieve academic success. These elements constitute the three legs of a stool that supports improved student outcomes. Because of the pressures of external accountability, so well described by Elmore and others, school districts have engaged in pathologies that overemphasize the importance of summative tests, even to the extent of inappropriately teaching to the test.

The solution to this problem is to make sure that school districts and schools develop and implement high quality and useful strategic plans that include a clear focus on student outcomes, curriculum, professional practices, assessments, professional development, and key performance metrics. These elements would form the basis for a high-performing educational system capable of improving academic outcomes for all students.  Assessments play an essential role in our work by providing both formative support for student learning and summative support for the improvement of professional practices.  Let’s get beyond our rage against the thermometers and dispel the fog of education by using assessments in ways that foster improved professional practices as well as student academic outcomes.


Black, Paul and Wiliam, Dylan. Inside the Black Box: Raising Standards through Classroom Assessment. 1998. https://www.rdc.udel.edu/wp-content/uploads/2015/04/InsideBlackBox.pdf

Center for Teaching and Learning. Dynamic Indicators of Basic Early Literacy. 2018.  University of Oregon.  https://dibels.uoregon.edu/

Elmore, Richard and Fuhrman, Susan. Redesigning Accountability Systems for Education. 2004. Teachers College Press.

Hattie, John. Visible Learning: A Synthesis of over 800 Meta-analyses Related to Achievement.  2009. Routledge.

National Reading Panel. Report of the National Reading Panel: Teaching Children to Read. 2000. https://www1.nichd.nih.gov/publications/pubs/nrp/Pages/smallbook.aspx

Tanner, Henry Ossawa. The Banjo Lesson. 1893. Smithsonian.
