Hi everyone and welcome to our two-day discussion on Assessing the Four Components of Reading with Dr. John Strucker! Please feel free to make comments and ask questions.
Dr. John Strucker currently provides online and face-to-face staff development in the areas of diagnostic reading assessment and reading instruction for adult educators in Massachusetts. Prior to that he was a Research Associate at the National Center for the Study of Adult Learning and Literacy (NCSALL), lecturer in education, and Director of the Adult Literacy Lab at the Harvard Graduate School of Education. He has also served as a project director for World Education, Inc. in the areas of reading and assessment, and following that he worked with Education Development Center (EDC) to design a reading assessment for out-of-school youth in developing countries. He got his start in adult education in Cambridge, MA, where he worked as an ABE reading teacher for 11 years.
Please note that apart from Student Achievement in Reading (STAR), which was developed by the Office of Career, Technical, and Adult Education (OCTAE), any specific reading assessment or reading instruction technique is mentioned only as an example and should not be seen as a recommendation or endorsement by OCTAE. Also, OCTAE is planning to provide updates to the evidence-based reading research base soon.
Today, we will discuss silent reading tests. Tomorrow, we will look at the Adult Reading Components Study.
John, many thanks for sharing your expertise with us and welcome to the LINCS Reading and Writing Community! I will lead off with the first question:
We are required to give our students an assessment as they enter our programs such as the CASAS or TABE. These assessments measure only silent reading ability. What do silent reading tests not tell us?
Steve Schmidt, Moderator
LINCS Reading and Writing CoP
I’d like to welcome everybody to this discussion and thank Steve Schmidt and LINCS for setting it up.
First, let’s talk about what silent reading comprehension tests do tell us. They provide an
individual’s reading level in grade level equivalents (GLEs) or scale scores; and, because they are
standardized and norm-referenced, silent reading tests can be used to document learner
progress by state agencies and the National Reporting System.
There is more to silent reading tests than the familiar TABE and CASAS. The category also
includes the large-scale national and international tests that began with the National Adult
Literacy Survey (NALS) in 1992 and have continued up to the more recent Program for the
International Assessment of Adult Competencies (PIAAC) in 2012. Over the years, these tests
have yielded rich troves of information on the distribution of literacy, math, and information
technology (IT) skills among adults in the US and the other Organization for Economic Cooperation and
Development (OECD) countries. Background questionnaire data allow governments to link their populations’
literacy skills to important outcomes like economic success, civic engagement, and overall health. In addition, large-
scale assessments can tell us how a society's literacy skills may be changing over time. [As an aside, from 1991 to 2012, the U.S. was the only developed country where older people had stronger literacy skills than younger people. Any thoughts as to why?]
However…for all their advantages, these tests don’t provide much guidance for reading
instruction – what to teach and at what level. Silent reading tests can tell you whether someone
has limited reading skills, such as a 35-year-old who scores at 5 GLE on the TABE. But silent
reading tests cannot tell you why. Effective teaching begins by asking “why.”
That’s where diagnostic reading assessments come in. Going back at least as far as the 1930s,
reading researchers and clinicians have known that struggling readers usually have difficulty
with one or more of the components or subskills of reading. These include alphabetics (phonics
and word reading), fluency, and vocabulary. Comprehension can be viewed as the end-product
of the successful development and integration of these four components.
Finally, diagnostic reading assessment of each of the four components is the only way we can
find out which component or components need to be emphasized and at what levels we should
begin instruction in each component.
I was a bit late in joining yesterday's discussion, which was very useful, and thought I would add my two cents today. I appreciated two comments in particular, among the other great ones made in this thread, regarding the value of reading tests in instruction:
1. "But silent reading tests cannot tell you why. Effective teaching begins by asking why.” Yes. Always!
2. "... most learners really appreciate the diagnostic interview and testing. For some it may be the first time in years (or ever!) that a teacher has paid close attention to them and their reading difficulties." So true. All students respond to efforts that remind them, "I count. I'm heard."
I would suggest that diagnostic assessments provide instructors with a valuable differentiation tool. I would further suggest that students who face challenges progressing in any academic or career path often do so because their reading skills are not at the level demanded by course work or, in some cases, technical manuals. Would it be helpful, or even legal, to share diagnostic results with teachers who share students in other disciplines? Leecy
Thank you for sharing your reading assessment wisdom. I totally agree with your view on diagnostic reading assessment (DRA) and have trained many teachers on the purposes and processes. Most (if not all) see the value of identifying strengths and weaknesses in the reading process and are sometimes surprised by the results. In general, students are receptive and appreciative of the time a teacher takes to look closely at their reading skills. An explanation of each test purpose often reduces students' anxiety and assures them that the results will be used to improve instruction and skills.
Thanks John and Marn. I would like to continue the conversation with another question:
Two students can have identical silent reading profiles but have vastly different instructional needs. What can we as educators do to ensure our students are getting the instruction they need to improve their reading skills?
Thanks! Your post picked up on two points I wish I'd mentioned - that teachers almost immediately realize the value of diagnostic assessment once they have an opportunity to try it. That's why it's so important that training in diagnostic assessment give teachers a chance to assess a real learner, rather than simply read about it or watch a video presentation. Your second point is even more important - that, far from being put off, most learners really appreciate the diagnostic interview and testing. For some it may be the first time in years (or ever!) that a teacher has paid close attention to them and their reading difficulties. And, as you mention, it really helps to relieve any anxiety on the learner's part prior to testing if you explain the purpose of each test and how it will work.
Jeanne Chall, who taught me so much about reading, believed that the diagnostic assessment session was the ideal way to set up the working relationship between the teacher and the learner. The teacher says, in effect, "With your help, we're going to find out what you're good at and what needs to be improved in your reading. With that information, I'll be able to design lessons that will help you to improve your reading and reach your goals as fast as possible."
Let me take up Steve's next question:
Two students can have identical silent reading profiles but have vastly different instructional needs. What can we as educators do to ensure our students are getting the instruction they need to improve their reading skills?
This question connects directly to “What Silent Reading Tests Alone Can’t Tell You,” an article I
wrote for NCSALL's Focus on Basics back in 1997 (NCSALL: Reading). In case you haven't had a
chance to read it prior to this discussion, let me summarize it. Two adult learners, “Richard” and
“Vanessa,” were my students when I taught at the Community Learning Center in Cambridge,
MA, in the mid-1990s. Richard and Vanessa had identical GLE 4 scores on the TABE in
reading comprehension. But the two learners couldn't have been more different in terms
of their strengths and needs in reading.
Richard was a native English speaker who reported severe reading difficulties in childhood.
When I assessed him in the components of reading, his Word Analysis (phonics) level was GLE
1.5, Word Reading was GLE 2, Spelling was GLE 1, Oral Reading Fluency was GLE 4, and Oral
Vocabulary GLE 6.
Vanessa was an English language learner who had nearly completed high school in Peru. She
had been living in the US for 7 years and was a fluent speaker of English. Her profile was almost
the reverse of Richard’s: Word Analysis (phonics) was GLE 3 (the highest score possible on that
test), Word Reading was GLE 5, Spelling was GLE 5, Oral Reading Fluency was GLE 5, and Oral
Vocabulary was GLE 4.
In short, Richard knew the meanings of more words than he could decode, and Vanessa could
decode more words than she knew the meanings of. Richard’s severe decoding difficulties
impeded his comprehension, while Vanessa’s limited knowledge of English vocabulary was
holding her comprehension back. We were able to design teaching approaches for each learner based on their different strengths and needs, i.e., their different reading profiles.
[An important aside – in addition to assessment of the components of reading, we gained
important insights into their reading from a background questionnaire covering the learners’
educational history, self-reported reading challenges, and career goals.]
Richard got extra help in phonics, word reading, and fluency, both in class and from a volunteer
tutor. The diagnostic tests told us precisely which sounds to begin with in phonics and what
levels of oral reading to use to improve accuracy and what level to use to improve fluency.
Vanessa was already quite skilled at English phonics, word reading, and fluency. Because
Vanessa reported doing well and nearly completing high school in Peru, it was likely she
possessed some content knowledge in social studies, science, and literature. In addition to
working on English Signal Words (e.g., despite, moreover, nevertheless; see reading_signal_words.pdf
at iwu.edu) and academic or Tier 2 words, Vanessa was also made aware of how to take
advantage of Spanish/English cognates. This would help her to transfer the content knowledge she had acquired in
Spanish to English. (There are also lists of Signal Words in both English and Spanish. They’re critical to reading
comprehension but they can be challenging to learn in a foreign language.)
Employing these different instructional approaches worked for Richard and Vanessa. Richard's reading
comprehension jumped one full GLE in a few months, based on his improvement in decoding and fluency.
Unfortunately he had to drop out to take an extra job to support his mother when she got sick. Vanessa improved
her English reading dramatically, then she enrolled in a medical technology program in community college, and
used her bilingual Spanish-English skills to land what she called her "dream job" as a medical secretary.
Obviously, being able to create diagnostic reading profiles of adult learners is only the beginning. Teachers still
need to know how to teach each of the components effectively. To work with learners like Richard who have severe
decoding problems, it’s necessary to have training in one of the “structured language approaches” like Orton-
Gillingham, Wilson, Lindamood-Bell, or Reading Horizons. To be able to work on the academic or Tier 2 Words that
challenge so many learners, STAR training is invaluable. That training is now offered to adult educators in a
number of states. The STAR vocabulary approach is based on techniques developed by Beck, McKeown, and
Kucan (Bringing Words to Life: Robust Vocabulary Instruction, Second Edition, by Isabel L. Beck,
Margaret G. McKeown, and Linda Kucan).
Thank you, Dr. Strucker, for leading this discussion. After I participated in STAR training, I really enjoyed doing individual reading diagnostics with each of my students. (People are rightly pointing out that students often see these in a positive light - that was certainly my experience too!) Not only did it force me to schedule 1-to-1 time with each student (something that sadly can be tough to do in a busy class), but it provided such detailed, actionable information. I was surprised how often vocabulary was the weakest area of the four for my students; in one semester, that was true of 80% of the students I assessed! When you have that type of information, it really influences how you plan lessons and what you do before springing a text on students!
Thanks for your comment. Your point about finding enough time for individual diagnostic tests is right on the money. It's the biggest obstacle most teachers and programs face, and I'll have more to say about it later in the discussion.

I also really want to highlight what you said about being surprised that vocabulary was the weakest area for a big percentage of your students. To me, that surprise makes total sense. Decoding and fluency issues are apparent every time a student reads aloud, but vocabulary is a "sleeper": weaknesses are very difficult to notice in ordinary class interactions. Usually the only way you pick up vocabulary problems is by giving a vocabulary assessment. And the best kind to give is an expressive test like DAR Word Meaning, where you ask the respondent, "What does abundant mean?" and they tell you in their own words. You not only get a GLE mastery level (breadth of vocabulary knowledge), you also get a sense of how well the student defines the words they know (depth of vocabulary knowledge). As you learned in your STAR training, readers need both breadth and depth of vocabulary to fully comprehend what they read.
And one more thing about vocabulary: even though I had been teaching adult reading for over 15 years when we did NCSALL's Adult Reading Components Study (ARCS) in 2000, I was very surprised at the relatively low vocabulary levels on both the DAR and PPVT tests among native English speakers in our study.
Thanks for all these great comments!
I have another question for you please: Why should instructors spend the extra time to give specific diagnostic reading assessments in alphabetics, fluency, vocabulary, and comprehension?
To me it’s never been a question of “extra time.” Diagnostic reading assessments are an
essential part of instruction itself. When working with adult learners, instruction without
diagnostic assessment is like “flying blind.”
Sometimes experienced teachers tell me they don’t need to waste time giving individual
diagnostic assessments, because after a few weeks in class the teachers can pretty much figure
out “who needs what” in reading. Even if this were true (I doubt whether even a very
experienced Wilson-trained teacher could tell where to place a new learner in the phonics
sequence without first giving them a placement test), a few weeks of figuring out "who needs
what” is time wasted that could have been spent teaching them what they need. During those
few weeks of “figuring out,” you’re spending some of the time teaching some of the students
stuff they don’t need, or teaching stuff they do need at too high or too low a level. Imagine
you’re an adult learner who experienced years of school failure because of your reading difficulties.
If your adult education teacher spends two or three weeks or more giving you material that’s too
difficult during the time it takes for them to “figure you out,” this can reinforce your doubts about
whether you can ever improve your reading.
I don’t mean to downplay ABE teachers’ knowledge and experience. Once they learn how to conduct
and interpret diagnostic reading assessment, they become very skilled very fast. In my experience,
ABE teachers tend to be far more insightful about reading difficulties than many K-12 teachers, in part
because a huge percentage of their ABE learners have reading difficulties. K-12 children often receive
pullout instruction from special education teachers and reading specialists who also offer guidance to
classroom teachers on how to work with those children. There are no pullouts in ABE - ABE teachers
are their own reading specialists.
Commitment to diagnostic reading assessment starts at the top with program administrators. They
need to establish that it’s not a waste of time to perform diagnostic reading assessments with each
learner. They can make it clear that they believe time spent on diagnostic assessment at the beginning
of the semester will lead to more effective teaching and better learner outcomes throughout the semester.
Finding time and space to perform the individual assessments can be a challenge. Let’s deal
with time first. With training and a little practice, most teachers can administer a brief informative
background questionnaire and the appropriate diagnostic assessment in 30-40 minutes per learner.
Some programs have had success getting learners to come in for assessments before the start
of the semester so that no class time is lost.
When I taught in an ABE program, I made use of volunteer tutors to help me test my entire
class within the first two weeks of the semester. Our classes were 90 minutes long. For the first
two weeks of class, I did an introductory activity for the first 20 minutes of each class, then
turned the class over to the tutor to carry out activities that I had planned for her – maybe a
writing assignment, collaborative oral reading, or even having her read aloud material to the
learners that they could discuss afterward. With today’s technology, I might have the class
listen to a podcast and discuss it, or have the class watch and discuss some videos on topics like
study skills or career options.
With the help of my wonderful volunteers, I was able to assess two to three learners a night. If a
couple of learners were able to meet with me before or after class, I was able to get an entire
class of 12 learners assessed within the first two weeks of the semester.
Finding private space for diagnostic assessment was always a problem in the crowded learning
center where I worked. Sometimes I was lucky enough to find an empty office, but at other
times I had to make do with the end of a hallway or stairwell. Not perfect, but well worth it.
Although having a counselor assess all new learners and pass the results on to classroom
teachers can save time, I believe it's preferable for each teacher to do their own assessments.
Performing the assessment and administering the background questionnaire provide valuable
insights into the learner’s reading that the teacher can translate into instruction.
One last suggestion – even if teachers are doing their own diagnostic assessments, I believe
they should be given paid time to prepare a brief one-page Reading Profile Report on each
learner. If the reports are included in each learner’s file, they can be very helpful to the next
year’s ELA teacher, or to other colleagues, such as math or IT teachers. If the math teacher is
puzzled by a learner who seems to have good concept knowledge, yet struggles with word
problems, a Reading Profile Report that highlights the learner’s difficulties with either word
reading or Signal Word vocabulary could help to explain the learner’s struggles with word problems.
Thank you for this well-detailed description of how an ABE instructor, with all the time constraints and multi-level reading skills that are common in our classrooms, can produce good, actionable assessment results within the first couple of days. I had been working on implementing this in my classrooms prior to COVID, and found that my main challenge was staying true to my own plan for a volunteer to work with students while I worked through all the assessments individually and in a private space. (And yes, I'm another advocate for the initial student reader profile, which is from the work of McShane, 2005: Applying Research in Reading Instruction for Adults.)
During my research in EBRI to improve my practice, I have worked to find the DAR-2 assessment battery. I followed a link to an ed psych company in Singapore and thought that was strange (I emailed them and, although the product is on their site, I received a response this morning that they do not work with it), because I know that PRO-ED, Inc. owns it now. I was in touch with them two years ago, and the battery appears to be somewhat "shelved" and not much managed as an assessment product. Is this still the recommended assessment for reading components?
I have found that Wilson (Just Words) markets mainly to K-12 (although Barbara Wilson started in adult ed; when I asked her why I never saw Wilson Learning at adult conferences, she said she found there wasn't a market for her products!). I became stalled in my efforts and have still not managed to advocate clearly enough within my own institution to gain access to these critical tools of assessment and instruction.
And I don't believe Colorado is a STAR state, so I thought that I did not have access to their trainings. I can take another look.
So I am always seeking collaboration and community with others in the field (in this group!) who understand how critical the right tools are so we know we are offering our students evidence-based reading instruction, and who maybe can confirm or correct the paths I have been following for the best tools.
I was reluctant to ask this question...but Steve just opened the door! Although DRA has tremendous value, it does require extra time, space, and staff. This has been the greatest challenge for many Minnesota STAR programs over the years, and it has been compounded by the realities of distance (or remote) teaching, learning, and access. I did create screen-shareable slides for the Word Reading Test and Sylvia Greene's Informal Word Analysis Inventory, but remote testing is still time-consuming and sometimes awkward. What advice do you have for addressing the time demands of DRA -- either in person or from a distance?
Thanks for talking about how the assessments can be done remotely, Marn!
My response to Steve's question is above, but in my answer I didn't address remote assessment. I've never administered diagnostic assessments remotely myself, but I did a Zoom workshop on remote assessment with a group of Massachusetts teachers. Briefly, here's what we came up with:
Option 1 - When a learner only has a phone (landline or mobile): You contact them by phone, and do the background questionnaire (BQ), and if they have time the DAR Word Meaning Test during that first phone call. You explain that you'll be mailing them student copies of the Word Reading and Oral Reading tests (photocopies rather than DAR originals), and you set up a date and time for a second phone call about a week later when you will do those two tests with them. During the second call, while they read from the student copies, you score the assessments in the Student Response booklet. (You ask them not to look at the tests ahead of the second call.)
Option 2 - If after a phone conversation, you establish that a learner has a laptop, tablet, or computer, and sufficient home bandwidth, plus an email account. You do the BQ and the DAR Word Meaning via phone. If they don't have an email, you could try talking them through how to set up an account. If there is time during that call, you try a Zoom call to their email. (If there isn't time, set up a second appointment.) If you and the learner can set up a successful Zoom connection, you use Zoom's screen share feature to share the student copies of the Word Reading Test and the Oral Reading fluency test. The learner reads what you put on the screen, and you score their responses in the student Record Book.
Our MA teachers had already been doing Zoom instruction for several months. Those who had become quite adept suggested creating short videos to help learners set up email or Zoom.
We felt that it might be too challenging to score Word Analysis remotely because it might be difficult to hear phonemes given the limitations of cell phone and Zoom audio. But I'd be interested to know whether anybody has tried it.
Hope this is helpful and best wishes,
Thank you, Dr. Strucker, for your additional ideas on remote assessment (and personal replies to each contributor). I will pass them along to Minnesota STAR/EBRI teachers. In many of our ABE programs, support staff and volunteers were "employed" to assist with DRA. I trained most of them, and retired teachers are often the best source of willing and able testers! Their comfort with testing and teaching transfers easily, and they are very capable of wearing both hats - as needed or requested. I know a few STAR teachers have tested students remotely using the Word Reading or Word Meaning Tests. They present the WRT with slides over Zoom or Google Meet and the WMT over the phone or video. In these challenging pandemic times, even if teachers can only conduct one test, it benefits and informs their reading instruction. And into the (normal?) future, one reading test and results could inspire more use of other reading tests and tools!
Welcome, Dr. Strucker! I'm so appreciative of you and Steve putting together this chat on one of my favorite subjects! One question I would ask for those just starting out in reading instruction: If I know that a DRA is best for my students, but that's all I know, where should I start? Is there a best go-to assessment that you'd recommend?
Looking forward to learning more on this important topic!
Teaching and Learning CoP Moderator
The most reliable and teacher-friendly diagnostic reading battery is the Diagnostic Assessments of Reading (DAR) (Roswell, et al.) https://psyresources.com/product/dar-2-diagnostic-assessments-of-reading-2nd-edition/. It was the battery that we used in NCSALL's ARCS.
But the key issue is getting training in how to use and interpret diagnostic reading tests and what the results mean for instruction. STAR also involves extensive training in EBRI approaches to teaching reading for ABE GLE 4-9 intermediate readers. If STAR training is available in your state, I highly recommend it. You can find more about STAR elsewhere on LINCS https://lincs.ed.gov/state-resources/federal-initiatives/student-achievement-reading.
In Massachusetts, along with my colleague Pam Dempsey-O'Connell, I teach a hybrid course on diagnostic reading assessment for SABES, the state's staff development collaborative. The course includes how to give the tests and it features a practicum in which participants assess a learner and write up their results in a Reading Profile Report.
Here's a question for Steve Schmidt and any other folks following our LINCS discussion: Are there any other courses Susan could take on diagnostic reading assessment?
Welcome, Dr. Strucker! I'm so excited that this discussion is happening and am following as I'm in and out of teaching and meetings. I hope to contribute a bit later today and tomorrow. I'm always looking for information to validate what I'm doing (DRA and instruction addressing the four components of reading with STAR) as well as to learn more about how to support my adult readers! Thank you for today's and tomorrow's discussions.
ATLAS Literacy & ELA Coordinator
Thanks for joining in! You're lucky to have gotten STAR training. As the discussion unfolds today and tomorrow, I'd be interested to hear from you and other STAR practitioners about how it's going. In the Harvard Adult Reading Lab we used many of the approaches that later got refined into STAR, but I never got the benefit of the training or a chance to use it in a real ABE class.
I look forward to hearing more from you as time permits, of course!
Hello John, It's wonderful to have you with us. Thanks so much for the information you've shared so far.
Anita, it is interesting that you discovered that vocabulary was an issue for so many of the learners you assessed. I wondered whether these were English learners. In my experience, it's quite common for English learners to have limited vocabulary, even those who have a solid educational foundation.
For me, the book John referred to above, Bringing Words to Life: Robust Vocabulary Instruction, by Isabel Beck and her colleagues, has been one of the most important books on teaching vocabulary. These researchers/authors came up with the brilliant concept of word tiers.
- Tier 1: the most common words in oral vocabulary, e.g., baby, drive, friendly
- Tier 2: general academic words that are used across disciplines, e.g., conceptual, evidence, analyze
- Tier 3: content/context specific words, e.g., judicial, exponent, photosynthesis
The authors reported that, according to their research, few teachers were teaching Tier 2 words, and yet these words are extremely common and essential to comprehending text.
John, you mentioned STAR training. How would a program or a teacher find out about this training?
Take care, Susan Finn Miller
Moderator, English Language Acquisition CoP
So nice to hear from you! You did a superb job of explaining Tier 1, Tier 2, and Tier 3 words. Our field is indebted to Beck, McKeown, and Kucan for introducing this concept and for elaborating what it means for teaching. As you know, Mary Beth Curtis made Tier 2 instruction central to STAR. One thing that Mary Beth stressed about Tier 2 Words that sticks in my mind is that authors, especially authors of content area texts, almost always define Tier 3 Words because they carry the content. But authors never define Tier 2 Words, even though, as you point out, they are essential for comprehension. Consider this sentence: Plants require abundant light for photosynthesis. The author will have defined the Tier 3 word and concept, photosynthesis, but would never have bothered to explain the Tier 2 Word abundant. But abundant is essential to the meaning of the sentence.
Dr. Strucker, I have great appreciation for your work! Thank you for being here and to Steve, for organizing.
I want to take a stab at the question you posed based on PIAAC data: "As an aside, from 1991 to 2012, the U.S. was the only developed country where older people had stronger literacy skills than younger people. [Any thoughts as to why?]"
I am currently reading Natalie Wexler's The Knowledge Gap. She is an education journalist who spent many years trying to find out why K-12 reading scores were going down nationally. With an emphasis on testing comes a skills-based curriculum, reducing time spent on developing content (knowledge) and vocabulary. Without rich, connected, repeated CONTENT development from Social Studies and Science as well as literature, but MOST importantly from non-fiction content, students are being taught what a caption is, how to "infer," and how to "find the main idea," among many other skills. Without the ability to understand content and the vocabulary and context so important to it, these skills lessons don't "develop" readers. Reading and being read to, on grade level and with vocabulary enrichment, develops knowledge that then allows students to attend to the strategies and skills of comprehension.
I was seriously JUST reading this yesterday!
I haven't read Wexler's book, but I think she's on the right track - i.e., that we're just coming out of five decades when American children have been reading fewer content-rich texts. For all their faults, the Common Core Standards (CCRS), which are the basis of our CCRSAE, call for having children read more complex informational texts and at higher levels.
From the 1960s on, K-12 students were exposed to less challenging reading - in short, school books got much easier. Here's one example: "Hayes et al.'s analysis indicated that the wording of school books published from 1963 forward for eighth graders was as simple as that in books used by fifth graders before 1963, while the wording of twelfth-grade literature texts published after 1963 was simpler than seventh-grade texts published prior to 1963.” (Quoted in Adams, M.J., https://www.aft.org/pdfs/americaneducator/winter1011/Adams.pdf)
It has affected all levels of readers, from those going to elite universities to those who become ABE students.
Some people assume the drop in reading ability has to do with video games and other technology - but these distracting influences are prevalent across all the OECD countries, so something else has been happening here.
Thanks for weighing in on this question!
Susan and Dr. Strucker,
I'm also excited to hear Dr. Strucker's response about a go-to assessment. The best one I have found for adults is a resource from Appalachian State:
I love that DRA directs an instructor to specific aspects of reading. Is there a list of Tier 2 vocabulary words for different grade levels in the Bringing Words to Life text that you referenced, Dr. Strucker?
I've recently been working with an upper elementary student who needed to work on fluency, vocabulary, and breaking apart multisyllabic words. I started by having her identify unfamiliar Tier 2 words, apply syllable rules, and then try to determine word meaning based on reading a sentence with each unfamiliar word. She is learning to decode large words and building her vocabulary simultaneously. For fluency, she needed to learn to scoop words into phrases and clauses to be read together. With these tailored interventions, she has experienced tremendous growth. Although all the assessments took time, it was worth it to pinpoint the areas that needed improvement.
I feel that DRA can help adult instructors expand their reading intervention tools.
We all know from standardized assessments when students aren't comprehending what they're reading, but we need to know why. Is it a decoding weakness, lack of vocabulary knowledge, or even disfluent reading that is preventing comprehension? This information is a game-changer for improving student reading outcomes.
Thanks for the link to the resource from Appalachian State. I'm very familiar with the assessments: Jeanne Chall and I designed the QARI Word Reading Test back in the late 1990's; Sylvia Greene (the author of the Informal Word Analysis Inventory) taught me how to teach phonics when we worked together at the Community Learning Center in Cambridge; and my NCSALL colleagues Rosalind Davidson and Kelly Bruce created the Word Meaning Test in 2000. The fluency test reflects great work on the part of Ohio literacy teachers.
These tests aren't quite as good as the DAR, but they are free. We use them for the hybrid course in diagnostic assessment that I teach in Massachusetts.
I really enjoyed reading how you used diagnostic assessments to plan focused instruction for the youngster you are tutoring. She's a lucky kid to have you in her corner.
Thanks, Beth and John, for your replies regarding diagnostic assessments. I am a HUGE fan of the STAR materials and use them often. I'm very pleased to know of the App. State diagnostic, Beth - very helpful!
Many thanks again for your thoughtful answers to each question and comment! Here is one last question from me for today:
What are some general principles for teaching students with uneven reading profiles?
This could easily be a whole LINCS discussion of its own. And, of course, dealing with uneven profiles is central to STAR training. As background, be sure to consult Teaching Adults to Read: A Summary of Scientifically Based Research Principles (2005) and Applying Research in Reading Instruction for Adults (2005, ed.gov). But let me briefly list some of the principles I've found most helpful for teaching students with uneven profiles:
1. Since you’re going to be teaching 3 or 4 separate components over the course of one class, you should make a lesson plan specifying each component you plan to teach, to whom, how long that part of the lesson will take, and what materials you will be using.
2. Remember, you are teaching several components because they support each other and together lead to improved comprehension. Stick to your lesson plan – if you planned to spend 20 minutes on collaborative oral reading, don’t extend it to 30 minutes, even if the learners are enjoying the heck out of it. Doing so means short-changing one or more of the other components, and all components are important, or you wouldn’t be teaching them in the first place.
3. You will probably have to group learners based on their differing profiles. STAR training goes into detail on how to do this for GLE 4-8 classes. In a simplified example, imagine a class made up of “Richards” and “Vanessas”. While you’re working on phonics and decoding with the Richards, the Vanessas (who don’t need phonics and decoding) could be working on one of the online Spanish/English cognate sites – assuming they were all native Spanish speakers – or writing and discussing sentences with each other using English Signal Words. You might be able to do an academic or Tier 2 vocabulary lesson with the whole class, but oral reading as one whole group might be a challenge. That is because the Richards would tend to read slowly and haltingly, and the Vanessas might have trouble understanding why the Richards are struggling and show their impatience.
4. Formative assessment is vital for all ABE classes. It’s critical for teachers because it forces us to ask, “What do we want the students to learn, and how would we know if they learned it?” Formative assessment is also critical for the learners because ABE learners often underestimate their own progress and become discouraged. Formative assessments remind them of what they’ve learned and help them to remember and use it.
5. Be constantly aware of the level of challenge presented by lessons in each component - alphabetics, oral reading material, and vocabulary lessons. If the material is too easy, it’s like lifting weights that are too light - it won’t build strength. If the material is too hard, it’s like trying to lift weights that are too heavy to budge off the floor, and that won’t build strength either.
6. This follows from the previous point - don’t guess at the difficulty of reading material. Use free online readability sites to check the readability of any texts you’re planning to give learners, such as the Automatic Readability Checker, a Free Readability Formula Consensus Calculator (readabilityformulas.com). Readability won’t tell you the whole story of whether a text is too hard or too easy for your learners, but it gives you a place to start, and it will help you to avoid mistakes like this one:
Some years ago, I observed a class of GLE 5-7 readers where the teacher had given the class an article from the newspaper that she felt would be interesting because it dealt with a hot-button local issue. But the class struggled with the article from start to finish – oral reading was choppy and disjointed, and silent comprehension was impossible. After the class she expressed surprise to me that her class had found the article so difficult. She was sure that the newspaper, a tabloid, was always written at 6th grade level. When we ran the article through an online readability calculator, it came out at 12th grade. Two lessons worth remembering – never guess at readability because it can definitely fool you, and it is a myth that entire newspapers are written at one level. There is a lot of variation within each paper: as you might expect, sports and pop culture articles usually have much lower readability levels than news, op-eds, or long features.
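For readers curious about what an online readability calculator is actually doing under the hood, here is a minimal Python sketch of one well-known formula, the Flesch-Kincaid Grade Level (consensus calculators like the one linked above average several such formulas). The function names and the crude vowel-group syllable heuristic are my own illustration, not the code any particular site uses:

```python
import re

def count_syllables(word: str) -> int:
    """Very rough syllable estimate: count runs of vowels,
    subtracting a common silent final 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)  # every word has at least one syllable

def fk_grade_level(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Because the formula sees only sentence length and word length, short simple sentences score at an early-elementary level while one long sentence of multisyllabic words scores far above 12th grade, which is exactly the limitation John describes next with the Shakespeare example: the formula measures surface features, not meaning.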
Such great points here, John, about readability and "eyeballing" text. I once got stuck in a short text that came out at an appropriate GLE for my students, but the text was loaded with adjective clauses and appositives, and I found myself in the middle of a tough lesson when students were not tracking the text because they weren't connecting these modifiers to the nouns and pronouns they were modifying! If I had had a lens for viewing text beyond just a GLE, I would likely have noticed this complexity before I ended up in the middle of the ocean without a life preserver! This example prompted me, even pre-STAR training and CCRS, to pre-teach a particular grammar structure just as I would pre-teach any other content to support students working with text, and to contextualize grammar instruction whenever I could ("Okay, students, highlight all the adjective clauses in the paragraph of text we just read and draw arrows to what the clauses are modifying!").
Once I started working more with readability formulas and quantitative and qualitative assessments, I realized that my "eyeballing" (or always relying on a publisher's GLE for a text) was not the best way to go. Many newspapers may be written at a 6th grade or intermediate reading level according to a readability algorithm, but the level of vocabulary, the number of text structures present, the number of sentences with multiple phrases and clauses, the background knowledge assumed by the writer, etc., can make what at first blush seems appropriate out of range for a particular group of students. For me, so much depends on the purpose of the text I'm using, whether for fluency practice, to build knowledge, to learn a reading strategy, to inspire, and so on. There is so much variation in one level of text!
Something I learned quickly when doing STAR DRA is that, in general, the students I was testing would score higher on leveled texts (both comprehension and fluency assessments) with a narrative structure than they would with more academic text structures. This information--when combined with an understanding of Tier-2 words and how to teach them as well as the realization that many of my Low Adult Secondary (or sometimes higher) students in my GED/diploma classroom have fluency issues--changed the game for how I work with students. I am thankful that my STAR training came in 2008 right after I completed my K-12 reading studies in graduate school because much of the content of STAR (explicit strategy instruction, for example) was validated by the reading classes I had just taken. I was trained on the QRI-4, and the STAR diagnostic test was in line with (though easier and quicker to administer!) what I had already learned.
You've raised several important issues. First, you are absolutely right that readability doesn't tell you everything you need to know about text selection. Readability is simply a starting point that we omit at our peril. In your example you described a text that was "loaded with adjective clauses and appositives." Without seeing it, maybe we could describe the style of that text as dense? As you explain, this example might come under the heading of qualitative evaluation of text - which includes levels of meaning, structure, and language conventionality.
Readability formulas rely on sentence length and word length, which certainly contribute to style and complexity. But readability formulas miss many aspects of style. For example, Shakespeare's plays are written as dialogue, so many scenes come in with very low readability levels. However, they can be difficult for us to understand because many of the words are archaic (bodkin), and the word order has been rearranged or reversed to make a line or sentence fit his metric scheme.
As you point out, teachers should pay attention to qualitative aspects of text complexity. The other aspect of text complexity is what the CCRS call reader-and-task. In a way, this is what you were doing retrospectively in your example above when you realized that the adjective clauses and frequent use of apposition were bedeviling your students. Your knowledge of your students and their experience with different texts, plus your ability to identify what might be unfamiliar or too dense for your students to understand, enabled you to do this.
You're probably familiar with the CCRS rubrics for determining qualitative and reader-and-task text evaluations, but let me include them here for folks who haven't seen them: Appendix B: Text exemplars for ELA, Social studies, Science and Technology. www.corestandards.org/assets/Appendix_B.pdf "Appendix B" also includes text exemplars of literary and informative texts at different grade levels. As part of the Common Core, they were designed for K-12, but are quite applicable to our adult learners.
Thanks again, Kristine, for delving into this issue. Maybe LINCS could organize a discussion totally devoted to text selection?