PIAAC: the Program for the International Assessment of Adult Competencies

 

Colleagues,

In this issue of the CAAL newsletter, there is info on PIAAC, the Program for the International Assessment of Adult Competencies.  The project team just completed its second meeting, and some materials are available that explain the nature and goals of this international effort.  The assessment will cover skills such as reading and writing, but it will also gauge problem solving and the use of technology.  It appears to be a very comprehensive approach to gauging skill and ability.  One of the issues they hope the new assessment will address is the nature of wage inequality, which I find absolutely intriguing.

Below is the write-up from the CAAL newsletter which has links to further information.

Has anyone heard about this assessment?  Have you been involved in some way with its development?  What are your thoughts regarding assessments that go beyond literacy and math skills and also gauge abilities like problem-solving?  Do you think that such an international assessment is useful/valuable?  

thanks! marie

 

The American Institutes for Research held a second invitational meeting in Washington, D.C. on March 13th to give a progress report and explain the key features of PIAAC, the Program for the International Assessment of Adult Competencies.  Representatives of several workforce development groups were in attendance.  Irwin Kirsch of Educational Testing Service (ETS is the lead contractor in this international effort) gave a slide presentation to promote better understanding of the project.  The presentation is available in PDF from the CAAL website.

 

The assessment findings will be released internationally this fall, along with an online resource that will enable organizations, states, and other entities to assess the competencies and skills of specific populations in real time.  CAAL's E-News of December 2012 provides details.

 

Some interesting highlights of the March meeting follow:

  • The Departments of Education and Labor plan to work closely together to make full use of the PIAAC results.  Important interconnections and communications are being developed between these two and other federal departments.  Education and Labor officials noted that the PIAAC data will be "a galvanizing force for federal planning and action" and "help us form a bold ambitious plan to transform adult education and learning."
  • A PIAAC research conference will be held in D.C. in November, including a two-day tutorial on how to use the online self-assessment resource.  In the meantime, field testing of the online assessment tool will take place in various settings during June, and several focus group sessions are planned.
  • The PIAAC assessment design was complex.  Among other things, multi-stage adaptive testing and the use of "testlets" allow it to measure a much broader range of competencies and skills (a minimal sketch of this routing idea appears after this list).  It can do automatic scoring in real time, and it will generate trend data with ties back to the IALS of the 1990s and the ALL of 2002-03.
  • PIAAC is the first study to test reading across languages -- the PIAAC component specifically designed to provide a better understanding of people at the lower end of the competency/skill scale.  It will also be able to show how much literacy is needed for certain purposes, such as problem solving, with data provided in a way that will help programs build service interventions.
  • PIAAC will make it possible to compare skills by educational levels, on a country-by-country basis, and will be "strong on what it means to be at a certain point on the scale."    
  • PIAAC's scope and results will be geared more to the private sector (e.g., recruiters and employers) than earlier assessments.    
  • The PIAAC may explain more about the nature of wage inequality in the U.S. and abroad than earlier assessments.  For example, the U.S. labor market probably relies more heavily on temporary employment contracts, which may be one explanation for its high inequality ranking.
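
For readers unfamiliar with the multi-stage adaptive design mentioned above, here is a minimal sketch of the general idea in Python. It is my own illustration, not PIAAC's actual routing logic: the testlet names, answer keys, and cut points are invented for the example.

```python
# A minimal sketch of two-stage adaptive testing with "testlets":
# every respondent takes a short routing testlet, and the score on it
# decides which second-stage testlet they see.  All names, keys, and
# cut points here are invented; PIAAC's real routing is more elaborate.

def score_testlet(responses, answer_key):
    """Automatic real-time scoring: fraction of items answered correctly."""
    return sum(r == k for r, k in zip(responses, answer_key)) / len(answer_key)

def next_testlet(stage1_score):
    """Route to an easier or harder second-stage testlet, so that a
    short test can still cover a broad range of difficulty."""
    if stage1_score < 0.4:
        return "stage2-easy"
    if stage1_score < 0.8:
        return "stage2-medium"
    return "stage2-hard"

# A respondent answers 3 of 5 routing items correctly ...
s1 = score_testlet(["a", "c", "b", "d", "a"], ["a", "b", "b", "d", "c"])
print(s1, "->", next_testlet(s1))  # 0.6 -> stage2-medium
```

Because each respondent sees only the testlets matched to their level, the survey can cover a wide range of difficulty without making any single test session long.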

For information on the PIAAC background questionnaire, its history and design, and other aspects of this unprecedented international project (more than 22 countries are involved), visit the website of the National Center for Education Statistics.

Comments

Thanks for posting this, Marie. I think the PIAAC will be a very important source of information for us in adult education and training. The Department is very interested in making it widely understood and used. Watch the OVAE Blog for updates as we post new developments. Here is the post referencing the meeting Gail attended: http://www.ed.gov/edblogs/ovae/2013/03/18/piaac-a-once-in-a-decade-opportunity/

Marie: In 1993, after the National Adult Literacy Survey came out, federal funding for the Adult Education and Literacy System (AELS) decreased for the next three years. In 2003, after the National Assessment of Adult Literacy (NAAL) came out, President Bush's education secretary announced that the findings supported the President's call for a $1 billion program for America's high schools. Nothing more for the AELS was called for by the Bush administration, and in the years after 2003 funding for the AELS declined, enrollments plunged from around 2.75 million to 1.85 million, Even Start funding was later dropped, and hundreds of programs for adults' and children's literacy disappeared. A few states used the NALS and NAAL to lobby for more funds for the AELS in their state, and some got additional funding. But now we hear of the loss of millions of state dollars for the AELS as states shift their focus from adult basic education programs to early childhood programs. So it seems to me that, from a political perspective, very little good for the AELS has come from past assessments, and little will come from the PIAAC.

Technically, the PIAAC continues the same invalid methodologies as the YALS, NALS, IALS, ALL, and IALSS (see article below). As for problem solving, there is zero evidence for a generic something-or-other called "problem solving" (IQ? "g"?). In fact, all of the literacy assessments are forms of problem solving. And if you get up in the morning and put your shoes on your feet instead of your ears, you have done problem solving. So it is very unlikely that much will come of the PIAAC problem-solving work.

I have repeatedly asked people at workshops and speeches I have presented how many have heard of these national and international assessments and very few have, and almost no one has seen an item from these tests. So the assessments do not appear to have influenced educational practice in the AELS, either. But those who have not studied their history are doomed to repeat it (paraphrasing Santayana)! Remember the Army Alpha and Beta Tests of World War I? They measured innate intelligence!

 

Tom Sticht

 

151221 Maliteracy Practice in the Assessment of Adult Literacy

 

Published online at Education News, Wednesday, March 16, 2011

 

Tom Sticht – “Maliteracy practice” is a term I have coined to refer to professional wrongdoing, intentionally or inadvertently, that results in injury or damage to people as a result of taking actions or making statements about their literacy competence.

 

Maliteracy testing practices leading to the defamation and gross misrepresentation of adult literacy competence in many nations began in the mid-1980s in the United States with the National Assessment of Educational Progress’ Literacy Profiles of America’s Young Adults (aged 21 through 25).

 

For short I call this the Young Adult Literacy Survey (YALS).

 

The YALS developed a methodology for assessing the performance of tasks involving complex information processing abilities of adults. The tasks all involved the use of printed materials to solve cognitive tasks ranging in difficulty from easier to more difficult, hence they were called measures of literacy (reading), though other unidentified and unspecified cognitive abilities were required to perform the tasks.

 

The YALS developed three separate scales of what was called literacy: prose literacy, document literacy, and quantitative literacy. Each scale was used as a separate indicator of literacy ability in that area.

 

Unfortunately, this separation failed to indicate the adult's competence when summed across all three scales. This maliteracy practice led to an inaccurate representation of the adult's overall literacy competence.

 

Another problem with the YALS lay in its use of a psychometric scale that required a person to have an 80 percent probability of performing tasks at a given difficulty score to be called proficient at that score level. The U.S. National Academy of Sciences, National Research Council (NAS/NRC) later reported that this is too stringent a standard, and other experts noted that it produced four times as many errors of saying a person could not do a task when in fact he or she could as errors of saying a person could do a task when in fact he or she could not. Because of the maliteracy practice of using this overly stringent standard, competent adults were more likely to be falsely called incompetent in literacy.
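
To make the effect of the 80 percent response-probability criterion concrete, here is a minimal sketch in Python. It assumes a simple one-parameter logistic (Rasch-type) model in logits; the actual surveys used more complex IRT scaling reported on a 0-500 scale, so the numbers are illustrative only.

```python
import math

def p_correct(theta, b):
    """Probability that a person of ability theta answers an item of
    difficulty b correctly, under a Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def credited_difficulty(theta, rp):
    """Hardest item difficulty at which a person is still called
    proficient under response-probability criterion rp.  Solving
    p_correct(theta, b) == rp for b gives b = theta - ln(rp / (1 - rp))."""
    return theta - math.log(rp / (1.0 - rp))

theta = 0.0  # a person whose true 50/50 point is at difficulty 0.0
for rp in (0.80, 0.67, 0.50):
    b = credited_difficulty(theta, rp)
    assert abs(p_correct(theta, b) - rp) < 1e-9  # sanity check
    print(f"RP{round(rp * 100)}: credited up to difficulty {b:+.2f} logits")

# RP80: credited up to difficulty -1.39 logits
# RP67: credited up to difficulty -0.71 logits
# RP50: credited up to difficulty +0.00 logits
```

The stricter the criterion, the farther below a person's actual 50/50 point the credited difficulty falls, which is the underestimation described here; the NAAL's later move from 80 to 67 percent (discussed below) roughly halves that downward shift.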

 

An additional maliteracy practice with the YALS lay in denying competence above the person's assigned score, when in fact the person might be able to do more difficult tasks, just with a probability lower than 80 percent. Like the competence that was ignored by not summing across the three scales, the competence disallowed for tasks above one's assigned score was nonetheless part of the person's ability, but they did not receive credit for it. Hence adults were likely to be called less proficient in literacy than they actually were.

 

The methodology of the YALS was subsequently used in the 1993 National Adult Literacy Survey (NALS), the 1995-97 International Adult Literacy Survey (IALS), the 2003 Adult Literacy and Life Skills Survey (ALL), and, in Canada, the International Adult Literacy and Skills Survey (IALSS). However, the NALS, IALS, ALL, and IALSS all used a different approach from that of the YALS for reporting scores. They divided the measurement scale (nominally ranging from 0 to 500, but with most scores falling between 150 and 400) into five levels of proficiency: Level 1 (scale scores from 0 to 225), Level 2 (226 to 275), Level 3 (276 to 325), Level 4 (326 to 375), and Level 5 (376 to 500).
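
As a quick illustration of this reporting scheme, a score-to-level lookup using exactly the cut scores listed above might read as follows (a sketch for illustration only; the published surveys computed levels inside their own scaling software):

```python
# Upper bound of each of the five IALS/NALS proficiency levels.
LEVEL_UPPER_BOUNDS = [(225, 1), (275, 2), (325, 3), (375, 4), (500, 5)]

def literacy_level(score):
    """Map a 0-500 scale score to its IALS/NALS proficiency level."""
    for upper_bound, level in LEVEL_UPPER_BOUNDS:
        if score <= upper_bound:
            return level
    raise ValueError(f"score {score} is outside the 0-500 scale")

print(literacy_level(270))  # Level 2
print(literacy_level(276))  # Level 3
```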

 

This use of the five levels for categorizing people into literacy ability levels led to the next major maliteracy practice with these assessments of adult literacy: the statement in the IALS 1997 report that "Level 3 is regarded by many experts as the minimum level of competence needed to cope adequately with the complex demands of everyday life and work...". However, in a report from the U.S. National Academy of Sciences, National Research Council (NAS/NRC), the NAS/NRC panel stated that the methodology of the NALS did not provide information about the "mismatch" of skills of adults and demands of the economy, or anything else for that matter, but that "unsupported inferences" along these lines were made by some.

 

Unfortunately, all of the national and international reports cited above continued this practice of drawing “unsupported inferences” about the levels of literacy needed by adults to meet the demands of contemporary societies in the industrialized world.

 

Though no empirical evidence has ever been produced to support this statement, and no experts have been identified to verify it, it is frequently cited by those concerned with adult literacy education. For instance, in a 2008 report entitled Reading the Future: Planning to Meet Canada's Future Literacy Needs (www.ccl-cca.ca), the Canadian Council on Learning projected that by the year 2031 more than 15 million adults "will continue to have low literacy skills below IALSS Level 3, or the internationally-accepted level of literacy required to cope in a modern society."

 

In an ironic twist, the faulty methodology developed in the United States and exported for use in the international adult literacy surveys was rejected and changed in conducting the 2003 National Assessment of Adult Literacy (NAAL) in the United States. Following the NAS/NRC panel's findings about the problems produced by the 80 percent standard for assigning competence at a given level, the NAAL changed the standard to 67 percent, and instead of the five levels of literacy used in the previous assessments, it used four levels called Below Basic, Basic, Intermediate, and Proficient.

 

Still, the NAAL repeated the maliteracy practices identified above in not summing across the scales and not giving credit for being able to perform tasks with less than a 67 percent chance of being correct, which clearly leads to an underestimation of the literacy competence of adults in the national and international surveys.

 

The problem of underestimated competence was also indicated by the fact that large percentages of the adults declared unable to cope adequately in the contemporary knowledge society because of their low literacy scores were, in fact, employed, and many earned more than many of those scoring at literacy Level 3 or higher on the international surveys.

 

A final problem of note in carrying out the national and international assessments of adult literacy, whatever other cognitive abilities may also be involved, is that the assessments did not recognize the special bodies of knowledge that adults may develop in their jobs, hobbies, community service, or other life activities. A person may be interested in a very specialized area, such as collecting insects, develop a high level of knowledge, and be quite literate in that special area, yet broad, general assessments of literacy may not capture such ability.

 

Given the many maliteracy practices in the assessments of adult literacy, and the great harm they have done in defaming the workforces of many nations, it is clear that new approaches are called for in characterizing the cognitive abilities of a nation's adults. At present, the national and international surveys have produced characterizations of adults' skills that governments apparently do not actually believe indicate a major problem with adult literacy. This is indicated by the marginal status of adult literacy education in all of the nations involved in such assessments, both in its funding and in the lack of professionalization of its teachers and tutors.

 

It is also indicated by the adults themselves, some 95 percent of whom think they have little or no problems with their literacy and do not enroll in programs. If adults do not think they have a problem, and governments do not think the problem is serious enough to take strong actions in support of adult literacy education, why are we continuing to invest millions of dollars in these assessments?

 

Stay tuned for the results of the Program for the International Assessment of Adult Competencies (PIAAC) coming in 2011! [Note: as it turns out, the PIAAC did not come out in 2011]

 

tsticht@aznet.net