Over the summer, the Vanguard has worked to drill deeper into the discussion of the AIM program. One of the critical questions has been how the district currently selects AIM students – and, despite all of our conversations, we may still not be at the bottom of that question, or of how the district could do it better.
Based on recommendations from those with various positions on the program, we will look at some articles that address these issues. We begin with a 2011 article by Frank Worrell and Jesse Erwin, researchers at UC Berkeley. They point out that one of the obstacles to creating a perfect tool for classifying children as gifted or not gifted is a lack of agreement on just what “gifted” means (and, as we have pointed out previously, gifted might not be the best word).
They cite “growing awareness that (a) giftedness involves more than just ability or potential in a domain, and (b) the classification of giftedness may shift across developmental periods.”
They note, after laying out a number of models of giftedness, that none of them “have emerged as dominant based on empirical evidence.”
However, they offer their own starting place: “Identification for GATE programs should be based on evidence of outstanding achievement or the potential for outstanding achievement in any academic domain relative to peers of the same age or grade level. Although identifying students as gifted is often constrained by the availability of funding, the goal should be to serve the largest number possible, given both errors of measurement and our inability to predict who will continue to qualify for the designation, gifted.”
As we have noted, “Traditionally, programs for gifted students can be classified as either enrichment programs or acceleration programs, and these programs serve different goals.”
They write, “In an acceleration program, a student who is identified as gifted is allowed to move through the curriculum at a faster pace than his peers with the goal of completing a particular course of study more quickly.” On the other hand, the enrichment program is a more frequently used alternative in public schools. It “has the goal of allowing students to examine concepts in a domain in greater depth or at an earlier age than they might in a typical classroom.”
There are other schools that “combine enrichment and acceleration approaches with students moving through some courses at a faster pace than they would in a regular high school while also engaging in some aspects of a discipline in greater depth.”
They write, “GATE programs should involve appropriate education that is individualized for the student in much the same way that special education is supposed to be.”
Still, they highlight the key question facing us: “Is the goal of [the identification process] to examine and serve students who have the potential for giftedness, or is the goal to select those students who are already clearly more advanced than their peers?”
The authors then outline a screening process – “The goal of screening in the identification process is to canvass the school population for students who are demonstrating outstanding achievement or may have the potential to be outstanding achievers.” These processes include: nomination forms, standardized tests, academic work, and interest inventories.
The authors write, “Once data from screening measures are available, district policy should determine the percentage or number of students that will move on to more rigorous assessment procedures.”
They note traditional cognitive measures, for which DJUSD uses the OLSAT (Otis-Lennon School Ability Test). The danger, of course, is that such tests may be culturally biased. The authors therefore write, “Nonverbal measures of cognitive ability have garnered attention as they have been described as culture-fair and thus more equitable for culturally or linguistically diverse populations.”
“The logic driving the use of these tests is that linguistic, cultural, or economic obstacles have hindered certain students from demonstrating giftedness and that by reducing verbal demands, nonverbal test scores are more valid indicators of cognitive ability,” they write. But they add that “the claim that nonverbal tests are fairer for ethnic and linguistic minority children has been challenged on several grounds.”
They note that “to the extent that the achievement gap still exists, a test that does not manifest the gap is necessarily less predictive of academic achievement (i.e., lower in predictive validity) than one that does.” They argue, while these tests are useful in assessing the cognitive abilities of English language learners, “it is impossible to measure verbal, quantitative, or spatial reasoning skills without recourse to verbal, quantitative, or spatial concepts or knowledge.” They also find that there are no long-term studies that show the “predictive validity of nonverbal measures for gifted students and thus no evidence supporting their exclusive use in identifying giftedness.”
In short, they recommend that, while the use of nonverbal tests is appealing, “school psychologists should supplement these scores with additional information about a student’s functioning.”
My understanding is that DJUSD used to do something like that until 2008, when it let go its GATE counselor due to budget issues. Responsibility for testing then fell to the GATE coordinator. To the extent the district continues GATE, it should consider reinstating the counselor position.
The authors then create a proposed checklist for identifying students:
- What is the goal of the gifted program in your school or district? Is it enrichment or acceleration and on what basis is this decision made?
- What academic and other domains (e.g., mathematics, language arts, science) is the program planning to target?
- What domain-specific skills are important to measure alongside measures of general intelligence?
- Are the individuals being identified likely to have well-developed skills in the domain or do preskills need to be assessed?
- Is the level of exposure to the domain likely to vary widely among individuals being assessed on the basis of background variables such as socioeconomic status or first language? If yes, how are these concerns addressed in your identification protocol?
- Are screening and selection data being collected from multiple sources?
- Do identification data include information on interests, motivation, self-regulation, and willingness to engage in effortful learning?
- Are data from raters (e.g., teachers, parents [not students]) low-inference or behavioral in nature and collected using instruments that are less likely to be susceptible to halo effects?
- What is the nature of the reliability and validity evidence in support of scores being used for identification?
- Are local norms available for use as part of the identification procedures?
- Who is providing the instruction and are they certified for teaching gifted students?
- Is the curriculum in keeping with the curriculum standards proposed by NAGC [National Association for Gifted Children] and CEC [Council for Exceptional Children]?
- Will the curriculum allow students from all backgrounds to see the contributions of individuals like themselves?
- Will the program be sensitive to perceived belonging on the part of students from traditionally underrepresented groups?
- How will the effectiveness of the program be evaluated across all of these domains?
In coming columns, we will evaluate some other research, but this article seemed comprehensive. I do want to stress that most of these columns are intended to generate discussion and to advance an understanding of the rather complex issues involved.
—David M. Greenwald reporting