In the last few weeks, we have been asking a series of questions in an attempt to better understand the complaints against the current GATE/AIM system and potential ways forward that could gain both community and board support.
Recently we had a chance to walk through some of the problems of GATE/AIM identification. Here we pull from the PowerPoint presentation by Scott Carrell and his colleagues.
There are supposed to be three ways that students qualify for AIM. First, through universal testing – “if a student scores in the 96th percentile on their total score AND on either their verbal or nonverbal score on the OLSAT.” Second, through retesting – if they are within five standard errors of the 96th percentile they automatically qualify for retesting. Third, by private testing where a student can qualify by taking a test of mental reasoning administered by a licensed psychologist.
The problem is that the district is not enforcing the rule that retesting with the TONI be limited to students within five standard errors of the 96th percentile. Instead, as we see below, students are qualifying for AIM through retesting who scored well below the 80th percentile, and their number is rapidly increasing.
This series of graphs shows that, increasingly, the AIM population is becoming a pool of students of mixed ability in the AIM classroom.
This takes us back to the point raised last week by Jann Murray-Garcia, who back in 2002-03 raised the alarm that “no African-American or Latino third-graders in the entire Davis school district were recommended by teachers to sit for the GATE test.” The OLSAT (Otis-Lennon School Ability Test), then as now, was part of the problem. Back in 2005 it yielded few African-American or Latino students, and now, as we have seen, it continues to identify predominantly white and Asian students.
As a result, the district instituted the TONI (Test of Nonverbal Intelligence) to address racial inequity and bias. And it worked. By 2005, the TONI increased the black and Latino population “dramatically,” with percentages basically matching the proportion of African-American and Latino students in the district.
But over time, fewer and fewer students were identified through the OLSAT and more through the TONI and private testing. By 2011-2013, just 24 percent of identified students were identified through the OLSAT, while 27 percent were identified through private testing and a whopping 49 percent through the TONI.
Tobin White noted that just 3 percent of students score at the 99th percentile through the OLSAT, while 28 percent do so through the TONI. He found that students administered the TONI were six times more likely to qualify than those taking only the OLSAT, and nine times more likely to score in the 99th percentile.
Not only that but the justification for using the TONI seems questionable. As noted above, the district guidelines suggest only retesting those who are relatively close to qualifying anyway – but that clearly has not been enforced as we see with the graphics above.
Moreover, the TONI is justified based on a number of risk factors, some of which include learning disabilities or low SES (socioeconomic status), but that does not appear to be enforced either. Three hundred thirty-one of the 492 students retested through the TONI had no risk factors at all, calling into question how we are identifying them for retesting.
It seems relatively easy to identify problems with the current identification system, but it seems much more difficult to fix it.
Recall that the reason that the district went to the TONI in the first place was that teachers were not recommending African-American and Latino students to GATE, and OLSAT was not identifying many for the GATE program.
If we remained with the OLSAT alone – something I don’t think anyone is actually recommending – we would end up with an AIM program that is 92 percent white and Asian, with only 7 percent of students Latino or black. The TONI, on the other hand, is identifying 54 percent white and Asian students, with 31 percent Hispanic and 8 percent black.
Clearly, the district needs multiple indicators, one of which needs to be a test that can assess students nonverbally in order to account for potential learning disabilities and the language-skill limitations that can accompany low SES. The real question is: how do they do that?
Creating a fair identification system is vital for being able to continue the program. And that will undoubtedly be a huge challenge going forward.
—David M. Greenwald reporting