While I have been very concerned about the process by which the school board has addressed the GATE/AIM program, I have not taken an overall position on the GATE program itself. Frankly, I am not an expert here; the questions are many and the issues complex.
The message that seemed to be delivered on Thursday night was that the system is going to change. And I think this gets to the core – why are we changing the GATE/AIM program? Is it not working?
That is where things get tricky. There are a number of different things we need to look at just to ask that simple question.
UC Davis Researcher Scott Carrell led a team of researchers to address the question, “How Does the AIM Program Affect Student Outcomes in the Davis Joint Unified School District?”
They conclude, among other things, “Identification into the program has been both inconsistent and variable over time. The number of students who have qualified for AIM through universal testing has decreased while the number of students who have qualified through private testing and retesting has increased.”
There has been a drop, they found, in the average and minimum OLSAT (Otis-Lennon School Ability Test) scores and “[t]hese changes have resulted in a lack of transparency about who is truly ‘fit’ for the program.”
One key thing that the district needs to figure out is whether GATE/AIM is to be an honors program or a program for gifted students who are not having their needs met otherwise. Answering this question is at the core of the problem here.
The report continues, “We also find no evidence that the program positively affects achievement scores of participating students and no evidence that the program negatively affects nonparticipating students.”
The authors argue, “This ‘no benefit, no harm’ finding should be considered within the context of the program’s cost: the DJUSD spends considerable resources on universal testing and retesting (as do private citizens for additional testing) and given the financial costs and capacity constraints associated with this program, we should expect these costs to be balanced by some measurable benefit.”
On the other hand, advocates of the program would counter that, in actuality, the GATE program costs less than $20 per student of additional funding to run. So perhaps the resource issue is overstated and can be streamlined fairly easily.
For any such analysis, the statistics are somewhat fuzzy. The results are sensitive to the choice of statistical model used to estimate the relationship between AIM and performance, and at least one commenter on Thursday questioned the regression model behind the “no statistically significant effect” finding.
But there are all sorts of problems that the researchers must grapple with here. As they acknowledge, “Comparing the average outcomes of students in AIM self-contained classrooms with students not in AIM self-contained classrooms does not provide a good estimate of the causal effect of AIM.”
The researchers also cannot implement a controlled experiment “to disentangle the actual effect of a program from the effect of different types of individuals choosing to participate or not participate in a program.”
They ultimately settle on a regression discontinuity research design: “However, because of how students qualify for AIM in the DJUSD, certain parts of the qualification process are effectively random. It is this randomization that we exploit to estimate the effect of the AIM program on students in the program. This approach is called a regression discontinuity research design (RDD).”
As a former social scientist, I can say that, during my research, we often had to get creative in designing ways to analyze data for which there were not good controls. However, it is one thing to advance an understanding of a field using such methods – it is another thing to use a “quasi-experimental method” as an instrument of public policy.
Here they attempt to compare those students who just miss the threshold with those who just make the threshold.
In the end they find, “Students just to the right of the qualification threshold score, on average, 3 points higher on their ELA CST [English-language Arts California Standards Tests] and 6 points lower on their math CST than students just to the left of the qualification threshold. These effects are small and statistically indistinguishable from zero. As such, there is no evidence that the AIM program has an effect on students in the AIM program.”
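For readers curious about the mechanics, the regression discontinuity logic can be illustrated with a short simulation. This is a minimal sketch with made-up numbers, not the study’s actual model or data: students get a hypothetical qualification score, those at or above a cutoff “qualify,” and separate linear trends are fit just left and right of the cutoff to estimate any jump in the outcome at the threshold.

```python
import numpy as np

def rdd_estimate(running, outcome, cutoff, bandwidth):
    """Sharp RDD: fit a separate linear trend within `bandwidth` on each
    side of `cutoff` and return the estimated jump in `outcome` there."""
    left = (running >= cutoff - bandwidth) & (running < cutoff)
    right = (running >= cutoff) & (running <= cutoff + bandwidth)
    fit_left = np.polyfit(running[left], outcome[left], 1)
    fit_right = np.polyfit(running[right], outcome[right], 1)
    return np.polyval(fit_right, cutoff) - np.polyval(fit_left, cutoff)

rng = np.random.default_rng(0)
score = rng.uniform(70, 130, 5000)              # hypothetical qualification score
outcome = 0.5 * score + rng.normal(0, 5, 5000)  # no true effect at the cutoff

# With no real program effect, the estimated jump should be near zero.
no_effect = rdd_estimate(score, outcome, cutoff=100, bandwidth=10)

# Add a genuine 6-point jump for qualifiers; the same estimator should
# recover roughly 6.
with_effect = rdd_estimate(score, outcome + 6 * (score >= 100),
                           cutoff=100, bandwidth=10)
print(round(no_effect, 2), round(with_effect, 2))
```

The choice of bandwidth matters: too narrow and the estimate is noisy, too wide and students far from the threshold contaminate the comparison, which is one reason such estimates can be contested.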
But there are problems with this approach. First of all, should we expect GATE/AIM to have a profound effect on measurable achievement? Second, should it be immediate? Third, couldn’t these results suggest that the program is working rather than not working?
The first problem here is the assumption that the program’s effect should show up as improved CST scores. Or, to put it another way, what is the evidence that the CST is a finely tuned enough measurement device on which to base our assessment of the GATE program’s success or failure?
Along with that is the question as to whether we should expect to see marked improvement within a given year.
Finally, there is the problem that in the ideal world, where you are identifying kids who are good fits for the GATE/AIM program, they should be helped by that program. On the other hand, kids who are not GATE/AIM program qualifiers should better fit in their traditional classes.
If that is not happening, if GATE simply is a better program that would benefit all students, then we need to incorporate the program for all students.
And that is the whole problem I have with this entire debate. Some are arguing that we are over-identifying GATE/AIM students as though the status itself were some sort of prize rather than a program meant to help those who need it.
As such, I think the entire premise of the study is flawed. The real question we should be asking is: does the AIM program help those who enroll in it? Who does it help? Who is better helped by a more traditional program?
After all, if 75 percent of Davis students are helped by going to the GATE program, the question should be: why aren’t we doing it for 75 percent of the students?
When I listen to Frank Fox, a retired educator, I find he makes a lot of sense: “We live in 2015, why aren’t we now working on an individual plan for each and every kid that’s in the district?” Are there ways for us to create individual programs for each and every student? Should [that] be the goal?
On the other hand, I hear comments about things like “segregation” and “bullying,” and I have to question those terms. As several parents put it, GATE has become some sort of a status symbol that GATE-identified students use against the non-GATE-identified students.
I don’t see that as a problem with the GATE program. That is typical behavior for children, who will use differences as a means to bully other students, often as a crutch for their own insecurities.
The objection to me is that we are again using GATE/AIM as a status symbol – part of that stems from the name “gifted and talented education.”
Against that backdrop are the calls for differentiated instruction.
As I read the California Department of Education’s guidelines on differentiated instruction, it notes that cognitive research results “suggest that a one-size-fits-all approach to classroom teaching is ineffective for most students and even harmful for some.”
The principles that point clearly to the need for differentiated instruction are listed as follows:
- Need for emotional safety. Learning environments must feel emotionally safe to students for the most effective learning to take place.
- Need for appropriate challenges. Students require appropriate levels of challenge. When students are confronted with content and performance standards well beyond their level of readiness, intense stress frequently results… A one-size-fits-all approach to teaching produces lessons pitched at a single-challenge level, virtually ensuring that many students will be overchallenged or underchallenged. Neither group will learn effectively. Research supports the conviction that all students should strive to meet the same content and performance standards, although many will do so at different levels of acceptable proficiency.
- Need for self-constructed meaning. Students need opportunities to develop their own meaning as new knowledge and skills are encountered. They have different learning styles, process ideas and concepts differently, have varied backgrounds and experiences, and express themselves differently. All must be helped to assimilate new knowledge and skills within the framework of prior personal experiences.
The real question, then, isn’t whether we have differentiated instruction, but how we structure it. Those opposing the self-contained GATE program seem to believe that students would benefit most from differentiated instruction within a single classroom.
But I recall someone from my youth – a person very bright and gifted but not a high achiever, largely because the in-classroom material was unchallenging and uninteresting. That person was quickly ostracized for being different and sought emotional safety by playing down to the masses rather than excelling. While this individual would eventually overcome these challenges, much of public schooling was a waste of time for this person.
These are not simple issues and, while the system appears to be working in some respects, it is very clear that testing and identification need to be improved. If students who would benefit from AIM are excluded from it, then I think we need to rethink what AIM should be.
On the other hand, it is very clear to me that there is a class of students who would benefit from being able to progress at their own pace and in the safety of a self-contained program. The challenge for the district is to properly identify those students and make sure that the program serves the needs of all.
—David M. Greenwald reporting