Posted on 09/03/2014 10:53:17 AM PDT by MichCapCon
If the Michigan Department of Education were to take the socioeconomic status of students into consideration when ranking school performance, it would find that charter public schools outperform conventional public schools on the state's Top-to-Bottom list, according to an analysis done by Audrey Spalding, education policy director for the Mackinac Center for Public Policy.
Spalding's analysis found that if there are two schools serving the same grade levels, both with the same percentage of students eligible for a free lunch, one a charter public school and the other a conventional public school, the charter would, on average, be ranked 5 percentage points higher in the state's rankings.
The statistical analysis adjusts test scores based on poverty levels, or in this instance the number of students eligible for a free lunch.
Charter schools in Michigan have significantly more students who are eligible for free lunch than conventional public schools. Some 66.4 percent of charter school students are eligible for free lunches, while 39.2 percent of conventional school students qualify for free lunches.
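Roughly, the adjustment described above can be approximated with a simple regression. A minimal sketch in Python, assuming a hypothetical school-level file and made-up column names (ttb_percentile, pct_free_lunch, is_charter) rather than the Mackinac Center's actual data:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: one row per school with its Top-to-Bottom percentile,
# free-lunch share, and a 0/1 charter flag. Column names are assumptions.
schools = pd.read_csv("michigan_ttb.csv")

# Regress the ranking on free-lunch eligibility plus a charter indicator. The
# charter coefficient estimates the average gap between a charter and a
# conventional school serving students with the same free-lunch rate.
model = smf.ols("ttb_percentile ~ pct_free_lunch + is_charter", data=schools).fit()
print(model.params["is_charter"])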
"It is interesting that the Michigan Department of Education is working to crack down on charter schools, despite the fact that they outperform conventional schools on the department's own ranking when student background is considered," Spalding said.
The Mackinac Center for Public Policy included socioeconomic status of students in its context and performance report cards for all public high schools and elementary/middle schools starting in 2012. Later, Bridge Magazine and Public Sector Consultants released public school rankings also based in part on socioeconomic status of students.
Jack Schneider, assistant professor at College of the Holy Cross, said test scores are a much more accurate measure of race and family income than they are of what students learned in school.
If that's the case, then adjusting scores based on poverty level would be fairer when evaluating schools and teachers. However, the Michigan Department of Education has resisted including the socioeconomic status of students in its rankings.
In addition to this analysis of the state's TTB rankings, more rigorous research shows that charter public schools perform better in Michigan.
Stanford University's Center for Research on Education Outcomes (CREDO) released a report in January 2013 that found the average Michigan charter school student outperformed conventional school peers on 52 of 56 outcomes tested. Charter school students received what amounted to an additional two months of learning in reading and math over an academic year compared to their conventional school peers.
Gotta love one public school comparing itself against another.
My daughter works for a charter school. The public school teachers' unions are working triple-overtime to try to shut them down. They just can't stand the thought of one public dollar being drained away from their kitty.
They are attacking them with gutter-slime false allegations, and inspiring the usual Democrat weasel suspects to launch witch hunts.
The previous California state testing system was doing exactly this, rating schools by “similar schools” rankings based on all available demographic information. The formula was a result of a complex regression model.
Worked very well.
Brown's Democrat administration seems to have done away with it.
Well, I lied. The California test data is still indeed available (2013 most current year) including the similar schools rankings.
I may go try a re-analysis of this.
“... [CHARTER SCHOOLS] despite the fact that they outperform conventional schools on the department’s own ranking when student background is considered,” Spalding said.
I'm not understanding what the intent of "student background" is. It sounds suspiciously like a naturally shifted bell curve with "student background" as a component.
Means they’re hiding something grossly out of proportion.
It's quite straightforward.
Most performance variation among students is indeed entirely due to the nature of the student. If a school has students that are programmed by family and tradition to be excellent (100% Chinese say) it takes extremely bad teaching indeed for them to produce bad test scores.
If one wants to rate schools in terms of the value they add to educational results, you have to equalize for the students. Otherwise you will end up rating a stupidly run school employing an entirely drunken crew of ignorant teachers, but with 100% Chinese students, above a tight ship operated by a totally disciplined crew with 100% black children of single parents whose siblings are all gangsters.
The California system was to assign schools to a batch of “Similar Schools”, based on a demographic regression model that tried (quite well I thought) to extract all the student-based performance factors.
The California testing data is still excellent.
If anyone wants to do a straight-up charter/non-charter analysis, this is, I think, the world's best database -
http://www.cde.ca.gov/ta/ac/ap/apidatafiles.asp
This is for all California public schools, of which there are more than in any other US state, and they educate statistically significant numbers of every race, color and creed of mankind.
All data are available for download as flat files. School test results are the records, along with school demographics and a host of other info. Schools are flagged by type, including charters vs. non-charters. I strongly recommend using the "Similar Schools" ratings.
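For anyone who grabs those files, a rough sketch of the charter/non-charter comparison in Python, with guessed field names (CHARTER, SIM_RANK) and file name that should be checked against the record layouts posted with the downloads:

import pandas as pd

# Field names are assumptions; verify against the CDE record layout first.
api = pd.read_csv("api13gtx.txt", sep="\t", dtype=str)        # hypothetical growth file
api["SIM_RANK"] = pd.to_numeric(api["SIM_RANK"], errors="coerce")
is_charter = api["CHARTER"].fillna("").str.strip() != ""      # blank = not a charter (assumed)

# Average Similar Schools rank: charters vs. conventional schools.
print(api.loc[is_charter, "SIM_RANK"].mean())
print(api.loc[~is_charter, "SIM_RANK"].mean())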
It doesn’t make sense to me yet.
What is being compared (exactly), and how is it "normalized"?
What is the reference? I get suspicious when a reference isn’t needed to grade.
What California does is crunch all the data on all schools, such as racial proportions (% white, % black, etc.), % qualifying for school lunch, % mobile (in school for 1 year or less) and 20-30 other data points. It uses a complex regression model (I can lecture for three days on this subject, unfortunately, after which you will pass AP stat and go on to work at the RAND Corp, or you will kill me in a rage) to assign weights to these factors insofar as they determine test scores - because they do, and how.
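A minimal sketch of that kind of demographic regression, with purely illustrative column names (not the actual variables California used):

import pandas as pd
import statsmodels.api as sm

# Hypothetical school-level file; demographic columns are illustrative only.
schools = pd.read_csv("ca_schools.csv")
features = ["pct_white", "pct_black", "pct_hispanic", "pct_asian",
            "pct_free_lunch", "pct_mobile"]

# The fitted coefficients are the weights assigned to each student factor; the
# residual is the part of a school's score that demographics do not explain.
X = sm.add_constant(schools[features])
fit = sm.OLS(schools["api"], X).fit()
schools["predicted_api"] = fit.predict(X)
schools["residual"] = schools["api"] - schools["predicted_api"]
print(fit.params)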
Based on this model they can say that a school belongs within one of a series of a dozen or so bands. Within a band the schools are considered to be dealing with a student population that is similarly difficult to teach. A simplified (way too simplified) example: School 1 is 50% white and 50% black with 30% free lunch, while School 2 is 40% Asian and 60% Hispanic with 50% free lunch (just BS examples off the top of my head). The model would consider them similar schools, and their test scores are properly comparable. It is correct to say that if School 1 has an API (the "grade" for schools) of 700 and School 2 has an API of 600, School 1 is doing much better than School 2. This is further crunched into a similar schools ranking, from 1-9. 9 is good; it means a school is doing very well given the material it has to work with.
School 3, which has a population of 50% white and 50% Asian with 10% school lunch, is NOT a similar school. Its API may be 850, but it can't be compared with School 1 or School 2, because frankly its kids would take the test and give the school an 850 API even if all the teachers spent the school year in a coma.
It is compared with ITS similar schools, like School 4 (30% white, 70% Asian, 20% school lunch), which may have a 950 API. In which case it becomes clear that with just an 850 API, given the kids it has been handed, School 3 is really doing very badly. It would get a Similar Schools rank of 1, and the school board should get on its case.
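Continuing the regression sketch above, a much-simplified stand-in for the banding and the 1-9 within-band ranking (not the actual CDE procedure):

import pandas as pd

# Band schools by the model's predicted API, so each band holds schools facing
# similarly difficult student populations, then rank actual API 1-9 within the
# band: 9 means near the top of its band, 1 means near the bottom.
schools["band"] = pd.qcut(schools["predicted_api"], 10, labels=False)
schools["similar_schools_rank"] = (
    schools.groupby("band")["api"]
           .transform(lambda s: pd.qcut(s.rank(method="first"), 9, labels=False) + 1)
)
print(schools[["band", "api", "similar_schools_rank"]].head())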
” ... it means a school is doing very well given the material it has to work with.”
"Doing well" refers to the faculty; the "material it has to work with" is the students.
The students are profiled not on their aptitude but on their demographics. That is not necessarily wrong, but it is wrong to use this to set expectations rather than as a tool for improvement.
Looks like somebody came up with a multi-variable equation that works over a huge sample of students with a tolerable +/- error, one that predicts the statistical (Pareto) distribution of performances within a given clump.
So stupid people are compared to other stupid people, and we can point out the brightest of the stupid. But they're not compared to smart people because that would be racist. Furthermore we can feel good that we did a good job because our clump of stupid students performed as predicted. So we're good.
Good grief.
This is the ‘soft bigotry’ GWB introduced us to.
Well, yes unfortunately that is reality.
A school really is its students much more than its teachers.
No set of teachers is going to overcome all the personal defects of a given set of students. The difference between the best and the worst possible sets of teachers would make only a marginal difference; that is apparent in the data. Nowhere in California, among thousands of schools, is there a magic solution that substantially overcomes student factors. That tells me there isn't one in the US or the world.
And there is no way to identify aptitude other than demographics. There is a limit to the set of data available for this sort of thing, some institutional and some conceptual. We are not permitted to use IQ scores for this, though they would be a huge factor. And what number can we attach to the different levels of drive, ambition and diligence in hounding kids to finish homework that distinguish parents? There is no number to attach to that.
This is a tool for improvement, done right, even with its imperfections.
With this one can identify superior management, superior techniques and superior curricula, and, again if done right, replicate them in the places that are not doing well and raise the level across the system.
We are not pointing out the brightest of the stupid, we are pointing out, hopefully, the best taught of the stupid, because the others exactly like them aren’t doing as well.
This HAS to be done in order to identify better techniques, management, etc., otherwise it would be buried in the student driven performance noise.
This is a proper engineering approach to the situation, in order to produce gradual improvement, and I can't imagine a different one for this sort of problem.
The biggest failure here, in California and elsewhere, is that there is little institutional will to implement policies drawn from the data, such as curriculum and teaching techniques.