Suppose the pool of readers of Road & Track, for example, consisted of owners of performance-oriented cars and was not proportionally representative of the public as a whole... would that be biased? Of course.
Therefore, I continue to view the CR data more as a reflection of the views of its own readership.
TagMan
That will result in more Camry responses, for example, but how will that 'bias' the, say, Malibu responses? They're not taking votes on which cars the readers like; they're asking about repair histories.
Let me sum it up this way, before I depart this thread... The very nature of a skewed non-representative pool is inherently biased, IMO.
TagMan
However, that many seem to focus exclusively on reliability is probably CR's fault, in part. They really should do more to clarify what the real difference is between "good", "average", and "below average" reliability.
And as discussed before, they never will. Why not? Because if they published the actual data we would most likely find, as we do with other industry surveys, that there is not a heck of a lot of difference anymore. JD Power, whether or not you give it credence, shows that at 3 years the vehicles do not differ by much.
Yes, between Buick (1.5 problems per car) at the top and Land Rover (4 problems per car) at the bottom there is a good difference (~2.5 problems per car), but Toyota (1.8), Honda (1.7) and GM (~2.2) are all right near each other. A difference of 0.4 problems per car is not a very big number, and they get closer to each other each year.
Why subscribe to a magazine that tells you the volume brands are all about the same in reliability??
http://www.jdpower.com/corporate/news/releases/pdf/2007130.pdf
For example, in the April 2006 auto issue there is a chart of "problems" per 100 vehicles by make over time. At the 3-year point, which is what JD Power's dependability data is based on, CR shows Honda and Toyota at 25 and average (Ford) at 50. Despite being a factor of 2, I would not call this a big difference either.
In another place in that issue, you can see a relative ranking which shows most makes being pretty close. Toyota is 50% above average, while VW is 30% below...again I don't see this as a big difference.
People also miss the fact that the consistent high reliability has been restricted to Honda and Toyota. Many mistakenly think this has applied to all Japanese makes.
I'm not even a subscriber, yet I am aware of all this, as well as the recent praise they have given Ford for their improving reliability.
The data from CR surveys, therefore, is representative of CR's survey participants only. It's quite simple. They do not represent the rest of the population as a whole.
Yet, consider that the survey data from CR is highly influential!! How interesting.
Biased? Of course... just as any other limited or focused sampling would be. It's the nature of the beast, an inherent attribute of the process.
Exactly. Consumer Reports has a very high readership, especially when it comes to shopping for cars. For this reason, their survey will be the most scientific of any consumer survey, including their surveys on reliability and repair records for cars. That's why Consumer Reports is so influential -- it's the surveys turned in by product owners that consumers pay attention to, not Consumer Reports' highly biased, anti-American, anti-domestic road tests.
I have always felt that Consumer Reports' "expert" road tests are biased, not the reliability and repair data gathered from their reader base.
As far as vehicles are concerned, Consumer Reports is good for one thing and one thing only: the reliability and repair surveys submitted by their readers. Their highly biased "expert" automotive tests, on the other hand, can go to H-E-double hockeysticks as far as we're concerned.
Some would argue that the spaceship interior of the new Civic makes it a worse car than the Civic it replaced, since the former Civic had exemplary ergonomics whereas the Aliens who redesigned the new Civic made the new Civic interior ergonomic only for their extremely long arms, 3.6 fingers, enormous alien heads and googly-eyes.
Consumer Reports has a readership that is high income (over $100k), largely on the two coasts, largely white and affluent. Am I surprised that a survey of that demographic prefers Hondas and Toyotas? Absolutely not. It doesn't bother me one iota. In fact, insofar as it depresses the value of domestic sedans, it really benefits my family when it comes time to pick up reasonably priced transportation. I have made a career of selecting lower-rated vehicles and driving them to the end of their life cycle with few major problems.
What I think is funny is how people translate the results of these surveys into "gospel truth" - that a vehicle is good or bad just because CR says it is so. It reminds me of a coworker who always drives a newer $45k Lexus with all the bells and whistles. He asks me to locate a $3k older vehicle for his kid, who will be driving later this year. I find him a Ford Escort, low mileage and in very good condition - the typical "Grandma to church and the bank" car. He takes a ride in it and says this car is junk because it "doesn't ride like the Lexus." No kidding, and McDonald's doesn't match up with Charlie Trotter's.
When I need input on a vehicle, I generally invest in the latest Phil Edmonston Lemon-Aid guides and the latest guide from the APA in Canada, as they have been the most reliable guides to the problems found in used vehicles.
In this:
Texases is correct, of course. There would be no import or domestic bias in the answer to, "How often, if at all, did your turn signals fail to operate correctly?"
Again, Consumer Reports reliability data should be relied upon, with the exception of vehicle satisfaction (the "So just how satisfied are you with your new Honda?" question).
The Vehicle Satisfaction part of the survey can't be relied upon, since some people will make a car purchase based solely on political grounds. I believe the Toyota Prius is a car that is purchased less for the gas saved by its hybrid technology and more for the political, Greenpeace statement it makes. For this reason and this reason alone, the Prius owner will rate the Prius the highest, and he does. Just look at the Vehicle Satisfaction result for that car if you do not believe me.
A very savvy shopper, gentlemen.
Someone once asked Marilyn Vos Savant the significance of scholastic test score results and college entrance test results. She replied by saying, "I don't put much faith in a slightly lower or a slightly higher score on those tests. The only scores that should matter and that should draw attention would be those scores at the very low or very high end of the curve."
Good point. Once schools limit themselves to the top X% of scores for admissions, they find little correlation between academic success and scores for those admitted, because the tests are an approximation, with a wide margin of uncertainty. Just like fine differences in CR scores.
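Purely as an illustration of that restriction-of-range point (all numbers invented, and Python used only for the sketch): simulate noisy test scores and noisy later outcomes driven by the same underlying ability, and the score/outcome correlation looks healthy across all applicants but largely evaporates once you keep only the top slice of scores - the same reason small gaps between CR scores don't tell you much.

```python
# A minimal sketch (hypothetical numbers) of the "restriction of range" effect:
# keep only the top slice of test scores, and the correlation between scores
# and later performance shrinks dramatically.
import random

random.seed(42)

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# Simulate applicants: "true ability" drives both the test score and later
# success, each with its own noise (the test's margin of uncertainty).
applicants = []
for _ in range(10_000):
    ability = random.gauss(0, 1)
    score = ability + random.gauss(0, 0.7)      # noisy entrance test
    success = ability + random.gauss(0, 0.7)    # noisy later outcome
    applicants.append((score, success))

scores, successes = zip(*applicants)
print("All applicants:     r =", round(correlation(scores, successes), 2))

# Now "admit" only the top 10% of scores and look at the correlation again.
cutoff = sorted(scores)[int(0.9 * len(scores))]
admitted = [(s, o) for s, o in applicants if s >= cutoff]
a_scores, a_successes = zip(*admitted)
print("Top 10% of scores:  r =", round(correlation(a_scores, a_successes), 2))
```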
Consumer Reports has a readership that is high income (over $100k), largely on the two coasts, largely white and affluent. Am I surprised that a survey of that demographic prefers Hondas and Toyotas? Absolutely not. It doesn't bother me one iota. In fact, insofar as it depresses the value of domestic sedans, it really benefits my family when it comes time to pick up reasonably priced transportation. I have made a career of selecting lower-rated vehicles and driving them to the end of their life cycle with few major problems.
What I think is funny is how people translate the results of these surveys into "gospel truth" - that a vehicle is good or bad just because CR says it is so. It reminds me of a coworker who always drives a newer $45k Lexus with all the bells and whistles. He asks me to locate a $3k older vehicle for his kid, who will be driving later this year. I find him a Ford Escort, low mileage and in very good condition - the typical "Grandma to church and the bank" car. He takes a ride in it and says this car is junk because it "doesn't ride like the Lexus." No kidding, and McDonald's doesn't match up with Charlie Trotter's.
When I need input on a vehicle, I generally invest in the latest Phil Edmonston Lemon-Aid guides and the latest guide from the APA in Canada, as they have been the most reliable guides to the problems found in used vehicles.
This is only half-true.
There is nothing in Consumer Reports reliability data that indicates a preference toward Honda or Toyota; it's just reliability data sent in by people who answered a questionnaire, that's it.
Secondly, I question your claim about Consumer Reports' reader base ("$100,000+ affluent whites"). I don't know for sure, but it's not mentioned in any of their publications. And even if it were so, we'd expect to see fewer reader surveys covering Honda and Toyota and more covering BMW, Mercedes-Benz and Lexus.
You're correct in picking up a great value in the domestic sedan because its resale value is so low. Just look at the Buick LaCrosse -- an excellent sedan that can be bought for very little money.
You're incorrect in confusing Consumer Reports' reliability and repair records with Consumer Reports' "expert" tests. The former is from an unbiased questionnaire sent to car owners; the latter is a vehicle test from a very biased media corporation.
Do you have support for that? I'd love to know their demographic description.
I have my own opinion about who is still reading CR and who was reading it 25 years ago.
2014 Malibu 2LT, 2015 Cruze 2LT,
Nonsense... when the sample pool itself is already biased in favor of those cars, the result is no surprise.
It's always interesting to see the statement regarding a newer model that suggests "based upon previous survey results by this manufacturer, we EXPECT this new model to be above average"... only to find out later that it wasn't above average after all... just a little more bias sneaking in... using OTHER cars as an indication. Logical, perhaps, but presumptuous and not at all factual.
I'm not suggesting there is no merit in the data, only that it be taken for what it really is, that's all.
Also, over the years, the difference in reliability between many models has gotten so small that a vehicle at the midway point is really barely less reliable than one at the top, unlike the major gaps that existed between different models years ago. This small difference is presented as more significant than it truly is... and that is distorted (biased) as well.
TagMan
Also, any time CR or anyone claims to make predictions about future reliability, they are bound to be wrong every so often. When CR says "based upon past reliability, we expect future reliability to be X", they are basing this on the track record of the model and the manufacturer.
Note how, in their recent Midsize SUV issue, they specifically DIDN'T give a reliability rating to the Toyota Highlander, due to inconsistencies in Toyota's quality as of late. On the other hand, they DID give the Ford Taurus X an average reliability rating based upon its past track record, and the improving quality of Ford's offerings. So much for their bias against domestics, huh?
Now regarding their Road Test rankings, I believe that their written reviews are usually pretty accurate assessments. However, their numerical road test ratings and rankings tend to baffle me. They do not reveal how they come up with these ratings, and they don't explain how various aspects of the vehicle are weighted.
Obviously, every reviewer is going to have their own preferences as far as which features they value over others. Some reviewers are going to value handling and performance, while others are going to value safety and comfort. However, with CR you really don't know where they stand when it comes to which features they value most. This is an important piece of information when you evaluate the rankings of any publication.
For instance, I know that when I read a review in a car enthusiast magazine, I expect that they are going to give a higher rating to a car that is fast and taut. Meanwhile, when I read a review in a parenting magazine, they are going to give more weight to kid-friendly features and safety. With CR, I really don't know WHAT criteria they are using.
In a sense, by definition, all publications' road tests are biased. By biased I don't mean that they have an axe to grind. What I mean is that they have their own preferences and criteria for how they judge a vehicle. In that respect CR is no better or worse than anyone else.
Like I said, I find that their write-ups are usually pretty accurate for the most part. I've read the CR review for most of the vehicles that I have owned or driven, and I can't find much fault with any of them. However, I don't really give much credence to their numerical rankings, since they do not reveal how they are derived.
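For what it's worth, here's a minimal sketch of what an undisclosed weighting scheme could look like. The attribute names, weights, and scores are entirely made up (CR does not publish its formula, which is the whole complaint), but it shows how the same two cars can swap rank depending on weights nobody outside the organization can see.

```python
# Hypothetical weighted road-test score: attributes, weights, and ratings are
# invented for illustration only; this is NOT CR's actual method.
def road_test_score(ratings, weights):
    """Weighted average of attribute ratings (each rating on a 1-10 scale)."""
    total_weight = sum(weights.values())
    return sum(ratings[attr] * w for attr, w in weights.items()) / total_weight

# Two hypothetical sets of priorities:
enthusiast_weights = {"acceleration": 3, "handling": 3, "ride": 1, "safety": 1, "ergonomics": 1}
family_weights     = {"acceleration": 1, "handling": 1, "ride": 2, "safety": 3, "ergonomics": 2}

# Two hypothetical cars:
sporty_sedan = {"acceleration": 9, "handling": 9, "ride": 5, "safety": 6, "ergonomics": 6}
family_sedan = {"acceleration": 6, "handling": 6, "ride": 8, "safety": 9, "ergonomics": 8}

for name, car in [("sporty sedan", sporty_sedan), ("family sedan", family_sedan)]:
    print(name,
          "| enthusiast weights: %.1f" % road_test_score(car, enthusiast_weights),
          "| family weights: %.1f" % road_test_score(car, family_weights))
# The two cars swap rank depending on the (hidden) weights -- which is why a
# numerical ranking means little without knowing the criteria behind it.
```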
I had no intention of suggesting that there is no legitimate value to the data derived from CR's reliability surveys. There is. The data is valuable and highly influential as well. The bias I talked about is not enough to throw away the baby with the bath water, and I hope that was not the impression I gave. You are right when you suggest that it is easier to find bias in tests than in surveys... but that shouldn't mean it doesn't exist in surveys. The line between objective and subjective isn't always as sharp as people would like to think it is.
Just so we are clear, the bias is inherent in the sample pool... which is CR's own loyal following. What is interesting here is that the respondents largely, though not totally, reinforce their own purchase behavior, since what they bought is frequently what CR recommended in the first place; they take CR's recommendations to heart. It's a vicious circle, in a sense.
Anyway, moving on from that, you are right that there is bias in all the different publications' results. It is unavoidable to a degree. It also brings up the inconsistencies among similar tests and surveys. What factors could cause different results? Yet, so often we see different results.
In other words, why do we see very different test data on the same car from different publications? Perhaps no two cars are exactly equal? Drivers are different, and conditions are different. But sometimes the data results are too far apart to be attributed to normal factors. Is it bias? Was it deliberate? What is the reason?
Why do two companies conduct similar surveys and get different results?
It's simply inherent in the process. These inconsistencies themselves are proof that there are "distortion" and "bias" factors, among others, in the tests and surveys. Otherwise the results would be the same. Best to go with the consensus, IMHO.
Hope that is clearer.
TagMan
Actually, if anything I would claim "American car bias!" for CR in ranking the Focus as high as 2nd in the summer of 2007, above cars like the Mazda3, Sentra, and Elantra. All are superior to the Focus, at least prior to the 2008 Focus, IMO. I haven't sat in or driven the tweaked 2008 Focus so I can't comment on that car.
You are right about one thing, though, re the spindly aliens: according to a long-legged co-worker, the Civic fits him better than some mid-sized cars--lots of leg room up front.
Ah, but it's not a totally unbiased questionnaire. The entire questionnaire hinges on a value judgement by the respondents; to wit, the survey asks about problems that are "significant." What does "significant" mean? That is in the opinion of each person answering the survey. Hence, all in all, it's a subjective survey, because it reports only on problems deemed "significant" by the respondents. From participating in this forum for many years, I know that some folks think a burned-out headlamp is a big deal, while others think that having to take the car in for a recall and fix for a transmission problem that could cause transmission failure and a crash is no problem at all because the manufacturer took care of it at no cost. :surprise:
That being said, I still find value in CR's reliability survey, due to the large sample size and the fact it goes back so many years. It's not a perfect survey, but what else is perfect?
Nothing is perfect but a relatively small sampling of randomly selected owners would still be far more reliable, statistically speaking, than a very large nonrepresentative sampling of owners. Beyond that, reliability is in the eye of the beholder.
tidester, host
SUVs and Smart Shopper
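To put a rough number on tidester's point - assuming, purely for the sake of illustration, that the self-selected pool really does differ on the thing being measured, which is of course the very point in dispute in this thread - here's a minimal sketch showing a small random sample landing closer to the population average than a much larger sample drawn from a skewed pool:

```python
# A minimal sketch (all numbers hypothetical) of sample size vs. representativeness.
import random

random.seed(7)

# Pretend the true owner population averages 2.0 "significant" problems per car,
# while a self-selected subscriber pool happens to skew toward owners who report
# fewer problems (say, averaging 1.5). These figures are invented.
population  = [random.gauss(2.0, 1.0) for _ in range(100_000)]
biased_pool = [random.gauss(1.5, 1.0) for _ in range(100_000)]

small_random_sample = random.sample(population, 100)       # 100 random owners
large_biased_sample = random.sample(biased_pool, 10_000)   # 10,000 subscribers

true_mean = sum(population) / len(population)
print("True average problems per car:  %.2f" % true_mean)
print("100 random owners estimate:     %.2f" % (sum(small_random_sample) / 100))
print("10,000 skewed-pool estimate:    %.2f" % (sum(large_biased_sample) / 10_000))
# The huge sample nails its *own* pool's average, but that average isn't the
# population's; the tiny random sample lands much closer to the truth.
```

If, on the other hand, the subscriber pool does not actually differ on the measured quantity, the big sample is just as good - which is essentially the counterargument raised later in the thread.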
Yet the EPA got 18/24 MPG for the import with mixed driving at 20 MPG, and 17/26 for the domestic with mixed driving at a higher 21 MPG.
Can someone please explain why Consumer Reports blatantly lied about the import's MPG figure, yet was honest about the domestic's MPG figure?
link title
GM spent a whole lot of manpower trying to figure out how CR weighted the various vehicle attributes toward a final recommendation. Last I knew they never could figure it out. They would get it close for a number of vehicles and then some others did not follow the same equations.
That's because CR throws in their own opinion of the cars to help "weight" the meaning of the results from the questionnaire.
Re: Meaning of questionnaire results. Again, the method of "sampling" used by CR is to include a questionnaire to some or all of their subscribers. It's up to the recipients to decide whether to fill out and return the questionnaire. There may or may not be personal motivation - pride, or anger at the car they allegedly own - to fill out the questionnaire.
With JD Power the recipients are owners; I assume they get their data from the 48/50 states' new-car registrations or the manufacturers themselves. With CR, they don't know whether the recipient actually owns a 2008 Malibu, a 2006 Civic, etc., or not.
The methodology and interpretation methods are very anecdotal at CR, to be generous.
Reading their car group tests and reviews carefully, like checking a reference for a potential employee, they sometimes show their preference. For example, the Avenger has a huge blind spot in the C-pillar area, and they show a picture to illustrate it. The picture of the Accord has been positioned so as not to show the Accord's C-pillar blockage and A-pillar size.
Hmmmm.
Having gone through several survey setups used locally when a combined fire and EMS department was to be split, and having called to ask about the entire questionnaire for a survey done by a local "survey" group, it was obvious that the questions and the method were designed to keep the results interpretable in favor of the combined fire and EMS unit, rather than one political entity operating its own service (much more efficiently and effectively) while the other agency - currently the recipient of the tax-money benefit without having to deliver good service times - kept its old system and had to fund it better out of its own money. The representative at the survey group business was all bubbles of knowledge until I started asking about the pattern of the questions. The earlier questions led the respondent into a thinking mode for the next question.
I think someone, somewhere, posted a link to an alleged copy of the CR questionnaire. I'll see if I can find that link.
2014 Malibu 2LT, 2015 Cruze 2LT,
Please explain how this works. Why would 1,000 CR Malibu owners give a different answer, on average, than 100 'randomly selected' owners?
I don't understand how this would work. Let's say I am a CR subscriber and own Chrysler and Ford products. Do I beat them up in my survey of "significant" problems because other subscribers typically own Hondas and Toyotas? :confuse:
Refer to post #223.
TM
BTW, when are you going to respond to post #196?
There is no explanation; domestic owners are no more discriminating than import owners.
The litmus test was the recent problems Toyota has been having with their Camry and Tundra: When these two vehicles had some minor engine/drivetrain problems, the owners responded by downgrading these cars from "Excellent" to "Below Average."
For this reason, we can conclude that there's no disparity between domestic survey results and import survey results.
"More than most cars, the S2000 efficiency is largely based on how you drive. If you keep the RPMs low (under say 5K), the CR numbers might be low. If you hit the fun range, the mileage drops considerably."
Now I already showed that the EPA got 21 MPG mixed driving in the domestic coupe and 20 MPG mixed driving in the import coupe.
I also showed that Consumer Reports got 23 MPG mixed driving in the domestic and a whopping 25 MPG in the import.
That means, Consumer Reports deviated from the EPA by this much:
Consumer Reports 23 MPG - EPA 21 MPG = 2 MPG deviation for the domestic.
Consumer Reports 25 MPG - EPA 20 MPG = 5 MPG deviation for the import.
"PMC THE FOURTH, YOU MEAN TO TELL ME THAT CONSUMER REPORTS DEVIATED 5 MPG IN THE IMPORT, YET ONLY 2 MPG IN THE DOMESTIC?"
Yes, manamal, according to both the EPA and Consumer Reports, that's what I'm telling you. In fact, Consumer Reports got 25 CITY/30 HIGHWAY for the S2000. Everyone else got on average, 5 MPG less on that car [see footnote].
"BUT PMC, the engine in the Honda S2000 is so radically different from other 4-cylinders that the EPA, ConsumerGuide, Automobile, Car and Driver, EVO magazine and Road and Track ARE ALL WRONG AND CONSUMER REPORTS IS THE ONLY ONE THAT'S RIGHT!
"PMC, I believe this from the bottom of my heart. It has to do with the S2000's 8,000 RPM redline, I think. That's what must be what's causing Consumer Reports to return such favorable MPG figures for this import, while everyone else -- including the S2000 owners -- are reporting MPG figures for the S2000 that are far lower and more in-line with the EPA estimates."
"Wham-bam, thank you, ma'am!" said Jeffery Skilling to his investors...
[footnote]
ConsumerGuide gets less than 25 MPG highway for the S2000. Edmunds gets 20/25. Automobile and Motor Trend get 20/26.
Consumer Reports gets a far higher 25/30 for the S2000 because the Honda S2000 is an import.
There is just no other explanation.
I'll bring you folks my next C/R Bust next time I'm at the supermarket. Until then.
This was not due to Consumer Reports listing the Honda Civic as their best small car for 2006; that came in Feb 2006, with the introduction of Honda's redesigned car.
The older, not-yet-redesigned Civic edged out the Focus, and not because of the Focus's poor offset crash test results as stated here; if that were the reason, we'd see the Civic's crash test results from Consumer Reports as well.
Since we don't see a fair comparison of offset crash test results between the Civic and the Focus in early 2005, we can conclude that Consumer Reports scored the Civic at 71 and the Focus at 70 because of an anti-domestic bias.
The owners are not, but the mainstream media sure is.
Toyota can have engine sludge problems and Honda can have bad odometers, and it's a blip on the radar that gets buried on page 32 of the local section, between the obituaries and the personal ads.
Let a domestic have a recall and it is front-page, above-the-fold news.
I promise you that if, for instance, Ford had had a bunch of bad odometers, the media would have been screaming for all of those units to be hit with a branded title for unknown miles.
Instead they let Honda quietly settle a class-action lawsuit and it all goes away.
Why? Because all of the talking heads have been bragging on the imports for so long that anything bad makes them look stupid and bruises their egos.
Now that Toyota is getting into the truck biz, they are fixing to get a good dose of it too. They are no longer the nice little car maker that produces nothing but 30 MPG cars. Now they are just like the Big 3 domestics and produce those awful trucks that get 15 MPG.
I know you stated that, I am trying to verify it. What specific issue was this in?
Through my library, I have online access to electronic copies of all but the three most recent issues. I tried to find this but a search for "Civic" and a search for "Focus" turned up nothing in late 2005...just the May article and then Feb 2006.
The misperception regarding domestic/import reliability/unreliability is rampant.
If you told most folks that Buick recently received reliability grades that were essentially equivalent to Lexus, what do you think their reaction would be? Try it, and you'll see for yourself. Many don't believe it, and even after you tell them, they think it was a quirk.
In fact if you simply asked most folks what their opinion is before knowing any facts at all, it would become apparent that many have the preconceived idea that Japanese cars are better than American cars.
Now, that said, the domestics have themselves to blame for a large part. Ralph Nader blew the whistle on them long ago, and in some sense, they have never fully recovered from the reputation of building unsafe junk as compared to those imports... particularly Japanese imports... which also have a hangover reputation... allowing them to benefit unfairly from publicly biased favorable views that are engrained in a large percentage of the population. Does CR perpetuate this idea?
To some extent there is the strange perception that the Japanese can do no wrong, and concurrently they can get away with a lot... while the domestics can do very little right, and can't get away with much of anything. There should be no doubt that the domestics are held to a different standard by a large percentage of the population, and clearly by the media.
That difference in "standard" affects any survey that is taken. What becomes a "significant" issue for a domestic car is, in fact, perceived differently than what is a "significant" issue for an import. A very interesting perceptual (and pre-conceptual) difference. And it gets down to the main problem that we've been talking about here. The bias. It's there. It's visible. It's real.
TagMan
The one test that bothered me was the LaCrosse; the mileage they got for that car was totally unreasonable for the motor and vehicle weight. I would like to know more about their methods for testing on the loop that they use. I wonder if they use multiple drivers? I wonder if they repeat the test on different days? I wonder if they drive the car atypically - in other words, do they try to drive the LaCrosse like an Acura, giving lower gas mileage?
I'm not necessarily trying to impute evil intentions on their part. I'm wondering how it may actually work when and where they do their mileage testing.
2014 Malibu 2LT, 2015 Cruze 2LT,
TM
BTW I could really care less what CR says about anything, I just enjoy the conversation.
LOL... next thing you know the vehicle Monroney labels will have the CR fuel economy ratings instead of the EPA ratings.
Is anyone going to suggest that CR's ratings are conducted and derived more accurately than the EPA's '08 ratings?
TagMan
I would say the biggest misperception is that everything Japanese is reliable. For some reason people extend Honda and Toyota reliability data to other makes that happen to come from Japan. From CR's own published chart (April 2007), Nissan comes in 28% below average; this is lower than Chrysler, Dodge, Chevrolet, Pontiac, Buick, Ford, GMC, Lincoln, and Mercury. They come in ahead of only three American makes: Saturn, Cadillac and Jeep.
True... and it even extends beyond cars... with the misperception that some of the Japanese Consumer Electronics are better than they really are.
We do see the way that misperception assists the less-reliable Japanese marques, because they benefit just by association.
I give lots of credit to the Koreans for achieving so much in such a short period of time (Hyundai, Samsung, etc.)... then again, they are Asian... aren't they, grasshopper?
What do you think the early perception will be of the Chinese cars? Reliable? Cheap? Or cheaply made?
TagMan
These charges of bias by CR against American cars are understandable, though. It's a lot easier to say that an organization is biased and lies about its test results because of that bias, rather than acknowledging that some cars that come from outside the U.S. are better than U.S. cars.