<VV> Consumer Reports bias, was: Politics on Virtual Vairs

Bill Elliott corvair at fnader.com
Fri Mar 12 08:12:19 EST 2010


That's easy.

The performance testing they do in their lab has nothing to do with the 
reliability ratings, just the performance ratings. But given the sometimes 
different performance ratings of identical products, one has to question 
whether the differences are due to manufacturing variability or testing 
variability. BUT, their reliability data certainly weighs into whether 
something is CR "recommended" or not, so people read that as being part of 
the performance rating.

But Jeff's point is proven by survey-driven reliability data where two cars 
which came down the same assembly line and vary only in labeling are 
sometimes rated dramatically differently, with the American-labeled product 
reliably far behind the identical Japanese-labeled product.

A big part of it is expectations... you _know_ American cars are unreliable 
and that Japanese cars are reliable, so the same failure in both cars 
reinforces your belief about American cars, but you think it must be just an 
unusual incident on the Japanese car. So faults tend to be underreported on 
Japanese cars and overreported on American cars.

There is also the difference in how the cars are maintained (American cars 
tend to be maintained much more poorly by their owners than Japanese cars) 
and in some cases (brakes for instance) what the Japanese car owner sees as 
routine maintenance the American car owner sees as a repair.

Anyway, if you've taught statistics for 30 years, you know that voluntarily 
submitted survey data is neither scientific in nature nor really 
statistically quantifiable.  You have to question what motivates someone to 
send in the survey. From what I've seen on ownership attitudes, there is 
also likely a tendency for Japanese car owners to send in clean reports to 
"brag" about their car's reliability while American car owners are more 
likely to send in only negative reports... not being motivated to return the 
survey if they've really had no issues.

If you've done one of these surveys, it's also not difficult to see how some 
questions could be subjectively answered differently given the same actual 
failure... so the data is not purely objective either.
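That selection effect is easy to see in a toy simulation. The numbers below 
are purely hypothetical (the response and failure rates are made up for 
illustration, not taken from any CR data): two fleets of cars have the 
identical true failure rate, but one group of owners returns the survey 
mostly when they have a complaint, while the other is more inclined to mail 
in a clean report.

```python
import random

random.seed(42)

TRUE_FAILURE_RATE = 0.10   # identical cars, identical true reliability
N_OWNERS = 100_000

def surveyed_failure_rate(p_return_if_failed, p_return_if_clean):
    """Failure rate as it appears in the surveys actually returned."""
    failures_reported = 0
    surveys_returned = 0
    for _ in range(N_OWNERS):
        failed = random.random() < TRUE_FAILURE_RATE
        p_return = p_return_if_failed if failed else p_return_if_clean
        if random.random() < p_return:
            surveys_returned += 1
            failures_reported += failed
    return failures_reported / surveys_returned

# Hypothetical response rates: "American-badged" owners return the survey
# mostly to complain; "Japanese-badged" owners are happier to brag about
# a trouble-free year.
american = surveyed_failure_rate(p_return_if_failed=0.6, p_return_if_clean=0.2)
japanese = surveyed_failure_rate(p_return_if_failed=0.3, p_return_if_clean=0.4)

print(f"true failure rate:      {TRUE_FAILURE_RATE:.1%}")
print(f"'American' survey rate: {american:.1%}")
print(f"'Japanese' survey rate: {japanese:.1%}")
```

With these (invented) response rates, the "American" car shows roughly a 
25% failure rate in the returned surveys and the "Japanese" car roughly 8%, 
even though both fleets failed at exactly the same 10% rate. The survey 
measures who mails the form back, not just what broke.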

You likely recall the studies on Nielsen TV ratings and how dramatically 
different the results were between diary submissions (what people report 
they were watching) and the results of actual Nielsen boxes attached to the 
sets.  The diary submissions consistently and significantly overreported 
watching PBS, educational programming, and news programs while the objective 
data showed a lot more "trash tv" being consumed... people reported what 
they thought they should be watching (and maybe even convinced themselves 
they were) instead of what they actually were... the CR surveys are more 
like those diaries than the Nielsen boxes.

Other factors (attitude of dealer repair shops, availability of loaner cars, 
ease of warranty claims) play a role as well, but have little to do with the 
actual reliability of the car, just the owner's attitude about the 
reliability. And that's what Consumer Reports is really measuring.

(Don't get me wrong, I think actual reliability plays a role in the attitude 
and the ratings are probably "directionally correct" even if quantitatively 
questionable.)

There is the old joke that three sports car owners have the same 
transmission failure. The Jag owner brags that he had to spend $4000 to get 
his transmission repaired and that is the price of driving such a beautiful 
car, the Corvette owner complains that he had to spend the same and that's 
the price of driving an unreliable American car, but the Porsche owner 
quietly has his car towed to the dealer in the dead of night and tells him 
"Please don't tell anybody I broke the car" because as everyone knows, 
Porsches never break and are more reliable than Jags or Corvettes...

Bill

----- Original Message ----- 
From: "Ken Wildman" <k-wildman at onu.edu>
To: "'Geoffrey Stozek'" <gjjs at att.net>; <VirtualVairs at corvair.org>
Sent: Friday, March 12, 2010 07:19
Subject: Re: <VV> Consumer Reports bias, was: Politics on Virtual Vairs


Could you explain how cognitive dissonance fits in here?

BTW, the only consumer input is from their annual survey which looks at
reliability data - how many repairs, length of ownership, etc.  They do
actual testing in their labs.  Having taught statistics for over 30 years I
think that few in the general public understand meaningful statistics.

Ken

-----Original Message-----
From: virtualvairs-bounces at corvair.org
[mailto:virtualvairs-bounces at corvair.org] On Behalf Of Geoffrey Stozek
Sent: Friday, March 12, 2010 12:32 AM
To: VirtualVairs at corvair.org
Subject: Re: <VV> Consumer Reports bias, was: Politics on Virtual Vairs

I gave up on Consumer Reports years ago after I discovered that it was
nothing more than consumers' opinions which are substantially influenced
by things like cognitive dissonance. I don't want opinions. I want facts
or meaningful statistical data. Consumer Reports ain't it.
Jeff Stozek
NE IL
