Crash tests conducted by the government and insurance industry may be a poor indicator of how safe -- or unsafe -- pickup trucks are, an independent study suggests.
The trucks that performed badly in simulated crashes may be safer than their ratings suggest, and those that performed well may be more dangerous, according to research conducted by economics professors at Virginia Commonwealth University in Richmond. The study appeared in the journal Accident Analysis and Prevention.
The professors compared crash test data from the National Highway Traffic Safety Administration (NHTSA) and the Insurance Institute for Highway Safety (IIHS) with government data on fatal crashes. They concluded that the tests seem to be an accurate gauge of car safety: Cars with higher crash-test ratings show fewer driver fatalities and those with lower ratings show more. But the same does not hold true for trucks.
"Something about trucks makes them different," said David W. Harless, who co-authored the study with George E. Hoffer.
The study looked at changes in crash test ratings for all cars and trucks tested at least twice by the NHTSA from 1987 through 2001 and by the IIHS from 1995 through 2001. It compared the ratings with government data on fatal crashes over the same period to determine whether improvements in crash-test ratings translated to fewer driver fatalities.
The study found that higher crash-test ratings often correlated with fewer driver fatalities for cars, but not for trucks. The NHTSA ranks vehicles on a scale of one star (the worst rating) to five stars (the best). The IIHS uses labels that range from "poor" to "good."
"We had trucks with one-star NHTSA ratings being 3 percent safer than those with five-star ratings," Harless said. "With the IIHS, which was a relatively small sample, trucks with a 'poor' rating had 30 percent less risk [of driver fatality in wrecks] than trucks with a 'good' rating."
To reduce the likelihood of variables such as a vehicle's size and weight skewing the results, the study compared changes in crash data only within vehicle lines and not between different types of vehicles. For instance, Ford F-150 pickup trucks were only compared to other Ford F-150 pickup trucks.
The NHTSA would not comment on the Virginia Commonwealth study.
Russ Rader, an IIHS spokesperson, took issue with the findings. Rader said that the IIHS's own research shows a strong correlation between crash-test ratings and fatalities for all kinds of vehicles, including trucks. "It doesn't make sense that there would be a difference" between cars and trucks, he said. "The things you do to a car for crash safety are the same things you do for SUVs and pickups." For example, all vehicles now incorporate crumple zones and reinforced passenger-cabin structures.
Hoffer said the body structure of trucks may be one reason their crash test ratings failed to correlate with driver fatalities.
Unlike cars, pickup trucks have stout steel frames underneath, often called "ladder frames." Hoffer said a ladder frame might act like a "pronged battering ram" in a real-world crash where a truck collides with a lighter, smaller vehicle. As a result, a truck might withstand the impact of an actual crash better than it does when striking a stationary barrier, as in some crash tests.
Ford safety spokesman Dan Jarvis disputed the study's findings and methodology. He said the idea that a truck's ladder frame has any discernible impact on crash test ratings is "at best uninformed."
Harless cited the Ford F-150 two-door pickup truck as an example of how improvements in crash test ratings do not result in fewer driver fatalities. For the 1992-1993 model years, the F-150 received an NHTSA rating of three stars, Harless said, but the driver death rate was about one death per 10,000 registered vehicles. For the 1994-1996 model years, the F-150 received a five-star rating, but the driver death rate rose to nearly 1.2 driver deaths per 10,000 vehicles.
Independently of Harless, Rader of the IIHS also used the F-150 to illustrate his opposing view that a noticeable improvement in crash test ratings between the 2001 model and the 2004 model correlates with fewer driver fatalities. According to the IIHS's own analysis, the 2001 Ford F-150, rated "poor," had 118 driver deaths per million vehicles per year between 2002 and 2005, Rader said. But the 2004 F-150, with an improved crash test rating of "good," had a much lower driver death rate of 58 per million vehicles per year.
The dueling examples illustrate how confusing crash-test ratings can be to evaluate. In this case, the study authors pointed to the NHTSA ratings for the F-150, not the IIHS ratings. The study also looked at the F-150 in different years than those cited by Rader.
When asked about the IIHS findings, Hoffer said that he did not doubt their accuracy, but that the model years Rader cited were different from those used in his own study.
The results for any one particular vehicle aside, Hoffer said the data for cars and trucks overall offer the most telling insight. He said there was no relationship between improvements in crash test ratings and driver death rates for "the entire population" of trucks they evaluated, which included model years 1987-2001.
The professors concluded that NHTSA and IIHS crash test data shouldn't be as important a consideration for pickup truck buyers as it is for passenger-car buyers, because for cars the correlations between ratings and fatalities are stronger. "The architecture of trucks is so different that it overwhelms any information that the consumer can glean from the crash test," Hoffer said.
Anne Fleming, a spokesperson for the IIHS, said that the institute had no evidence that ladder-frame construction, also called "body on frame" construction, typically used in pickup trucks, has any effect on crashworthiness.
Though Ford's Jarvis also disagreed with the professors' conclusions, he did think the study illustrates something important. "The point is well taken that it's indeed very difficult to correlate [crash-test ratings] and real-world crash events," he said. "But that difficulty arises from several factors, including the complexity of the real-world crash environment, and a variety of non-vehicle factors, like was the occupant wearing a safety belt, or was the vehicle pulling a trailer?"
Charles Territo, director of communications at the Alliance of Automobile Manufacturers, also agreed that directly correlating crash test ratings and fatalities is tough. "We don't know what percentage of accidents involved people not wearing seatbelts, being under the influence, being ejected from their vehicles," he said. "There are so many different factors at play that it's just difficult."
The bottom line, Territo said, is that regardless of any conclusion the VCU study made, crash test ratings are still a valuable resource for consumers, but shouldn't be the only consideration when shopping for a vehicle.