I have to disagree with giving the test no value. The test was performed with the same shooter shooting all the bows. That's a constant at least as reliable as someone walking around a trade show with their own arrows in their back pocket, trying out a bunch of different bows from different bowyers, much like a lot of tradgangers will be doing at the K-Zoo show in a couple of weeks. So it warrants some value in my mind.
I analyzed the data a bit, and being the math and statistics nerd that I am, I think it shows some significance. In my mind it's far and away from being valueless data...
The first thing I did was look at the data and organize it with standard summary values: average, standard deviation, and standard deviation as a percent of the average.
This basically shows how much variation there truly was in the data. As you can see, the standard deviation of actual bow weight across the data set was only 3.95 pounds. Statistically speaking, that's only a 6.83% variation from the average bow weight. In my eyes, that's pretty darn close, considering Mike from Bowhunting World called the bowyers up and said, "Send me a bow, I want to do an article on recurves."
Next I looked at the velocity data. Again, a standard deviation of only 9.57 ft/s across this data set, which is only 5.78% of the average value. Again, very close considering the finger-release, human-powered shooting method of the test.
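For anyone who wants to run the same numbers themselves, here's a rough Python sketch of the calculation. The values in it are made-up placeholders just to show the mechanics, not the actual figures from the article, and I used the population standard deviation; swap in statistics.stdev if you'd rather use the sample version.

```python
# Rough sketch of the summary stats -- placeholder numbers, not the article's data.
import statistics

def summarize(label, values):
    avg = statistics.mean(values)
    sd = statistics.pstdev(values)      # population std dev across the field of bows
    cv = sd / avg * 100                 # std dev expressed as a percent of the average
    print(f"{label}: average = {avg:.2f}, std dev = {sd:.2f}, {cv:.2f}% of average")

# Placeholder values only, to show how the percentages are derived
bow_weights = [55.2, 58.0, 61.5, 54.8, 60.1, 57.3]        # actual bow weights (lbs)
velocities  = [158.0, 171.0, 165.5, 160.2, 175.3, 168.4]  # tested arrow speeds (ft/s)

summarize("Bow weight", bow_weights)
summarize("Velocity", velocities)
```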
To the best of my knowledge, all the bows were strung at the manufacturers' recommended brace heights. So that's a non-issue, as that's how you'd be shooting the bow anyhow. Again, walking through an expo, you'd be picking up bows set up by the bowyer to get the best performance out of that particular model. So I think the brace height differences are a non-issue for this data set.
Next, because of the questionable mix of bows tested with and without string silencers, turn your attention to the right-hand set of data. What I did here was take the standard deviation of velocity from the original test data: 9.57 ft/s.
For any bow that had string silencers on the string during the test, I added that 9.57 ft/s to its original test speed. For any bow that didn't have silencers, I subtracted it from its original speed.
Basically, we all know that shooting a bow without silencers lets the string rebound faster, which makes the arrow faster. So I slowed down the bows that didn't have silencers by one standard deviation and sped up the bows tested with silencers by one standard deviation.
With the new "adjusted velocities," I then recalculated the efficiency value.
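If it's easier to see in code, here's roughly what that adjustment looks like. Again, the bows and numbers below are placeholders, and I'm using speed divided by actual bow weight as the efficiency number only because I don't have the article's formula in front of me; plug in whatever Bowhunting World actually used.

```python
# Sketch of the silencer adjustment and the re-run efficiency calculation.
# Placeholder data; "speed / bow weight" is an assumed stand-in for the
# article's efficiency formula.
import statistics

# (bow, actual bow weight in lbs, tested speed in ft/s, silencers on the string?)
bows = [
    ("Bow A", 55.2, 158.0, True),
    ("Bow B", 58.0, 171.0, False),
    ("Bow C", 61.5, 165.5, True),
    ("Bow D", 54.8, 160.2, False),
]

speeds = [speed for _, _, speed, _ in bows]
sd = statistics.pstdev(speeds)   # this stands in for the 9.57 ft/s figure in the real data

for name, weight, speed, silenced in bows:
    # Silenced bows get one std dev added back; bare strings get one subtracted.
    adjusted = (speed + sd) if silenced else (speed - sd)
    efficiency = adjusted / weight
    print(f"{name}: adjusted speed {adjusted:.1f} ft/s, efficiency {efficiency:.2f}")
```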
From this you can see that 5 of the original top half (6 of 12) stayed in the top half. The only bow bumped out of the top half of the field was the Martin, replaced by the Fedora.
That, in my opinion, warrants some attention and value...
Now, there was another piece written in the same publication on the Bob Lee Ultimate. (You may be aware that I own a new Ultimate, so I'll try not to be biased here...) Mike at Bowhunting World got an Ultimate in his hands and ran the exact same test as on the other 12 bows. An efficiency value of 3.4!
Looking at the lower-left yellow data range, you'll see the Ultimate was roughly 18.5% more efficient than the average efficiency value in the original tests, and about 6% more efficient than the most efficient performer in the original test!
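For what it's worth, those percentages are just straight percent-difference math against the original field. The baseline values below aren't from the article; they're rough numbers backed out of the 3.4 figure and the percentages above, so treat them as approximate.

```python
# Percent-difference comparison of the Ultimate against the original field.
# Baseline values are approximate, backed out of the quoted percentages.
def pct_more_efficient(value, baseline):
    return (value - baseline) / baseline * 100

ultimate_efficiency = 3.4    # from the Ultimate write-up
field_average       = 2.87   # approximate average of the original twelve
field_best          = 3.21   # approximate best performer of the original twelve

print(f"{pct_more_efficient(ultimate_efficiency, field_average):.1f}% above the field average")
print(f"{pct_more_efficient(ultimate_efficiency, field_best):.1f}% above the best original bow")
```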
Now that's impressive!!!
Just my $0.02