The power NOT to get too agitated!!!

I really didn't want one of my first few postings to be a complaint, or me getting agitated. But this is something that has been a HUGE annoyance of mine for a while…and it's good to get things off your chest, right?!!? What follows highlights what is not perfect about the scientific and publication process: namely, the pressure to publish, the review process and, more importantly, the overarching scientific process.

The need for power

A lot of my students want to measure power. Well, who in the strength and conditioning (S&C) community doesn't? It turns out not many; searching "power" in the Journal of Strength and Conditioning Research pulls up 2,200 results!!! This isn't surprising. The S&C environment is all about improving the physical make-up of the athlete. So anyway, my students want to measure power within their testing. That's great; they are S&C master's students after all, so this makes sense.

Power can be defined in a number of ways, but one of the more common definitions (and the one we'll use here, for reasons that will become obvious) is the product of force and velocity:

power = force * velocity

[Photo: weightlifter Jason Robert, Clean and Jerk, 192.5 kg, Silver Medal, 1990 Commonwealth Games, Auckland, New Zealand (Photo credit: Wikipedia)]

Measuring power

Based on the idea that power is the product of force and velocity, we can deduce that if we know force and velocity, we can calculate power (power = force * velocity). This deduction is correct. However, and this is where people sometimes falter, the crucial part is that the force and the velocity must be measured on the same thing!
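To make this concrete, here is a minimal sketch (in Python, with made-up numbers rather than anything from the studies discussed below) of calculating power when the force and the velocity genuinely belong to the same object:

```python
# Minimal sketch with hypothetical numbers: instantaneous power as the
# product of force and velocity measured on the SAME object (the bar).
force_on_bar = 1200.0   # N, hypothetical force acting on the bar
bar_velocity = 1.8      # m/s, hypothetical bar velocity at the same instant

power = force_on_bar * bar_velocity  # W
print(f"Power = {power:.0f} W")      # Power = 2160 W
```

Nothing controversial so far; the problem appears when the force and the velocity come from two different things.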

One example of such faltering is a recent article published in the Journal of Strength and Conditioning Research addressing the effect of interrepetition rest on power output in the power clean (1). The setup the authors used measured ground reaction force from a force plate, and velocity of the bar using a linear position transducer (LPT; a device that attaches a cable to the bar and measures how fast the cable is pulled out of the device as the bar moves). The authors referenced previous work addressing the methodology of calculating power using combinations of LPTs and force plates (2). The resulting setup provided velocity-time and force-time data collected with the LPT and the force plate, respectively.

Importantly, what was not taken into account is that with this setup the authors (1) were measuring the velocity of the bar, but the force of the whole system (lifter + bar). This prevents a valid calculation of power as the product of force and velocity. It might not be so much of a problem in a back squat, where the bar moves with the body, but in the clean the bar moves completely differently to the body. The power calculated in the study is completely invalid!!!
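To illustrate why this matters, here is a rough numerical sketch. All the values are hypothetical mid-pull numbers I have invented for illustration; none of them come from the studies above. It contrasts a bar-only power calculation with the mixed calculation of whole-system ground reaction force multiplied by bar velocity:

```python
# Hypothetical mid-pull snapshot of a clean; all numbers are invented.
g = 9.81                  # m/s^2, gravitational acceleration
bar_mass = 100.0          # kg
lifter_mass = 90.0        # kg
system_mass = bar_mass + lifter_mass

bar_velocity = 1.8        # m/s, from the LPT (bar only)
bar_acceleration = 6.0    # m/s^2, e.g. from differentiating the LPT signal

# Bar power: force applied to the bar multiplied by the velocity of the bar.
bar_force = bar_mass * (bar_acceleration + g)        # ~1581 N
bar_power = bar_force * bar_velocity                 # ~2846 W

# Mixed calculation: whole-system ground reaction force multiplied by bar
# velocity. The ground reaction force reflects the acceleration of the
# lifter's body AND the bar, so pairing it with bar velocity mixes two
# different "things".
com_acceleration = 3.0    # m/s^2, system centre of mass (hypothetical)
ground_reaction_force = system_mass * (com_acceleration + g)   # ~2434 N
mixed_power = ground_reaction_force * bar_velocity   # ~4381 W

print(f"Bar force x bar velocity:  {bar_power:.0f} W")
print(f"System GRF x bar velocity: {mixed_power:.0f} W")
```

Because the bar and the body accelerate differently during the clean, the two numbers can disagree substantially; which of them you call "power" depends entirely on whose force and whose velocity you pair up, and mixing them gives you neither.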

How did this get published?

That's a good question. And this is where the problems with publication and peer review raise their heads. This paper was published in a peer-reviewed journal, which means (from memory, without checking the specific journal) that at least two "experts" in the field reviewed the paper before it was accepted for publication.

Before I go on, I need to say that overall, the peer-review process is actually very good, and does enable great science to come through (this is worth a quick read). But I digress…

The first issue: the "experts" that review a paper may not actually be the real experts in the field. Commonly, journals send papers out for review to anyone who has already published in that field, or even just in that journal (I've even been sent papers to review by a journal to which I have submitted exactly one article; which subsequently got rejected!!!). Additionally, we academics are busy creatures. We like to say yes to everything, and then realise afterwards that we don't have the time. So that's where students come in handy. It is quite common for the expert reviewers to hand a paper to their PhD student to review (it's presented as a great learning opportunity).

But the authors (1) referenced their methods to another peer-reviewed publication (2). So surely this is good? Yes, that is good…but this previous paper also didn't fully address the same flaw. Interestingly, another paper, by Hori and colleagues (3), was also published in 2007 looking at this methodological issue. The main point of Hori's paper was exactly as above: taking the velocity of the bar and the force of the whole system (person + bar) is a completely invalid way of measuring power when the movement is a clean, i.e. when the bar and the person move differently. However, this article was not referenced in the paper by Hardee (1). It was referenced in Cormie's paper (2)…albeit rather briefly.

So why didn't anyone question this (well, other than Hori and colleagues)?

Again, good question. Surely somebody must have at some point?!? But then there are always some things that find their way through the net. This could have been prevented by:

  • The authors being more thorough in their reading, i.e. finding Hori's paper, which is not cited in (1)
  • The authors paying more attention to the work within what they cite, i.e. Hori's paper as cited in (2)
  • Somebody thinking about the validity of their methods, and not just taking it at face value
  • A more stringent peer-review process – however and by whomever this was reviewed, it was either rushed or not done by an expert in the field.

It’s not all doom and gloom

Since my introduction to the S&C research environment a few years back, I have found it an exciting place to be. There is a huge clash of theoretical scientists and practitioners. We are slowly amalgamating, but whilst we do so there are going to be some errors; practitioners doing research on their own, and scientists trying to train athletes. Science is fun. It's exciting. It draws you in. It seems easy. It seems straightforward. But this is where you fall down. Science is not about quick answers. It's an ongoing process. You don't "just do" science. You need to read, listen and talk to others. You need to learn and understand what you are doing. This is why the majority of good scientists spend so long in education: Bachelor's, Master's, Doctorates, post-docs (although the latter isn't really a qualification, it does involve a senior academic guiding you).

Don't get me wrong, there is some excellent research being published in this area, but there also seems to be some very questionable research too; the above is not an anomaly. Scientific research into the applied S&C field is relatively new. Mistakes and errors will happen. We just need to have the right processes in place to nurture it properly.

References

(1) Hardee, J. P., Triplett, N. T., Utter, A. C., Zwetsloot, K. A., & McBride, J. M. (2012). Effect of interrepetition rest on power output in the power clean. Journal of Strength and Conditioning Research, 26(4), 883-889. doi:10.1519/JSC.0b013e3182474370

(2) Cormie, P., McBride, J. M., & McCaulley, G. O. (2007). Validation of power measurement techniques in dynamic lower body resistance exercises. Journal of Applied Biomechanics, 23(2), 103-18.

(3) Hori, N., Newton, R. U., Andrews, W. A., Kawamori, N., McGuigan, M. R., & Nosaka, K. (2007). Comparison of four different methods to measure power output during the hang power clean and the weighted jump squat. Journal of Strength and Conditioning Research, 21(2), 314-320. doi:10.1519/R-22896.1
