Spurred, in part, by Drugmonkey's post, I have been thinking further about the analysis I did relating the number of publications to the number of authors per publication. I realize that I did not fully grasp the implications of my results. A key question is whether an increased number of publications per investigator can be accounted for by an increased number of authors per publication.

Two limiting cases can be considered. In the first case, the average number of authors per publication would be essentially constant, regardless of the number of publications by a given investigator. This case is ruled out by the data presented, which show that the average number of authors is positively correlated with the number of publications, with a correlation coefficient of 0.47.

In the second case, the average number of authors per publication increases directly with the number of publications. In this case, the plot of the number of publications weighted by 1/the number of authors versus the number of publications (the plot highlighted by Drugmonkey) would be a line with slope 0. However, the trend line in the plot has a substantial positive slope. Previously, I focused on the fact that the correlation coefficient for this plot is large (0.83). However, this is not really the point. It is not surprising that the weighted number of publications is relatively well correlated with the number of publications. It is the slope of the line that conveys the information.

Simulations suggest that the slope of the line in this plot is close to what one would expect if the number of authors per publication were constant. Another way to see the same point is to consider the average number of authors for investigators with the smallest and largest numbers of publications.
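The logic of this limiting case can be sketched with a small simulation. This is a minimal illustration using hypothetical publication counts (the sample size, range, and the value of 5 authors per paper are assumptions for illustration, not the real dataset): if every paper had the same number of authors, the weighted-publications plot would still have a nonzero slope, namely 1/(authors per paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical publication counts for 200 investigators (illustrative only).
pubs = rng.integers(5, 60, size=200)

# Limiting case: every paper has the same number of authors.
authors_per_pub = 5.0

# Weighted publications: each paper counts as 1/(number of authors).
weighted = pubs / authors_per_pub

# Slope of weighted publications versus publications.
slope, intercept = np.polyfit(pubs, weighted, 1)
print(round(slope, 3))  # 0.2, i.e. 1/authors_per_pub
```

The point is simply that a constant author count yields a line through the origin with slope 1/(authors per paper), not slope 0; slope 0 is what the author-inflation limiting case would predict.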

20 investigators with the fewest publications: average number of publications, 9.1; average number of authors per publication, 4.6

20 investigators with the most publications: average number of publications, 57.4; average number of authors per publication, 7.3

Thus, while the number of publications increases by a factor of 57.4/9.1 = 6.3, the average number of authors per publication increases by a factor of only 7.3/4.6 = 1.6.
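A quick check of this arithmetic, using the group averages as stated above:

```python
# Averages reported above for the two groups of 20 investigators.
pubs_fewest, pubs_most = 9.1, 57.4
authors_fewest, authors_most = 4.6, 7.3

pub_factor = pubs_most / pubs_fewest            # ratio of publication counts
author_factor = authors_most / authors_fewest   # ratio of author counts

print(round(pub_factor, 1), round(author_factor, 1))  # 6.3 1.6
```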

These data do not support the notion that the increased number of publications is due primarily to an increased number of authors per publication.

As I posted subsequently, a major factor contributing to the number of publications is the amount of support that each investigator has been able to garner.

"do not support the notion that the increased number of publications is due primarily to an increased number of authors per publication"

My point was not that this was the primary driver but rather to ask whether there was evidence that each additional author, on balance, made a contribution. That is, can we determine whether more science was conducted than would have been accomplished by dropping some of the authors? You are assuming each author contributes the same amount ("directly"), but I was not; I was merely asking if there was evidence of some contribution.

I think that the fact that the slope of the line is not zero is the evidence for this, as is the appearance of a linear relationship. We might hypothesize that past a certain size of author list the curve would flatten to zero, but this doesn't appear to be the case.

I did not mean to misrepresent your position.

I think the results suggest fairly strongly that "honorary authorship" is not occurring to any appreciable extent within this set of publications, and so is not inflating publication numbers. It is not just the fact that the slope is non-zero, but also that the magnitude of the slope is what you would expect without any appreciable authorship inflation.

I do not think this depends on each author making the same contribution to any given publication. This is clearly not the case.

This is just one of those things in science careers discussions that pops up, people seem *very* emotionally invested, and I can't understand the problem. Truly non-contributing authorships seem a tiny anecdotal issue to me. But some people appear to think it a huge systematic ethical conundrum of science. Your data speak to this.

For example http://drugmonkey.scientopia.org/2011/11/04/wherein-lies-the-harm-of-so-called-courtesy-authorships/

I have certainly seen examples of "courtesy authorships" in my day, but I think they are relatively rare these days for a variety of reasons, and even rarer among folks at this career stage. I agree: these data do not support the idea that this is a widespread phenomenon (if it occurs at all) in these publications.