R01-equivalent PIs: 1985-2014

(by datahound) May 28 2015

I recently posted data on the number of unique NIH PIs for all mechanisms listed in the NIH RePORT database.

I have now analyzed data for R01-equivalent grants (primarily R01s but also R23, R29, and R37 (MERIT) awards) as shown below:

R01 PI plot

This shows curves for all PIs (including multiple PIs) and for Contact PIs only. These curves clearly reveal the impact of the NIH budget "doubling" from FY1998 to FY2003 and the subsequent decline due to the worse-than-flat NIH budget over the past 12 years (with the exception of ARRA funding).

The correction for multiple PIs is significant (although, of course, being a PI on a multiple-PI grant likely provides fewer resources than being the sole PI on an award of the same size). Of the 3,564 New (Type 1) R01 grants in FY2014, 771 (about 22%) had multiple PIs.

8 responses so far

The Number of NIH PIs 1985-2014: The Effect of Multiple PIs

(by datahound) May 28 2015

I recently posted a somewhat startling curve showing the total number of NIH contact PIs for all mechanisms in the NIH RePORT database. This showed a drop in the total number of PIs from FY2010 to the present.

As I lay awake thinking about this curve and what it might mean, I thought it might change somewhat if I included all PIs instead of just Contact PIs. Recall that the NIH multiple PI policy only went into effect around 2005.

I was able to examine this point relatively quickly. The results are shown below:

NIH PI Plot wNonContact

 

This shows that the inclusion of all PIs decreases the magnitude of the drop since FY2010.

Some other interesting statistics about non-Contact PIs are:

Total Contact PIs:  216,521

Total PIs listed as other than Contact PI:  11,504

PIs who have never been Contact PI:  2,873

 

10 responses so far

Analysis of Subsequent Years of K99-R00 Program

(by datahound) May 28 2015

I had previously done some analysis of the NIH K99-R00 program for the first two cohorts. I wrote R scripts to assemble information about the R00 and R01 (as well as DP1 and DP2) awards subsequently obtained by K99 recipients and to analyze these results. This time I included precise grant start and end dates rather than simply the fiscal years I had used in my initial analysis.
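To give a sense of the approach, here is a minimal sketch of the kind of script involved (not my exact code; the data frame and column names, such as pi_id, activity, and project_start, are assumptions about how the parsed ExPORTER data might be organized):

# Sketch: for each FY2007 K99 contact PI, find subsequent R00/R01/DP1/DP2 awards
# and express their start dates relative to the K99 start date.
# Assumes a data frame `projects` with columns pi_id, activity, fy, and
# project_start (already parsed as Date); these names are illustrative.
library(dplyr)

k99_2007 <- projects %>%
  filter(activity == "K99", fy == 2007) %>%
  group_by(pi_id) %>%
  summarize(k99_start = min(project_start))

followups <- projects %>%
  filter(activity %in% c("R00", "R01", "DP1", "DP2")) %>%
  inner_join(k99_2007, by = "pi_id") %>%
  mutate(years_from_k99 = as.numeric(project_start - k99_start) / 365.25)

# Number of FY2007 K99 awardees with at least one R01-equivalent award
followups %>%
  filter(activity %in% c("R01", "DP1", "DP2")) %>%
  distinct(pi_id) %>%
  nrow()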

The results for the first K99 cohort (from fiscal year 2007) are shown below. This shows the number of investigators (out of 182 initial K99 awardees) holding K99, R00, or R01 (or DP1 or DP2) awards as a function of time, with the start date of each initial K99 award set to time 0.

2007 K99 Cohort Plot-3

This shows that more than 90% of these K99 awardees transitioned to the R00 phase and that more than 100 of these PIs had obtained at least one R01 (or equivalent) award, as shown previously, but now with more precision about the timing of these awards.

With these scripts in hand, it was straightforward to analyze subsequent K99 cohorts. The results are shown below:

 

K99 Awards Plot

 

This graph reveals that the overall pattern for the K99 phase is remarkably consistent from year to year, with substantial transitions at the end of year 1, a steady decline and then a sharp drop at the end of year 2, and the remaining ~20% of PIs transitioning off the K99 by the end of year 3.

The results for the R00 phase are shown below:

R00 Award Plot

 

Again, the pattern is quite consistent. The fraction of K99 awardees who have transitioned to the R00 phase is approximately 50% at the end of year 2 (since the start of the K99 award) and peaks at between 80 and 90% in the middle of year 3. The curves are different for the FY2010, FY2011, and FY2012 K99 cohorts since they have not yet had time to fully transition, but the curves look quite similar in the regions where they overlap the other curves.

The final curve shows the transition to R01 awards (I also included DP1 (Pioneer) and DP2 (New Innovator) awards).

R01 Award Plot-2

 

Here, the curves differ more from one another. For the first (FY2007) cohort, more than 50% of the K99 awardees have transitioned to R01 funding. More than 40% of the FY2008 cohort have transitioned, but comparison of the FY2007 and FY2008 curves suggests that this cohort is transitioning more slowly or will not reach the same level as the FY2007 cohort. This trend continues with the FY2009 cohort. Of course, these attempted transitions to R01 funding are occurring over the period when the overall number of NIH-supported PIs dropped (as revealed in my previous post). The FY2010 cohort showed an initial burst above the FY2008 and FY2009 curves but has slowed since then. It is too early to say much about the FY2011 and FY2012 cohorts.

The ability to analyze these data in kinetic detail with relative ease allowed some comparisons that were much harder to make in my previous analysis. I am impressed with the continuing development of R by a large open community (especially Hadley Wickham) that is making R an ever-more-powerful tool.

14 responses so far

Analyzing NIH Data with R

(by datahound) May 28 2015

Most of the analysis of NIH data that I have done has been performed using Excel. While Excel does have some useful features, it has many limitations. My son, who, as an actuary, does considerable data analysis for a living, urged me to migrate to a more powerful platform, R, for my analyses. He can be quite convincing, and I have spent time over the past month developing some rudimentary R skills (in part through an on-line course). I am now fully convinced that he was right.

I downloaded all of the data used by NIH RePORTER (from NIH ExPORTER) and wrote R scripts to parse the data into a form that could be easily analyzed in R. The full file has 1,907,841 grant records with readable contact PI numbers for fiscal years 1985 to 2014. These correspond to 216,521 unique contact PIs.
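For the curious, a minimal sketch of this kind of script is below. It is not my exact code, and the column names (FY, PI_IDS) and the "(contact)" tag marking the contact PI on multiple-PI awards are my assumptions about the ExPORTER file layout:

# Sketch: count unique contact PIs per fiscal year from ExPORTER project files.
library(dplyr)
library(readr)
library(stringr)
library(ggplot2)

files <- list.files("exporter", pattern = "\\.csv$", full.names = TRUE)
projects <- bind_rows(lapply(files, read_csv, col_types = cols(.default = "c")))

contact_pis <- projects %>%
  mutate(
    # PI_IDS lists all PIs separated by ";"; the contact PI carries a "(contact)" tag
    contact_pi = str_extract(PI_IDS, "[0-9]+\\s*\\(contact\\)"),
    # Single-PI awards have no tag, so fall back to the lone PI ID
    contact_pi = ifelse(is.na(contact_pi),
                        str_remove(str_trim(PI_IDS), ";\\s*$"),
                        str_remove(contact_pi, "\\s*\\(contact\\)")),
    fy = as.integer(FY)
  ) %>%
  filter(!is.na(contact_pi), contact_pi != "")

pi_counts <- contact_pis %>%
  group_by(fy) %>%
  summarize(unique_contact_pis = n_distinct(contact_pi))

ggplot(pi_counts, aes(fy, unique_contact_pis)) +
  geom_line() +
  labs(x = "Fiscal year", y = "Unique contact PIs")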

As an initial exercise with these data, I decided to plot the number of unique contact PIs as a function of fiscal year. The result is shown below:

Unique PI Plot-2

 

What I attempted as a test of my data analysis skills revealed a striking result. The number of unique contact PIs had grown almost linearly from 1985 to about 2009-2010 (the ARRA years) but then dropped quite sharply from 2010 to 2014. This graph provides much clearer evidence for "the cull" than I anticipated.

Despite this bottom line, considerable work remains to be done to probe this further since this includes a wide variety of mechanisms. With the powerful file manipulation and analysis tools in R, this should be relatively straightforward.

Let the analysis begin!

33 responses so far

Please comment: NIH RFI on "Optimizing Funding Policies..."

(by datahound) May 07 2015

NIH released an RFI on April 2 on Optimizing Funding Policies and Other Strategies to Improve the Impact and Sustainability of Biomedical Research. Responses are due by May 17th (10 more days).

Please take the time to go and provide input. My recent post on the potential emeritus award RFI should make it very clear that your input is necessary if you don't want the response to be dominated by those with quite different perspectives from yours.

Here is the link for the RFI, which lists the comment areas to get your thinking started. (Each comment area is limited to a maximum of 500 words.)
Now's your chance. There is really no excuse for not contributing your thoughts.

7 responses so far

NIH "Emeritus Award" RFI Results-Update

(by datahound) May 07 2015

Following on my previous post on the responses to the NIH RFI regarding a potential "emeritus" award, several commenters asked to see the responses. Unfortunately, WordPress does not appear to have a mechanism for posting such files. However, I have posted the spreadsheet through GoogleDocs. Please feel free to share your reactions.

As a further update, as first pointed out to me by @ChrisPickett5, the latest draft of the 21st Century Cures Act currently being developed by the House Energy and Commerce Committee includes a section about a "Capstone Award" (pp. 26-27). It is quite odd to see a new grant mechanism for NIH being discussed as an addition to the law that governs NIH, as opposed to being developed by NIH using existing authorities. It is unclear whether this is coming from NIH or from one or more members of Congress interested in facilitating senior faculty transitioning out of NIH-supported research.

36 responses so far

NIH "Emeritus Award" RFI Results-FOIA Request-Initial Observations

(by datahound) May 06 2015

After hearing comments at the Experimental Biology meeting that responses to the NIH Request for Information (RFI) about a potential "emeritus" award were substantially more positive than those posted on the Rock Talk blog on the subject, I submitted a Freedom of Information Act (FOIA) request to obtain what I could about the RFI responses.

Yesterday (less than 6 weeks after I made the request), I received the response. The key item was an Excel spreadsheet with meaningful responses from 195 individuals and 3 scientific societies (American Society for Biochemistry and Molecular Biology, Genetics Society of America, American Association of Immunologists). The names and email addresses of the individuals (as well as some other bits of information) were redacted although institutional affiliation information was included where provided.

As a first pass at the analysis, I coded each response as Supportive of an Emeritus Award, Not Supportive of an Emeritus Award, or Mixed. The results were almost evenly divided with 92 Supportive, 85 Not Supportive, and 21 Mixed.

Some of the responses disclosed that the respondent was a senior scientist who would be or would have been a potential applicant for an emeritus award. I searched the responses for such disclosures and identified 17 individuals. All 17 were supportive of the concept of a potential emeritus award.

I also examined the institutional affiliations of the respondents where provided. The institutions for which more than 2 responses were received included:

Harvard Medical School (including Brigham and Women's, Mass General, and Beth Israel Deaconess Hospitals) 11

Johns Hopkins University 6

University of Colorado 5

University of Washington 4

University of Michigan 4

University of Maryland 4

University of Massachusetts Medical School 3

Tufts University 3

University of Kentucky 3

 

Note that this parallels, to some extent, the institutions that have a large number of grantees (Harvard Medical School, Johns Hopkins, the University of Michigan, and the University of Washington are in the top ten in terms of overall NIH funding). However, Harvard Medical School and the three affiliated hospitals listed account for approximately $300M in NIH funding (or ~1%), yet they accounted for 11/198 = 5.5% of the responses; 7 of these 11 responses were scored as positive.

I will continue to examine the responses and share some of the more interesting comments.

What are the take-home lessons here?

First, the response rate is typical for this sort of RFI, at a few hundred responses. This represents a very small selection of the biomedical research community, substantially less than 1% of grantees and applicants. Note that I used the term selection instead of sample since there is certainly bias in who chose to take the time to respond.

Second, the responses are substantially more positive than those seen on blogs. Of course, the blog response is likely biased toward those who are younger and more likely to be negative, while the RFI response may be biased toward those with self-interested positions.

Third, the FOIA process was relatively painless and quick in this case.

Whenever NIH issues an RFI on a topic of interest to you or your colleagues, I urge you to take the time to look at it and respond as appropriate. Your voice can't be heard if you don't speak out, and it only takes a few minutes to respond.

26 responses so far

Science Article with an Analysis of NIH Peer Review

(by datahound) Apr 23 2015

In the current issue of Science, Li and Agha present an analysis of the ability of the NIH peer review system to predict subsequent productivity (in terms of publications, citations, and patents linked to particular grants). These economists obtained access to the major NIH databases in a manner that allowed them to associate publications, citations, and patents with particular R01 grants and their priority scores. They analyzed R01 grants from 1980 to 2008, a total of 137,215 grants. This follows on studies (here and here) that I did while I was at NIH with a much smaller data set from a single year and a single institute as well as a publication from NHLBI staff.

The authors' major conclusions are that peer review scores (percentiles) do predict subsequent productivity metrics in a statistically significant manner at a population level. Because of the large data set, the authors are able to examine other potentially confounding factors (including grant history, institutional affiliation, degree type, and career stage), and they conclude that the statistically significant result persists even when correcting for these factors.

Taking a step back, how did they perform the analysis?

(1) They assembled lists of funded R01 grants (both new (Type 1) and competing renewal (Type 2) grants) from 1980 to 2006.

(2) They assembled publications (within 5 years of grant approval) and citations (through 2013) linked to each grant.

(3) They assembled patents linked either directly (cited in patent application) or indirectly (cited in publication listed in application) for each grant.

There are certainly challenges in assembling this data set, and some of these are discussed in the supplementary material to the paper. For example, not all publications cite grant support, so other methods must be used. Also, some publications are supported by more than one grant; in such cases, the publication was linked to each grant.

The assembled data set (for publications) is shown below:

Science Figure

By eye, this shows a drop in the number of linked publications with increasing percentile score. But this is due primarily to the fact that more grants were funded with lower (better) percentile scores over this period. What does this distribution look like?

I had assembled an NIH-wide funding curve for FY2007 as part of the Enhancing Peer Review study (shown below):

NIH EPR Figure

To estimate this curve for the full period, I used success rates and numbers of grants funded to produce the following:

R01 funding curve graph

Of course, after constructing this graph, I noticed that Figure 1 in the supplementary material for the paper includes the actual data on this distribution. While the agreement is satisfying, I was reminded of a favorite saying from graduate school: A week in the lab can save you at least an hour in the library. This curve accounts (at least partially) for the overall trend observed in the data. The ability of peer review scores to predict outcomes lies in more subtle aspects of the data.
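For what it is worth, here is a rough R sketch of the kind of approximation I mean: treat each fiscal year's success rate as a hard payline and stack up the grants funded below it. The success rates and award counts in the sketch are placeholders, not the actual figures I used.

# Toy sketch: approximate the number of grants funded at each percentile score
# by treating each fiscal year's success rate as a hard payline.
# The success_rate and n_funded values are placeholders, not real NIH data.
library(dplyr)
library(ggplot2)

years <- tibble(
  fy = 1980:2006,
  success_rate = seq(0.35, 0.20, length.out = 27),  # placeholder
  n_funded = 4500                                    # placeholder
)

funding_curve <- lapply(seq_len(nrow(years)), function(i) {
  payline <- round(100 * years$success_rate[i])
  tibble(percentile = 1:100,
         funded = ifelse(1:100 <= payline, years$n_funded[i] / payline, 0))
}) %>%
  bind_rows() %>%
  group_by(percentile) %>%
  summarize(funded = sum(funded))

ggplot(funding_curve, aes(percentile, funded)) +
  geom_col() +
  labs(x = "Percentile score", y = "Approximate number of funded grants")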

To extract the information about the role of peer review, the authors used Poisson regression methods. These methods assume that the distribution of values (i.e. publications or citations) at each x-coordinate (i.e. percentile score) can be approximated as a Poisson distribution. The occurrence of such distributions in these data makes sense since they are based on counting numbers of outputs. The Poisson distribution has the characteristic that its expected value is the same as its variance, so that only a single parameter is necessary to describe the entire distribution. The formula for a Poisson distribution at a count n (an integer) is f(n) = (λ^n * e^(-λ))/n!, where λ is the expected value. In the curves below, λ corresponds to the expected value on the y axis (e.g. number of publications) and is modeled as a function of the value on the x axis (the percentile score, k).

Table 1 in the paper presents "the coefficient of regression on scores for a single Poisson regression of grant outcomes on peer review scores." These coefficients have values from -0.0076 to -0.0215. These values are the β coefficients in a fit of the form ln(λ) = α + βk where k is the percentile score from 1 to 100 and λ is the expected value for the grant outcome (e.g. number of publications).
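In R, a regression of this form can be fit directly with glm. A minimal sketch, assuming a data frame with one row per grant containing the percentile score and the publication count (the variable names are mine, not the authors'):

# Sketch: Poisson regression of publication counts on percentile score.
# `grants` is assumed to have columns n_pubs (a count) and percentile (1-100);
# the variable names are illustrative only.
fit <- glm(n_pubs ~ percentile, family = poisson(link = "log"), data = grants)
summary(fit)

# The fitted model is ln(lambda) = alpha + beta * percentile
coef(fit)["(Intercept)"]   # alpha
coef(fit)["percentile"]    # beta, comparable to the coefficients in Table 1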

In a model from the paper that includes corrections for five additional factors (subject-year, PI publication history, PI career characteristics, PI grant history, and PI institution/demographics; see below and the supplementary material for how these corrections are included), the coefficient of regression for both publications and citations is β = -0.0158. A plot of the value of λ as a function of percentile score (k) for publications (with α estimated to be 3.7) is shown below:

Distribution b=-0.0152 plot

The shape of this curve is determined primarily by the value of β.
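These curves are straightforward to generate. A minimal sketch, using my by-eye estimate of α = 3.7 (revised in the update at the end of this post):

# Sketch: expected publications lambda(k) = exp(alpha + beta * k), and the
# Poisson distribution of publication counts implied at a given percentile.
alpha <- 3.7       # by-eye estimate (revised to 2.2 in the update below)
beta  <- -0.0158

percentiles <- 1:100
lambda <- exp(alpha + beta * percentiles)
plot(percentiles, lambda, type = "l",
     xlab = "Percentile score (k)", ylab = "Expected number of publications")

# Poisson distribution of publication counts at the 1st percentile
lambda_1 <- exp(alpha + beta * 1)    # about 39.8
pubs <- 0:80
plot(pubs, dpois(pubs, lambda_1), type = "h",
     xlab = "Number of publications", ylab = "Probability")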

The value of λ at each point determines the Poisson distribution at the point. For example, in this model at k=1, λ=39.81 and the expected Poisson distribution is shown below:

Poisson distribution-k=1 plot

There will be a corresponding Poisson distribution at each percentile score (value of k). These distributions for k=1 and k=50 superimposed on the overall curve of λ as a function of k (from above) are shown below:

Distribution plot curves

This represents the model of the distributions. However, it does not take into account the number of grants funded at each percentile score, shown above. Weighting by that distribution yields the overall distribution of the expected number of publications as a function of percentile score for this model, shown as a contour plot below (where the contours represent 75%, 50%, 25%, 10%, and 1% of the maximum density of publications):

Poisson Curves Plot

This figure can be compared with the first figure above with the data from the paper. The agreement appears reasonable although there appear to be more grants with a smaller number of publications than would be expected from this Poisson regression model. This may reflect differences in publication patterns between fields, the unequal value of different publications, and differences between the productivity of PIs.

With this (longwinded) description of the analysis methods, what conclusions can be drawn from the paper?

First, there does appear to be a statistically significant relationship between peer review percentile scores and subsequent productivity metrics for this population. This relationship was stronger for citations than it was for publication numbers.

Second, the authors studied the effects of correcting for various potential confounding factors. These included:

(i) "Subject-year" determined by correcting for differences in metrics by study section and by year as well as by funding institute. This should at least partially account for differences in fields although some study sections review grants from fields with quite different publication patterns (e.g. chemistry versus biochemistry or mouse models versus human studies).

(ii) "PI publication history" determined by the PIs publication history for the five years prior to the grant application including the number of publications, the number of citations up to the time of grant application, the number of publications in the top 0.1%, 1% and 5% in terms of citations in the year of applications and these same factors limited to first author publications or last author publications.

(iii) "PI career characteristics" determined by Ph.D., M.D., or both, and number of years since the completion of her/his terminal degree.

(iv) "PI grant history" categorized as one previous R01 grant, more than one previous R01 grant, 1 other type of NIH grant, or 2 or more other NIH grants.

(v) "PI institution/demographics" determined as whether the PI institution falls within the top 5, top 10, top 20, or top 100 institutions within this data set in terms of the number of awards with demographic parameters (gender, ethnicity (Asian, Hispanic) estimated from PI names.

Including each of the factors sequentially in the regression analysis did not affect the value of β substantially, particularly for citations as an output. This was interpreted to mean that the statistically significant relationship between percentile score and subsequent productivity metrics persists even when correcting for these factors. In addition, examining results related to these factors revealed that (from the supplementary material):

"In particular, we see that competing renewals receive 49% more citations, which may be reflective of more citations accruing to more mature research agendas (P<0.001). Applicants with M.D. degrees amass more citations to their resulting publications (P<0.001), which may be a function of the types of journals they publish in, citation norms, and number of papers published in those fields. Applicants from research institutions with the most awarded NIH grants garner more citations (P<0.001), as do applicants who have previously received R01 grants (P<0.001). Lastly, researchers early in their career tend to produce more highly cited work than more mature researchers (P<0.001)."

So what is the bottom line? This paper does appear to demonstrate that NIH peer review does predict subsequent productivity metrics (numbers of publications and citations) at a population level, even when correcting for many potential confounding factors in reasonable ways. In my opinion, this is an important finding given the dependence of the biomedical enterprise on the NIH peer review system. At the same time, one must keep in mind the relatively shallow slope for the overall trend and the large amount of variation at each percentile score. A 1 percentile point change in peer review score resulted in, on average, a 1.8% decrease in the number of citations attributed to the grant. By my estimate (based on the model in this paper), the odds that funding a grant with a 1 percentile point better peer review score over an alternative will result in more citations are 1.07 to 1. The shallow slope and the large amount of "scatter" are not at all surprising given that grant peer review is largely about predicting the future, which is a challenging process, and that the NIH portfolio includes many quite different areas of science.

One disappointing aspect of this paper is the title: "Big names or big ideas: Do peer-review panels select the best science proposals?" This is an interesting and important question, but the analysis is not suited to address it except peripherally. The analysis does demonstrate that PI factors (e.g. publication history, institutional affiliation) do not dominate the effects seen with peer review, but this does not really speak to "big names" versus "big ideas" in a more general way. Furthermore, while the authors admit that they cannot study unfunded proposals, it is likely that some of the "best science proposals" fall into this category. The authors do note that some of the proposals funded with poor percentile scores (presumably picked up by NIH program staff) were quite productive.

There is a lot more to digest in this paper. I welcome reactions and questions.

UPDATE

Aaron and Drugmonkey commented on the fact that the figure showed an expected value of 40 publications for grants at the 1st percentile. As I noted in the post and in the comments, the analysis depends on a parameter α which does not affect the conclusion about the predictive power of percentile scores but does affect the appearance of the curves. When I first started analyzing this paper, I estimated α to be 3.7 by eye and did not go back and do a reality check on this.

The supplementary material for the paper includes a histogram of the number of publications per grant shown below:

Publications

This shows the actual distribution of publications from this data set.

From this distribution, the value of α can be estimated to be 2.2. This leads to revised plots for the expected number of publications at the 1st percentile and for the overall expected number of publications per grant, shown below:

Pubs-k1-revised graph

Predicted Pub Graph

These data are consistent with results that I obtained in my earlier analysis of one set of NIGMS grants.

I am sorry for the confusion caused by my rushing the analysis.

22 responses so far

Models of Support for Staff Scientist Positions-Matching Funds?

(by datahound) Apr 14 2015

Lots of interesting ideas coming in to my previous post on models of support for staff scientist positions.

Let me add an idea into the mix. Suppose that NIH were to develop a training grant-like mechanism for staff scientists with additional conditions:

(1) No more than XX% of the salary and benefits of each staff scientist position can be supported from the grant with additional funds coming from other grants or institutional funds.

(2) Any staff scientist supported from the grant is guaranteed full support by the institution for a period of YY years after the termination of the grant.

For the purposes of discussion, set XX% = 50% and YY = 2. The idea is to have both NIH and the institution have substantial "skin in the game" with the staff scientists and their positions as the beneficiaries.

As a disclaimer, I do not know enough about the legal framework that could influence the details of this plan.

Thoughts?

 

23 responses so far

Models of Support for Staff Scientist Positions

(by datahound) Apr 13 2015

The topic of support for staff scientists has been discussed extensively recently. NCI announced its intention to initiate a new mechanism for the support of such positions (discussed extensively at Drugmonkey). A poll recently showed that 77% of respondents favored creating more staff scientist positions as a way of dealing with the present postdoc situation.

I recently tweeted a question about the potential of block grants as a mechanism for supporting staff scientist positions. This idea came out of discussions that I had years ago during the NIH "Enhancing Peer Review" process. Then, as now, the discussions centered on how to stabilize staff scientist positions as a career path. The block grant model was proposed as a potential alternative to individual awards such as those to be piloted by NCI. The concerns about an individual award model were that (1) the position is still only as stable as a single grant and (2) the criteria for reviewing staff scientists in conjunction with their environment (associated PI, etc.) appeared to be hard to manage. The use of a larger grant to an institution supporting a cadre of staff scientists could diminish some of these concerns since such grants could be more stable and could be judged over time by criteria related to the stabilization of staff scientist positions. One obvious downside is that the institutions would be responsible for selecting the staff scientists to be supported, with only indirect outside influence.

What are your thoughts about individual awards versus institutional awards for staff scientists? How could an institutional award be structured to best achieve the goals of creating a larger number of more stable staff scientist positions?

31 responses so far
