NIH "High Risk" Programs-Part 3-New Innovator Award Program

Oct 26 2015

I have previously discussed the history of the NIH Director's Pioneer Award program. The daughter program of the Pioneer program is the NIH Director's New Innovator Award program. This program was created in 2007, when Congress was working on an appropriations package in February for the fiscal year that had started the previous October. I was involved in the conception of this program, and I recall quite clearly sitting in a meeting when someone from NIH Director Elias Zerhouni's office came in and asked if I could spare a few minutes to speak with Dr. Zerhouni and the NIH Budget Director. Needless to say, I said that I could. In the course of the budget negotiations, Congressional members and staff had heard Dr. Zerhouni's strong concern about the plight of young investigators and had been pleased with the initial progress of the Pioneer program. They wanted to know if NIH could establish a "junior Pioneer" program if they provided some additional funding. I was asked what I thought of the idea and whether we could set up such a program. Again, needless to say, I indicated that I thought we could. The bill was passed with $40M for this new program in the budget for the NIH Common Fund.

This was mid-February, and the program had to be set up from scratch and operationalized to get these funds out the door by the end of the fiscal year (September 30, 2007). After the meeting, I walked back to NIGMS, found Judith Greenberg and others involved with the Pioneer program (which was by then being administered through NIGMS), and told them that I had so much faith in them that I had committed us to building this new program on this very short timeline. I am not sure they were entirely thrilled with me but, as always, they rose to the occasion and got right to work on the many tasks that needed to be tackled.

The first key question was how to target the program to young investigators, since basing eligibility on age was clearly not legal. We came up with the idea of using time since an individual had received their "terminal degree" (typically a PhD or MD). But what was the appropriate time period? After some discussion and a little analysis, we settled on 10 years (with some exceptions for clinical training or time off for family responsibilities). This decision became the basis for the subsequent NIH definition of an "early stage investigator" (ESI).

The second key question was how large the awards should be. The NIH Budget Director, John Bartrum, had very cleverly required that these awards be made with a new mechanism, the DP2. This mechanism is quite unusual in that it allows NIH to commit funds for multiple years from the same fiscal year. This was done to avoid adding to the commitment base with these awards. Normally, when an institute funds a multi-year award in one fiscal year, this commits the institute to funding the out years of the award in subsequent fiscal years. Since NIH does not know the appropriation level for future fiscal years, this can be a real challenge. Making lots of commitments and then receiving a poor appropriation the next year can limit the number of new and competing grants substantially, leading to low success rates and other issues. With the new DP2 mechanism, NIH could fund each entire award out of one fiscal year without taking on any new commitments. The decision was made that the awards would be $1.5M in direct costs over 5 years (but paid to the institution in the first year). Thus, with $40M and an estimated average grant size of ($1.5M direct plus ~$1M indirect) = $2.5M, we expected to be able to make about 14-16 awards.
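For those who want to check that arithmetic, here is a minimal back-of-the-envelope sketch in Python; the ~$1M indirect figure is an approximation, since actual indirect cost rates vary by institution.

```python
# Rough estimate of how many DP2 awards a $40M appropriation could support,
# with each award paid entirely out of a single fiscal year.
appropriation = 40_000_000   # new Common Fund money for the program
direct_costs = 1_500_000     # direct costs per award (5 years, paid up front)
indirect_costs = 1_000_000   # approximate indirect (F&A) costs per award

total_per_award = direct_costs + indirect_costs      # about $2.5M
expected_awards = appropriation // total_per_award   # 16 at these round numbers

print(f"Estimated total cost per award: ${total_per_award:,}")
print(f"Expected number of awards: roughly {expected_awards}")
# Allowing for institution-to-institution variation in indirect costs gives
# the 14-16 planning range mentioned above.
```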

The next challenge was writing the Request for Applications and getting the word out about this program so that eligible individuals would have some time to get their applications conceived and submitted. It is a testament to Judith Greenberg and her coworkers that the RFA was published on March 9, 2007, approximately 3 weeks after the bill was passed. Trust me, the NIH bureaucracy does not normally work that fast. We worked with our communications staff to publicize the funding opportunity through every reasonable channel we could think of, since our fear was that eligible folks would not find out about the program in time to submit applications. Applications were due on May 22, with this short period dictated by the need to get the applications processed, reviewed, and funding decisions made by September 30. That our publicity approach was successful became clear when we realized that we had 2153 applications in by the deadline. This, of course, created a new challenge: funds for 14 awards and 2153 applications led to a projected success rate of a whopping 0.6%.
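To make the success-rate arithmetic explicit, here is a small illustrative calculation; the 30-award total reflects the additional institute contributions described below.

```python
# Projected success rate from the initial $40M appropriation alone,
# and the final rate after other institutes contributed additional funds.
applications = 2153
projected_awards = 14
final_awards = 30   # total awards ultimately made by September 30

print(f"Projected success rate: {projected_awards / applications:.2%}")  # ~0.65%
print(f"Final success rate:     {final_awards / applications:.2%}")      # ~1.39%
```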

We had taken the lessons that we had learned through the Pioneer program and made sure that all stages of the process emphasized that innovative research could be expected from scientists of all genders and from diverse backgrounds. The review process (again, set up on a short timescale) involved a two-phase electronic review. No interviews were included (in contrast to the Pioneer award). We spent considerable time orienting each group of reviewers so that they understood the vision for the program in all its aspects. When the reviews were completed, we had an outstanding set of highly ranked applications. I set to work soliciting funds from institutes to increase the number of awards. This was challenging since these awards cost about $2.5M a pop, but the lack of out-year commitments was a selling point. By September 30, we were able to make a total of 30 awards to some outstanding scientists, including 12 women and 18 men, in a range of fields. I had the privilege of personally calling these individuals to tell them that they would likely be receiving these awards (and to get some additional information to make sure that they were still eligible).

Based on the successful launch, funds were provided for the New Innovator program in subsequent years. After the program had been running for three years, we initiated an outside process evaluation, parallel to the one for the Pioneer program. The resulting report contains considerable information about the program, including comparisons of the applicant, finalist, and awardee pools.

[Figure: New Innovator Gender]

There were no statistically significant differences between the compositions of the applicant and awardee pools with regard to either gender or race/ethnicity.

The New Innovator program continues to the present, with 497 applicants and 41 awards in fiscal year 2015. The goal of enabling young scientists to get off to a running start is hugely important, and others feel the same way. Bruce Alberts, when he was Editor of Science magazine (and a reviewer for the New Innovator program), called for increasing the number of awards to 500 per year. One of the most touching (but also distressing) experiences that I had with this program was receiving several emails from unsuccessful applicants telling me how much fun it had been to write an application about what they actually wanted to achieve, as opposed to what they thought they could get funded to do. This seems to me to be a significant indictment of the current state of affairs. In addition, of course, it is essential that support be available to sustain these careers once they are launched.

6 responses so far

  • Jim Woodgett says:

    With the dropping success rates, many have made the point that peer review basically becomes a lottery below 15% or so. With the first competition having an overall success rate of 0.6% (increased to 1.4% with your efforts to convince other Institutes to chip in), surely the strains on peer review were enormous and acute. A two-tiered review approach likely helped a little, but how confident are you that the awardees were significantly different from/better than the next cohort just outside the line? Applies to all grant competitions, but even more acute here. Any data yet on how well these awardees are doing compared to those who just missed the line?

    • datahound says:

      I certainly do not believe that there are significant differences in "applicant quality" at this level of discrimination. On the other hand, the difference between getting an award and not getting one is likely substantial. I do not know of any analysis comparing those who received the awards and those who just missed. But it is very likely that those who received the awards are doing better since having $1.5M in research support plus the prestige has to be a good thing, on average, compared to not having those things.

  • eeke says:

    Why did the number of applicants drop so much? I applied the first year (2007), but was no longer eligible after that (I lost ESI status, which for me lasted only about 6 months after it was established). Is the dismal success rate turning people away?

    I share the sentiment about the science I'd like to do vs the science that is more likely to bring in the money. I feel as though I've been doing this job with one hand tied behind my back, it's awful, and I'm looking for a way out. Science is supposed to be fun, and this hasn't been fun for a long time.

    • datahound says:

      I think the success rate had a lot to do with it. With that said, I think many more eligible investigators should apply. It is a 10-page essay on what you have accomplished and what you would like to do, with a chance of getting 5 years of funding at a reasonable level.

  • Jim Woodgett says:

    Recalled this paper: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0118494

    The last paragraph (recommendations):

    "Because a 20% funding rate will force at least half of all proposers to abandon federally funded research after multiple years of effort, we recommend that proposers, research mentors, and funding agencies compare current funding rates to this value. We suggest that individual investigators should consider avoiding proposing to programs with funding rates at or below 20% unless they are confident that their research program has a greater-than-baseline chance of success or they are willing to write two or more proposals per year. If researchers have a recent history of writing successful or unsuccessful proposals to this grant agency, we suggest that they consider our discussion of conditional probabilities as it pertains to their situation. We recommend that department heads and research administrators think carefully about which researchers they should guide toward low funding rate programs, basing their decisions on realistic chances of success and time available for writing proposals. We note that 20% funding rates impose a substantial opportunity cost on researchers by wasting a large fraction of the available research time for at least half of our scientists, reducing national scientific output, and driving many capable scientists away from productive and potentially valuable lines of research."

    Clearly, this is infeasible as is, as it expects scientists to voluntarily withdraw from applying for the majority of grant competitions and hang up their hats. Many investigators go out swinging or find other ways to sustain their work. But the result may be the same in terms of low funding rates squeezing out excess researchers. There is a dynamic equilibrium between the funds available and the amount of high-quality science that can be done, and that optimizes to some ideal number of scientists, where less effective beasts are replaced by more effective ones. It must be highly modelable.

    • datahound says:

      Thanks for sharing this; I had not seen it.

      I do not know of many programs that have success rates over 20% at present. I also think that writing two or more proposals per year is not such a burden. Clearly, an optimal system would have success rates at 20% or above so that researchers were not spending so much time writing and reviewing proposals. But it is less obvious how to get from where we are to such a system. I think funding agencies should take a hard look at large programs to make sure that they are getting their (our) money's worth, given that the opportunity cost of directing such funds away from investigator-initiated research is so high, with so much outstanding potential science and such low success rates. This is not to say that some large programs do not deliver substantial value, just that the bar should be quite high.
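      As a purely illustrative sketch of the arithmetic behind that 20% threshold (my own toy model, assuming each submission is an independent draw with a fixed success probability, which is certainly an oversimplification):

      ```python
      # Toy model: chance of at least one award after n independent submissions,
      # each with success probability p. Real proposals are not independent,
      # so treat these numbers as rough illustrations only.
      def prob_at_least_one_award(p: float, n: int) -> float:
          return 1 - (1 - p) ** n

      for n in (1, 2, 3, 5):
          print(f"20% success rate, {n} proposal(s): "
                f"{prob_at_least_one_award(0.20, n):.0%} chance of at least one award")
      # After 3 proposals at a 20% rate, roughly half of applicants (0.8**3 ≈ 51%)
      # would still be unfunded, which is the basis for the "at least half of all
      # proposers" statement in the paragraph quoted above.
      ```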
