R01 and Other Mechanism Funding Trends 1998-2013

(by datahound) Nov 20 2014

In my most recent post, I noted that the percentage of the overall NIH budget going to R01-equivalent awards dropped from 43.8% in FY1998 to 37.2% in FY2003 to 34.7% in FY2013. Here, I present more details about the trends that have led to this drop. Below is a plot of expenditures in different categories from FY1998 to FY2013.

1998-2013 All


In addition to R01 equivalent grants (R01s, R37s, R29s), the categories include Research Centers, R&D Contracts, Intramural Research, Training (Ts and Fs), Career Awards, Research Project Grants (RPGs) other than R01-equivalent grants, SBIR/STTR grants, Other Research (e.g. R25s), and Research Management and Support (e.g. extramural staff salaries, other administrative costs, review costs).

The categories other than R01 equivalent grants are shown with an expanded scale below:

1998-2013 Blowup

The graphs show that the percentage going to R01 equivalent grants fell in two phases: the first occurred during the NIH doubling, whereas the second occurred over the subsequent decade.

The percentages and differences in the first and second phases in the various categories are tabulated below:

Category FY1998 FY2003 FY2013 2003-1998 2013-2003
R01 equivalent 43.8 37.2 34.7 -6.6 -2.5
Centers 8.6 9.2 8.3 0.6 -0.9
R&D Contracts 5.8 8.5 10.0 2.7 1.5
Intramural 10.5 9.4 11.1 -1.1 1.7
Training 3.1 2.7 2.5 -0.4 -0.2
Career 1.7 2.1 2.1 0.4 0.0
RPG Non-R01 equiv 11.2 14.4 15.9 3.2 1.5
SBIR/STTR 2.0 2.0 2.2 0.0 0.2
Other Research 4.6 6.0 6.2 1.4 0.2
RMS 3.6 3.4 5.1 -0.2 1.7

Note that the categories do not total 100% as some small categories such as construction are not shown.
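As a minimal sketch (plain Python, with the shares copied from the table above for a few categories), the two phase-by-phase differences can be recomputed directly from the FY percentages:

```python
# Recompute the phase-by-phase changes (in percentage points of the overall
# NIH budget) from the table's FY1998, FY2003, and FY2013 shares.
shares = {
    "R01 equivalent": (43.8, 37.2, 34.7),
    "R&D Contracts": (5.8, 8.5, 10.0),
    "RPG Non-R01 equiv": (11.2, 14.4, 15.9),
    "RMS": (3.6, 3.4, 5.1),
}
for category, (fy1998, fy2003, fy2013) in shares.items():
    phase1 = round(fy2003 - fy1998, 1)  # change during the doubling
    phase2 = round(fy2013 - fy2003, 1)  # change over the following decade
    print(f"{category}: {phase1:+.1f} (1998-2003), {phase2:+.1f} (2003-2013)")
```

This reproduces, for example, the -6.6 and -2.5 point drops for R01 equivalent awards quoted in the text.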

During the first phase, the percentage of the overall NIH budget going to R01 equivalent awards fell by 6.6%. Over this period, the percentage going to other categories of RPGs grew by 3.2% (to be discussed below), the percentage going to R&D Contracts grew by 2.7%, and the percentage going to Other Research grew by 1.4%.

During the second phase, the percentage going to R01 equivalent awards fell by an additional 2.5%. This was associated with an additional increase of 1.5% going to R&D Contracts, an increase of 1.5% going to non-R01 RPGs, an increase of 1.7% going to Intramural Research (part of which is due to an accounting change), and a 1.7% increase in Research Management and Support.

The activity codes that make up the increase in non-R01 equivalent RPGs are shown below:

Other RPG mechs

Over this period, the largest contributors to the increase were cooperative agreements (which involve significant NIH staff participation), primarily U01s but also U19s, and R21s (whose growth was driven by a large increase in the number of applications). The investment in Program Project grants (P01s) actually fell over this period. New mechanisms, including the NIH Director's Pioneer (DP1) and New Innovator (DP2) awards as well as the R00, made a small contribution to the growth.

These data are aggregated across all of NIH. As the previous post showed, there are significant differences between ICs. Further analysis will be required to analyze these trends.


R01 Success Rates by Institute and Center

(by datahound) Nov 20 2014

Each NIH Institute and Center (IC) has its own leadership and, to some extent, policies, practices, and priorities. Furthermore, each receives a separate appropriation from Congress. Because of these influences, factors of considerable interest to the scientific community vary from one IC to another.

One such factor is the success rate for R01 applications. Note that "success rate" is related to, but different from, "payline". The success rate is the number of awards made in a given fiscal year divided by the number of applications reviewed in that year. The number of applications reviewed includes both scored and unscored applications, but applications for the same grant (e.g. an A0 and an A1) reviewed in the same year are counted only once. The payline is a percentile threshold that some ICs use such that almost all applications scoring better than the payline are funded. This difference has been the source of much confusion over the years. Rock Talk and individual ICs (NIGMS, NIAID, and likely others) have posted explanations.
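As an illustrative sketch of this definition (the grant IDs below are invented examples, not real awards), the key detail is that an A0 and an A1 for the same grant reviewed in the same year count as a single application:

```python
def success_rate(applications, awards):
    """applications: (grant_id, suffix) pairs reviewed in one fiscal year;
    awards: set of grant_ids funded that year."""
    # An A0 and A1 for the same grant in the same year count once.
    unique_grants = {grant_id for grant_id, _suffix in applications}
    return len(awards) / len(unique_grants)

# Hypothetical example: three distinct grants reviewed, one funded.
apps = [
    ("R01XX000001", "A0"), ("R01XX000001", "A1"),  # same grant, counted once
    ("R01XX000002", "A0"),
    ("R01XX000003", "A0"),
]
funded = {"R01XX000002"}
print(success_rate(apps, funded))  # 1 award / 3 unique applications ≈ 0.33
```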

Success rate data are available through NIH RePORT. The data for FY 2014 (which ended September 30, 2014) are not yet available. For FY 2013, the results for R01 success rates are as follows:

NIH Overall 17%

FIC 19%
NCCAM 12%
NCI 15%
NEI 28%
NHGRI 28%
NHLBI 16%
NIA 13%
NIAAA 21%
NIAID 15%
NIAMS 17%
NIBIB 17%
NICHD 12%
NIDA 19%
NIDCD 26%
NIDCR 22%
NIDDK 18%
NIEHS 15%
NIGMS 21%
NIMH 19%
NIMHD 4%
NINDS 20%
NINR 12%
NLM 16%


Note that these range from 4% to 28%. Some ICs have rather specialized missions and do not accept many R01 applications. These include NCCAM, NCATS, NHGRI, NIMHD, NINR, and NLM. These will be excluded from the rest of this analysis.

What characteristics of the ICs correlate with R01 success rate? One possibility is the overall size of the IC budget. Success rates are shown below with the ICs ordered in terms of budget size.

R01 SR versus Budget


There is, in fact, a negative correlation (correlation coefficient -0.39) such that larger ICs tend to have lower R01 success rates.

An alternative is the IC investment (in $) in R01s. Success rates are plotted below with the ICs ordered in terms of the size of the R01 investment.

R01 SR vs R01 investment

While there is an increasing trend for the ICs with a smaller R01 investment, the overall correlation is again negative (correlation coefficient -0.23).

A final characteristic is the percentage of the IC budget committed to R01 funding. These percentages ranged from 23.6% to 58.3% in FY2013.

A plot versus these percentages is shown below:

R01 SR vs Percent RO1

Here, a substantial positive correlation is observed with a correlation coefficient of 0.70.
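The correlation coefficients quoted throughout this post are ordinary Pearson coefficients; a minimal implementation (with illustrative data, not the actual IC values) is:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative only: a perfectly linear relation gives a coefficient of 1.
print(round(pearson([1, 2, 3, 4], [2, 4, 6, 8]), 3))  # 1.0
```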

Note that some of the larger ICs with relatively low percentages of their budgets invested in R01s have considerable other responsibilities for infrastructure or specialized programs. Nonetheless, these data reveal one potential degree of freedom that could help mitigate the historically low R01 success rates that we are now experiencing.

Note that the overall investment in R01 equivalent grants (as a percentage of the NIH budget) has declined over the years. At the start of the doubling (FY1998), this percentage was 43.8%. At the end of the doubling (FY2003), this had declined to 37.2%. In FY2013, it stood at 34.7%.



A View from the Perspective of a National Academy of Sciences Section on R01 Funding Outcomes

(by datahound) Nov 14 2014

Throughout discussions of the NIH peer review system, it has been clear (and not at all surprising) that different individuals and groups have quite different perspectives on the NIH enterprise. This goes back to the "Enhancing Peer Review" process (and well before) and continues to the present with changes in policies and active discussions in a scientific society membership magazine (here, here, and here) and in blogs (e.g. here). For example, scientists who are early in their careers at present generally have different experiences, perspectives, and concerns than those of more established investigators. Much of the data that are presented reflect the entire NIH investigator pool, although, in some cases, data for more limited subsets such as new investigators are discussed. Here, I analyze the outcomes for a different subgroup, namely the members of one section of the National Academy of Sciences.

The National Academy of Sciences is divided into sections, based on scientific field. The section I examined has 122 members at present. Let us first examine the characteristics of the group. The distribution of the members over the years in which they were elected to the National Academy is shown below:

Year elected histogram-2

The median for this distribution is 1995-1996.

The birth years for approximately 50% of these members were available online. In the other cases, birth years were estimated from the year of receipt of the member's bachelor's or Ph.D. degree, assuming an age of 22 at the bachelor's degree and 27 at the Ph.D. Based on these estimated birth years, the median age at the time of election was 51, with a range of 38 to 71.
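The estimation rule described above can be sketched as a small function (the parameter names are illustrative, not from any NIH dataset):

```python
def estimate_birth_year(bachelors_year=None, phd_year=None):
    """Estimate birth year from a degree year, per the rule in the text:
    age 22 at the bachelor's degree, age 27 at the Ph.D."""
    if bachelors_year is not None:
        return bachelors_year - 22
    if phd_year is not None:
        return phd_year - 27
    raise ValueError("no degree year available")

print(estimate_birth_year(bachelors_year=1975))  # 1953
print(estimate_birth_year(phd_year=1980))        # 1953
```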

Of the 122 members, 108 had received substantial funding from the NIH extramural program during the period covered by NIH RePORTER (1991-present), 8 are or were members of the NIH intramural program and 6 did not receive substantial NIH funding during the period covered by NIH RePORTER.

The age distributions of research-active and research-inactive scientists are shown below:

Age distribution plot-2

Of the research-active investigators with estimated ages over 76, three are funded by the NIH extramural program, three are in the NIH intramural program, and one is active in industry.

Examination of the R01 funding records of the 108 investigators with extramural funding revealed some striking characteristics. Many of these investigators have had long-running R01 grants, with the median length of each investigator's longest-running grant (largest suffix) at 26 years and a range of 4 to 49. Of the extramurally funded investigators, 64 of 108 (59%) had received one or more MERIT (R37) awards (which extend an R01 grant for an additional cycle without a competing renewal) during the course of their careers. Of these scientists, 27 of 108 (25%) have been HHMI investigators, all but two of them currently.

How have these investigators fared with regard to having applications funded in their initial (A0) versus amended (A1, A2) submissions? I recently posted data showing the percentage of A0, A1, and A2+ applications in the NIH-wide funded R01 grant pools from 1991 to 2014. The results for competing renewal (Type 2) grants for the 108 National Academy investigators are compared with the NIH-wide curves below:

NAS-Type2 graph

The percentage of funded R01 grants in the A0 pool for the National Academy members averaged 94% from 1991 to 2003, compared with 60% for all NIH investigators. The A0 curve for the National Academy members roughly follows the NIH-wide curve, with a dip from 2004 to 2011. Overall, the two curves show an average difference of approximately 35%.

The results for new (Type 1) grants are shown below:

NAS-Type1 graph

These data are somewhat noisy since those for the National Academy members come from only 130 new R01 grants. Again, however, the pool of R01 grants from the National Academy members includes more A0 awards. From 1991 to 2003, the average percentage of A0 awards was 80%, compared with 57% for the NIH-wide pool. The percentage of A0 awards for the National Academy members dipped from 2005 to 2008 and then recovered, tracking the NIH-wide curve. Overall, the average difference between the two A0 percentage curves is 30%.

The members of the National Academy of Sciences examined are a remarkable group of scientists, with important contributions throughout their careers and impressive recognition of their work including several Nobel Prizes. The purpose of this analysis was to put some quantitative context around the differences in experiences of these investigators with an average member of the overall NIH R01 grantee pool.

It is very important to note the positive feedback loops at work here. Success in getting funded and doing exciting science makes it easier to get additional funding without as much time commitment to grant writing. The time saved can then be spent doing more and better science, feeding back into the process again.



Data Regarding the A2/No A2 Policy

(by datahound) Nov 13 2014

Few NIH policy issues have elicited as much discussion and passion as the elimination of the option of submitting an "A2" application. This policy had its roots in the NIH "Enhancing Peer Review" process. During this process, NIH staff presented the data below, demonstrating that the mixture of A0, A1, and A2 applications within the pool of funded R01 grants had been shifting dramatically in the post-doubling period with the fraction of R01s funded at the A0 stage dropping from near 60% to below 30%.

[Figure: percentages of A0, A1, and A2 applications among funded R01 grants]

These data demonstrated increasing inefficiency in the NIH funding system: applications that were quite likely to be funded eventually were being queued up, with A1 and, particularly, A2 applications receiving better scores and being funded in preference to A0 applications that "had more chances" to come in for reconsideration. These data, together with other analyses and discussion, led the Enhancing Peer Review committee to recommend that NIH "consider all applications as being new." This proposal was not well received by the scientific community, including FASEB. Instead, NIH announced that only a single amended application would be allowed and that proposals not funded after two attempts would have to be "substantially re-designed."

After implementation of this policy, many in the scientific community raised concerns about its consequences, in part through a petition with more than 2000 signatures. Nonetheless, NIH Deputy Director for Extramural Research Sally Rockey and NIH Principal Deputy Director Larry Tabak posted a progress report showing that the policy "was working", including the figure below:

[Figure: A0, A1, and A2 percentages from the Rockey-Tabak progress report]

The key point from this figure was not that the percentage of A2 applications in the funded pool was dropping (this was inevitable given the policy) but rather that the percentage of A0 applications was increasing while the percentage of A1 applications was essentially static.

Much to everyone's surprise, in April, NIH announced a reversal of this policy, allowing unsuccessful A1 applications to be resubmitted without the need for any change as new, A0 applications.

What might have been the basis for this change? I extended the data analysis through fiscal year 2013 (when the new policy was announced) as shown below:

Type1-2000-2013 graph-2


These data show that, in the three years after the initial progress report (shown with darker lines), the trends had changed with the percentage of A1 applications in the funded pool surpassing the percentage of A0 applications. The percentage of A0 applications in the funded pool was only slightly over 40% (even with no A2s), well below the 60% historical value.

These data are just for new (Type 1) grants and not for competing renewal (Type 2) grants. For completeness, I present below the results for both Type 1 and Type 2 grants, from 1991 (when data first become available in RePORTER) to 2014:

1991-2014 Type 1-2 graph

This graph reveals that the distributions were relatively stable before and during the NIH budget doubling. Then the percentage of new (Type 1) A0 awards fell dramatically, with the competing renewal (Type 2) A0 drop lagging by a year or two. The curves for Type 1 and Type 2 awards have been essentially coincident since the implementation of the "No-A2" policy and its elimination. It will be interesting to see how these curves evolve, although the new policy, which mixes truly new applications with unsuccessful A1 applications resubmitted as new, will make the curves harder to interpret.


Discuss Upsides and Downsides of K99-R00 Program

(by datahound) Nov 07 2014

Given all the data that I have recently posted about the K99-R00 program (transitions to academic positions, transitions to R01 funding, distributions across institutes and centers and academic institutions, publication patterns, comparison to the F32 program), I would be interested in any reactions to this program. Should it be expanded, contracted, modified? Why? Are there other data or analyses that would be informative?


K99-R00 program: F32 awardees as a control group

(by datahound) Nov 06 2014

I recently posted a series of analyses on the K99-R00 program, particularly the FY2007 cohort (the first year of the program). In my initial post on this program, I noted that recipients of F32 awards during the same period might make a reasonable group to compare to the K99-R00 recipients. I have now performed some of this analysis.

In FY2007, there were 612 new F32 awardees across NIH. Of these, 143 have some post-F32 award in NIH RePORTER. However, these investigators do not reflect the full F32 cohort that went into academia, as others may hold academic positions but have not received NIH grants. Unfortunately, information for these individuals is not readily available through a public database. However, through the wonders of the internet, search engines, and social media sites such as LinkedIn, I was able to identify the likely current positions of approximately 90% of the FY2007 awardees. The representation of those in academic positions is likely close to 100%, given the higher web profiles of those in academia compared with those in other sectors.

I was able to identify 304 individuals with an academic position likely suitable for applying for grants. Thus, it appears that approximately 50% of the F32 awardees transitioned to academic positions after the completion of their training. This compares with 93% of the K99 awardees who transitioned to the R00 phase, as well as several others who moved to academic positions (often outside the United States) without activating the R00 phase of the K99-R00 award. Thus, it appears that while the K99 mechanism was nearly 100% efficient in transitioning investigators to academic positions, only about half of this F32 cohort made this transition.

Other individuals have moved to positions in the pharmaceutical and biotech industries, some physicians have moved into private practice settings, and some individuals hold scientific staff positions. Approximately 25 individuals still appear to be listed in positions categorized as postdocs, even 7 years after the initial F32 award.

Did the F32 cohort come from the same distribution of institutions as the K99 cohort? The distribution of institutions, based on rankings of NIH funding levels, is shown below:

F32-K99 institution plot

These distributions are relatively similar to one another with the median institutional rank for the F32 awardees at 33 compared with 23 for this cohort of K99 awardees. Given the imperfections of institutional NIH funding rank as a measure as well as the relatively small sample sizes, it is not clear how significant this difference is.

How about the distribution of the academic positions obtained by the F32 awardees compared to the K99 cohort? These are compared below:

F32-K99 Academic Institutions

These distributions are more different. In particular, the F32 awardee distribution shows a significant number of investigators who work at institutions with relatively low NIH funding rank. These are largely institutions that have quite substantial undergraduate teaching missions.

What fraction of the F32 cohort have obtained R-mechanism or similar funding? 86 investigators have obtained at least one R01, DP2, R21, R15, R03, or R56 award, and two have obtained positions and funding within the NIH intramural program. Of these, 60 have received R01 or DP2 (NIH Director's New Innovator) awards.

In addition, a considerable number of this cohort received K awards subsequent to their F32 award including 33 K99 awards and 41 other (K01, K07, K08, K22, K23) K awards.

For comparison, 120 of the 182 members of the 2007 K99 cohort have obtained at least one R01, DP1, DP2, R21, R15, R03, or R56 award, and one moved to the NIH intramural program. Of these, 108 have received R01, DP1, or DP2 awards.

The overall funding patterns as a function of time are compared below:

F32 K99 funding curves

These curves show the overall amount of funding in different categories (F32, K awards except K99, K99, R00, R01 equivalent (R01, DP1, DP2), and all other awards) normalized to the number of investigators in each cohort. Note that the scale is 5 times larger for the K99 cohort than for the F32 cohort. These curves show the initial investment in F32 or K99 awards, respectively, followed by the time evolution of other types of funding.

The amount of R01 equivalent funding per investigator for the K99 cohort is nearly six times higher than that for the F32 cohort. This is a consequence of the two-fold difference in obtaining academic positions (~100% versus ~50%) and the three-fold difference (108/182 versus 60/304) in obtaining an R01 given an academic position.
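A quick check of this decomposition, using the cohort counts from the post and treating the transition rates as exactly 100% and 50%:

```python
k99_r01_rate = 108 / 182        # K99 cohort members with R01/DP1/DP2
f32_r01_rate = 60 / 304         # F32 academic-cohort members with R01/DP2
transition_ratio = 1.0 / 0.5    # ~100% vs ~50% reaching academic positions
r01_ratio = k99_r01_rate / f32_r01_rate  # roughly three-fold

# The product is the overall per-investigator advantage for the K99 cohort.
print(round(transition_ratio * r01_ratio, 1))  # 6.0, i.e. "nearly six times"
```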

It also appears that the K99-R00 cohort became independent investigators earlier, certainly as measured by the onset of R00 funding and also as measured by a one- to two-year shift in the time to obtaining R01 or other funding.

Update:  A discussion on Twitter involved examining the costs and outputs for the F32 and K99 programs.

Here are the data related to the graphs above:

F32 program: Total cost of F32s: $64.89M; K Awards: $27.85M; R00: $20.88M; R01 equiv: $67.61M; Other grants: $29.91M

K99 program: Total cost of K99s: $28.86M; R00: $125.68M; R01 equiv: $149.02M; Other grants: $30.65M

If one considers the R00 total as a cost (since this is a commitment made by NIH at the time of the K99 award), the ratio of Output (R01 equiv + Other grants) to Cost (F32 + K + R00) is (67.61+29.91)/(64.89+27.85+20.88) = 0.86 for the F32 program. Recall that 33 F32 recipients in this cohort went on to receive K99 awards.

Similarly, for the K99 program, Output/Cost = (149.02+30.65)/(28.86 + 125.68) = 1.16.

Of course, R00 represents research grants, so they could also be considered as output rather than cost. Under these conditions, Output/Cost = (67.61+29.91+20.88)/(64.89+27.85) = 1.28 for the F32 program and Output/Cost = (149.02+30.65+125.68)/(28.86) = 10.58.

As a compromise between these views, the R00 value could be treated as neither a cost nor an output. Under these conditions, Output/Cost = 1.05 for the F32 program and 6.23 for the K99 program.
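The three Output/Cost scenarios can be recomputed from the dollar totals above (figures in $M; a sketch of the post's arithmetic, not an official NIH calculation):

```python
def ratio(outputs, costs):
    """Output/Cost ratio, rounded to two decimals as in the text."""
    return round(sum(outputs) / sum(costs), 2)

# "program" cost for F32 includes the subsequent K awards, per the formulas above.
f32 = {"program": 64.89 + 27.85, "r00": 20.88, "r01": 67.61, "other": 29.91}
k99 = {"program": 28.86, "r00": 125.68, "r01": 149.02, "other": 30.65}

for name, d in (("F32", f32), ("K99", k99)):
    r00_as_cost = ratio([d["r01"], d["other"]], [d["program"], d["r00"]])
    r00_as_output = ratio([d["r01"], d["other"], d["r00"]], [d["program"]])
    r00_as_neither = ratio([d["r01"], d["other"]], [d["program"]])
    print(name, r00_as_cost, r00_as_output, r00_as_neither)
# F32 0.86 1.28 1.05
# K99 1.16 10.58 6.23
```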

Note that these Output/Cost ratios are not fixed but will increase over time, as outputs grow while costs are fixed.


K99-R00 Publication Analysis-Part 7-High Profile Publications

(by datahound) Oct 28 2014

In my first post in this series, I noted that approximately 20% of the 2007 cohort of K99 awardees had a "very high profile" (Science, Nature, Cell, New England Journal of Medicine, or JAMA) publication and 40% had some other "high profile" (PNAS, other Cell or Nature journal, Journal of Clinical Investigation) publication during or prior to 2007. Following comments, I noted in a later post that the distribution was not uniform across investigators supported by the various NIH institutes and centers.

Here, I examine such "high profile" publications both before and after the K99 awards. The distribution of the number of "high profile" publications across a pool of 132 K99-R00 investigators who received their K99 awards in FY2007 is shown below:

Glamor pub histogram-2

The average number of Science, Nature, Cell, NEJM, or JAMA papers is 0.8 per investigator with a range of 0 to 9. More than 60% of these investigators have not published in Science, Nature, Cell, NEJM, or JAMA, either before or after receiving the K99 award. The average number of other "high profile" publications is 2.6 per investigator with a range of 0 to 26.

To depict the relationship between the number of high profile publications before and after receiving the K99 award, the number of all "high profile" publications after 2008 is plotted below versus the number of all "high profile" publications in 2008 or before:

Correlation glamor

The overall correlation coefficient is 0.59. Note that several investigators who had not published any "high profile" publications prior to receiving the K99 award have published such papers subsequently and many of these were published during the R00 phase or subsequently.

Updated: In response to comments on Twitter, I looked at last author publications. Of the 47 Science, Nature, Cell, NEJM, or JAMA papers published after 2008, 17 had the K99 awardee as last author. Of the 174 other "high profile" publications published after 2008, 70 had the K99 awardee as last author.

Below is a plot of the number of "high profile" last author publications versus the number of "high profile" first author publications. The numbers on the graph correspond to the numbers of investigators represented by each data point.

Glamor-First-Last plot

The correlation coefficient is 0.47.


K99-R00 Publication Analysis-Part 6-First and Last Authorships

(by datahound) Oct 27 2014

The K99-R00 program is an interesting one from the point of view of evaluation, since it has cohorts of investigators who begin at approximately the same career stage and then progress through several transitions into independent careers. These different career stages should be reflected in different positions in the author lists of publications. Below is a plot of the total number of publications, the number of first author publications, the number of last author publications, and the number of sole author publications per investigator per year for 132 investigators who received K99 awards in FY2007 and went on to receive an R00 award. No attempt was made to correct for joint first authors or joint corresponding authors.

All Pubs-First,Last-plot-Large

As anticipated, this plot reveals that, prior to receiving the K99 award, these investigators (on average) publish an approximately equal mixture of first author and middle author publications with very few last author publications. A year after the receipt of the K99 award, the number of first author publications per year begins to drop while the number of last author publications grows. This increase in last author publications is associated with an increase in the average number of total publications per year. Note that the dotted lines leading to the 2014 results reflect the fact that the data do not include all publications for calendar year 2014.

These data reflect the full cohort of 2007 K99 awardees for which reliable publication data were compiled without regard to the year that each investigator transitioned to the R00 phase. The data separated according to R00 groups, with 50 investigators who received R00 awards in FY2007 or FY2008, 63 investigators who received R00 awards in FY2009, and 19 investigators who received R00 awards in FY2010 or later, are shown in the plot below:

Pubs-R00 year-First,Last-3

Again, as anticipated, the transition from first author publications to last author publications occurs earliest for the 2007-08 R00 cohort, later for the 2009 R00 cohort, and latest for the 2010-11 R00 cohort.

In addition, the first plot includes both investigators who have been successful in obtaining R01s or similar awards from NIH and those who have not. The results for these two groups are shown below:

Pubs-R01-NoR01 plot

The number of first author publications appears to be slightly higher for those who have received R01s than for those who have not. The number of last author and other publications is higher for those who have received R01s after receiving the K99 awards, but this is likely largely a consequence of the science performed with the support of the R01 funding.


K99-R00 Publication Analysis-Part 4-Revisited

(by datahound) Oct 24 2014

Spurred, in part, by Drugmonkey's post, I have been thinking further about the analysis I did relating the number of publications to the number of authors per publication. I realize that I did not fully grasp the implications of my results. A key question is whether an increased number of publications per investigator can be accounted for by an increased number of authors per publication.

Two limiting cases can be considered. In the first case, the average number of authors per publication would be essentially constant, regardless of the number of publications by a given investigator. This case is ruled out by the data presented that show that the average number of authors is positively correlated with the number of publications with a correlation coefficient of 0.47.

In the second case, the average number of authors per publication increases directly with the number of publications. In this case, the plot of the number of publications weighted by 1/(number of authors) versus the number of publications (the plot highlighted by Drugmonkey) would be a line with slope 0. However, the trend line in the plot has a substantial positive slope. Previously, I focused on the fact that the correlation coefficient for this plot is large (0.83). However, this is not really the point; it is not surprising that the weighted number of publications correlates well with the number of publications. It is the slope of the line that conveys the information.

Simulations suggest that the slope of the line in this plot is close to what one would expect if the number of authors per publication were constant. Another way to see the same point is to consider the average number of authors for the investigators with the smallest and largest numbers of publications.

20 investigators with the fewest publications: average number of publications, 9.1; average number of authors per publication, 4.6

20 investigators with the most publications: average number of publications, 57.4; average number of authors per publication, 7.3

Thus, while the number of publications increases by a factor of 57.4/9.1 = 6.3, the average number of authors per publication increases by only a factor of 7.3/4.6 = 1.6.
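The scaling comparison can be written out explicitly (group averages copied from above; applying the group average uniformly to each paper is a simplifying assumption):

```python
pubs_low, authors_low = 9.1, 4.6     # 20 investigators with fewest papers
pubs_high, authors_high = 57.4, 7.3  # 20 investigators with most papers

pub_factor = round(pubs_high / pubs_low, 1)           # 6.3
author_factor = round(authors_high / authors_low, 1)  # 1.6

# If authors per paper grew proportionally with output, the 1/n_authors-
# weighted output would be flat across groups; instead it rises sharply.
weighted_low = round(pubs_low / authors_low, 1)    # ~2.0
weighted_high = round(pubs_high / authors_high, 1) # ~7.9
print(pub_factor, author_factor, weighted_low, weighted_high)
```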

These data do not support the notion that the increased number of publications is due primarily to an increased number of authors per publication.

As I posted subsequently, a major factor contributing to the number of publications is the amount of support that each investigator has been able to garner.


K99-R00 Publication Analysis-Part 5-NIH Funding

(by datahound) Oct 24 2014

In analyzing the publication patterns of K99-R00 awardees, one crucial factor is the amount of funding that each investigator has obtained to support his/her research. Each K99-R00 investigator likely has received a comparable level of support through the K99-R00 mechanism itself. However, different investigators have been more or less successful in garnering other NIH support such as R01s or other grants. This information can be gathered relatively easily through NIH RePORTER. It is only part of the funding story, as each investigator likely received start-up funds as part of the transition from postdoc to independent faculty member, and some investigators likely have been able to obtain other sources of funding such as NSF and private foundations. With those disclaimers, below is a plot of the number of publications versus the total amount of NIH support obtained, beginning with the K99 award in 2007 and extending to the present (not including funding from grants with co-PIs, which affects a few investigators).

Pubs-Funding plot


Sharp-eyed readers may note the absence of the investigator with the most publications (95) from my initial post on this subject. This investigator received funding only through the R00 phase before moving to a position in Europe, so I removed this data point from the plot. The correlation coefficient for these parameters is 0.30. The correlation improves slightly to 0.32 if each publication is weighted by 1/(number of authors), as described in the previous post.

This plot includes all publications for each investigator. However, publications prior to receiving the K99 award were not supported by these funds. Below is a plot of publications from 2008 to the present versus total NIH support.

Pubs-2008 vs Funding Plot

Restricting the results to these publications increases the correlation coefficient slightly, to 0.33.

These data support the not-at-all-surprising conclusion that the total number of publications is roughly associated with the amount of financial support for the research. The lack of better correlation likely involves a combination of factors including the absence of information about start-up funds and non-NIH support, the different costs of research in different fields, and differences in publication styles between investigators.

