Archive for: June, 2014

NIH Mechanism Tables

Jun 30 2014 Published by under Uncategorized

In a comment on my recent post, Dave asked whether the fraction of the NIH appropriation going to different areas of NIH activity could be easily quantified. The answer is "yes and no." The key source of information about the NIH budget is the "mechanism table." These tables are developed each year and are available from the NIH Budget Office (and from at least some IC websites). Below are NIH-wide mechanism tables for FY2003 and FY2013:

Mechanism Table 2003-2013

The mechanism table breaks the budget down into major categories including Research Grants, Research Centers, Other Research, Training Awards, Intramural Research, Research Management and Support (administrative costs for running NIH), and a number of other smaller categories. The distribution among these categories for NIH as a whole for FY2013 is shown below:

Pie Chart-2013


The largest categories for FY2013 are Research Project Grants (51.0%), Intramural Research (11.1%), Contracts (10.0%), and Research Centers (9.3%).

These percentages were not vastly different in FY2003: Research Project Grants (51.4%), Intramural Research (9.6%), Contracts (8.6%), and Research Centers (9.1%). Thus, the fraction of the budget going to Research Project Grants dropped from 51.4% to 51.0% over this decade, while Intramural Research grew from 9.6% to 11.1%. These data must be examined carefully before interpretation. For example, the growth in Intramural Research is due, in part, to an accounting change wherein the entire National Library of Medicine budget moved from a separate line item into Intramural Research. The combined Intramural Research plus National Library of Medicine budget was 10.7% of the overall budget in FY2003.
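
For readers who want to reproduce this kind of comparison, here is a minimal sketch of how category shares can be computed from a mechanism table and how the National Library of Medicine line can be folded into Intramural Research for a like-for-like comparison. The dollar amounts below are illustrative placeholders, not the actual FY2003 figures.

```python
# Minimal sketch: compute each mechanism category's share of the total budget.
# The dollar amounts below (in $ billions) are illustrative, not actual NIH figures.

def category_shares(mechanism_table):
    """Return each category's percentage of the total budget."""
    total = sum(mechanism_table.values())
    return {cat: 100 * amount / total for cat, amount in mechanism_table.items()}

fy2003 = {
    "Research Project Grants": 13.7,
    "Research Centers": 2.4,
    "Intramural Research": 2.6,
    "National Library of Medicine": 0.3,
    "Contracts": 2.3,
    "Other": 5.4,
}

shares = category_shares(fy2003)

# To compare across years after the accounting change, combine Intramural
# Research and the National Library of Medicine into a single category:
intramural_plus_nlm = shares["Intramural Research"] + shares["National Library of Medicine"]

print({k: round(v, 1) for k, v in shares.items()})
print(f"Intramural + NLM: {intramural_plus_nlm:.1f}%")
```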

Examination of the NIH-wide mechanism tables provides a broad-strokes look at changes in NIH investment strategy over time. In future posts, I will examine these changes, differences in mechanism tables between Institutes and Centers, and further breakdowns of the Research Project Grant pool.

2 responses so far

A0, A1, and A2 R01 Awards from 2003-2013

Jun 25 2014 Published by under Uncategorized

Few science policy topics have led to as much discussion as the NIH policy with regard to the number of amendments allowable for grant applications. This policy originated from the NIH Enhancing Peer Review draft report. Figure 8 in this report shows that the fraction of R01s that were funded on their original submission (A0) had dropped from more than 60% during the doubling years to less than 30% in 2007, with a concomitant increase in the fraction of A1 and, particularly, A2 applications that were funded. These data, some other analyses, and anecdotes from study sections suggested that study sections were "queuing" applications, giving better scores to A2 applications since these applications were on their "last chance."

The Enhancing Peer Review report proposed "consider(ing) all applications as new" so that there was no reason for study sections to treat applications differently based on their amendment status. Organizations including FASEB did not support this proposal. In order to help decrease the time between the initial submission and the time of funding for applications that would eventually be funded, the NIH countered by eliminating the option of A2 submissions.

From my perspective, this proposal would have been sensible in times of reasonable paylines. However, since this proposal was adopted, paylines have continued to fall (except during the ARRA years) to the point that the variability in scores means that two chances may not be adequate, even for outstanding proposals. I wrote a column about this for ASBMB Today entitled "On deck chairs and lifeboats."

After resisting pressures from many in the scientific community for years, the NIH recently reversed course and now will allow an application that did not get funded in A1 form to be resubmitted as a new A0 application without restriction. This policy is a hybrid of the old policy and the initial proposal from the Enhancing Peer Review report. To my knowledge, NIH has not provided much insight into the analysis that led to this reversal, other than statements of concern about the impact of the "No A2" policy on early stage investigators.

As I thought about this policy, I realized that I had never seen data about the differences between new (Type 1) and competing renewal (Type 2) applications. Type 2 applications, in general, score better than Type 1 applications due, in part, to the fact that only applicants that have been relatively successful submit renewal applications. The differences can be seen in an NIGMS Feedback Loop post, which revealed that (for the January 2010 Council round in FY2010) 50% of the Type 2 applications scored better than the 20th percentile, compared with less than 30% of the Type 1 applications, even those from established investigators.

Below are plots of the number of NIH-wide R01 awards for Type 1 and Type 2 awards as a function of amendment status for FY2003-FY2013:

Type1-2 plot-2

The curves showing the mix of A0, A1, and >A1 (almost all A2) applications for Type 1 and Type 2 awards are remarkably similar, with comparable rises in the fraction of A2 awards from FY2003 to FY2009, followed by a drop to essentially zero due to the "no A2" policy. The results for Type 1 and Type 2 awards are compared directly in the plot below, which shows the fraction of A0 awards out of total awards for each type.

A0-Total plot

In FY2003, the fraction of A0 awards among Type 1 grants was slightly larger than 0.5, whereas that for Type 2 was higher, at approximately 0.6. After both fractions dropped, rose, and then dropped again, they converged to roughly 0.45 in FY2013.
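
For those who want to build similar plots, here is a hedged sketch of how the A0 fraction by award type and fiscal year might be computed from an award-level table (for example, an export from NIH RePORTER). The column names and the tiny example table are assumptions for illustration, not the actual data behind the plots.

```python
import pandas as pd

# Sketch: fraction of A0 awards among new (Type 1) and renewal (Type 2) R01
# awards for each fiscal year. Column names ("fiscal_year", "type",
# "amendment") are assumptions about how an award-level export might be organized.

awards = pd.DataFrame({
    "fiscal_year": [2003, 2003, 2003, 2004, 2004, 2004],
    "type":        [1, 1, 2, 1, 2, 2],
    "amendment":   [0, 1, 0, 2, 0, 1],   # 0 = A0, 1 = A1, 2+ = later amendments
})

a0_fraction = (
    awards.assign(is_a0=awards["amendment"].eq(0))
          .groupby(["fiscal_year", "type"])["is_a0"]
          .mean()                      # fraction of awards that were A0
          .unstack("type")
)
print(a0_fraction)
```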

Closer examination of the first graph reveals another important point. In FY2003, there were 4564 Type 1 awards and 2618 Type 2 awards. By FY2013, these had dropped to 3403 Type 1 awards and 1390 Type 2 awards. Note that these are new and competing awards, not all R01 awards (which would also include non-competing (Type 5) awards). The drop in the number of Type 1 awards is 25%, whereas the drop in the number of Type 2 awards is nearly twice as large (47%). One can speculate that this drop in the number of competing renewal awards since the ARRA bump may be a driving force behind concerns expressed in recent years by established investigators about the direness of the funding situation.

What accounts for these decreases in the number of awards? Of course, the NIH appropriation has not grown substantially since FY2003 and has dropped substantially when corrected for inflation. However, the average R01 size (total costs in nominal dollars) increased by 24% for Type 1 awards and only 16% for Type 2 awards (although the distributions should be examined to understand these increases more fully, as discussed in an earlier post). The drop in award numbers reflects the fact that total expenditures on R01s were $9.76 B in FY2003 and $9.80 B in FY2013, whereas the overall NIH appropriation was $26.74 B in FY2003 and $29.13 B in FY2013; that is, the fraction of the NIH budget going to R01s decreased over this period.
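
The arithmetic behind these statements, using only the figures quoted above, can be laid out in a few lines:

```python
# Worked arithmetic using the figures quoted in the post.

type1_2003, type1_2013 = 4564, 3403
type2_2003, type2_2013 = 2618, 1390

def pct_drop(old, new):
    return 100 * (old - new) / old

print(f"Type 1 drop: {pct_drop(type1_2003, type1_2013):.0f}%")   # ~25%
print(f"Type 2 drop: {pct_drop(type2_2003, type2_2013):.0f}%")   # ~47%

# Share of the NIH appropriation going to R01s (all R01 expenditures,
# competing plus non-competing), in $ billions:
r01_2003, nih_2003 = 9.76, 26.74
r01_2013, nih_2013 = 9.80, 29.13
print(f"FY2003 R01 share: {100 * r01_2003 / nih_2003:.1f}%")   # ~36.5%
print(f"FY2013 R01 share: {100 * r01_2013 / nih_2013:.1f}%")   # ~33.6%
```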

15 responses so far

Public Health Data-Project Tycho

Jun 19 2014 Published by under Uncategorized

It is remarkable to me that the US is dealing with localized outbreaks of infectious diseases such as measles and pertussis (whooping cough) given the availability and effectiveness of vaccines. Data that allow assessment of the impact of vaccines have recently become available through Project Tycho, a project at the University of Pittsburgh Graduate School of Public Health. The Project Tycho team obtained and digitized "all weekly surveillance reports of nationally notifiable diseases for U.S. cities and states published between 1888 and 2011." This data set, which includes almost 88 million cases, is freely available to researchers and the public, and the first set of results has been published as "Contagious Diseases in the United States from 1888 to the Present." A subscription is required for the New England Journal of Medicine article, but the information, and more, is available at the Project Tycho website. Some of the data entry was accomplished through a partnership with Digital Divide Data.

Below are figures available from the Project Tycho website showing trends for measles and pertussis.

ProjectTycho_visuals-Measles
ProjectTycho-Pertussis

Project Tycho is a great resource for research, teaching and advocacy and represents an innovative example of converting information that exists in principle into data that can be analyzed and extended.

2 responses so far

Institutional Distribution of NSF Graduate Research Fellowships

Jun 10 2014 Published by under Uncategorized

As a follow-up to my previous post on the institutional distribution of NIH training funds, I have analyzed one year's worth (FY2013) of data from the NSF Graduate Research Fellowship program, available through NSF FastLane. This data set lists those who were offered awards (recipients did not necessarily accept them if they got a "better offer," although this is likely rare), as well as their baccalaureate institution, their field of study, and their current institution.

The distributions of current institutions and baccalaureate institutions are shown below:

2013 NSF Fellow Institutions-2

As might have been anticipated, the distribution of current institutions is relatively narrow, with 25 institutions accounting for 50% of the 1842 fellows but with a total of 271 institutions represented by at least one fellow. The distribution of baccalaureate institutions is somewhat broader, with 58 institutions accounting for 50% of the fellows and 462 institutions contributing at least one fellow.
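
As a sketch of how the "N institutions account for 50% of the fellows" figures can be derived, the calculation below sorts per-institution fellow counts from largest to smallest and walks down the cumulative total. The counts shown are illustrative placeholders, not the actual FY2013 NSF GRFP numbers.

```python
# Sketch: given per-institution fellow counts, how many institutions account
# for half of all fellows? Counts are illustrative, not the actual data.

counts = [120, 95, 80, 60, 45, 30, 20, 10, 5, 3, 2, 1, 1]  # fellows per institution

counts = sorted(counts, reverse=True)
total = sum(counts)
running, n_institutions = 0, 0
for c in counts:
    running += c
    n_institutions += 1
    if running >= total / 2:
        break

print(f"{n_institutions} institutions account for 50% of {total} fellows")
```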

Of the 1842 fellows, 511 listed their field of study as Life Sciences in some form. The distribution of current institutions for the Life Science fellows correlates reasonably well with that for the overall pool, with a correlation coefficient of 0.82. However, some life science-focused institutions such as UCSF are represented at a higher level than in the fellow pool overall.

Further inspection revealed that the distribution of Life Science NSF fellows correlates reasonably well with the distribution of NIH F32 postdoctoral fellowships, as shown below:

2013-NSF-F32 graph

The correlation coefficient here is 0.80.
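
For completeness, the correlation coefficients quoted in this post can be computed from two per-institution count vectors aligned on institution name. The sketch below uses made-up vectors purely to illustrate the calculation; it does not reproduce the 0.80 value.

```python
import numpy as np

# Sketch: Pearson correlation between two per-institution award counts
# (e.g., NSF Life Science fellows vs. NIH F32 fellows). The vectors below are
# illustrative; in practice each entry would be one institution's counts.

nsf_fellows = np.array([40, 35, 28, 20, 12, 8, 5, 2])
f32_fellows = np.array([55, 30, 35, 18, 15, 6, 4, 3])

r = np.corrcoef(nsf_fellows, f32_fellows)[0, 1]
print(f"correlation coefficient = {r:.2f}")
```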

These distributions have implications with regard to what would happen if more graduate students were supported by individual fellowships rather than research grants or institutional training grants. In addition, these distributions suggest which institutions are most attractive to trainees who are likely to have the most options, depending on how one interprets the relationship between award probability and other factors.

(Updated with revised first figure)

7 responses so far

Training Funds (T32, F32, F31) Distributions by Institution

Jun 03 2014 Published by under Uncategorized

The recent commentary by Alberts, Kirschner, Tilghman, and Varmus, "Rescuing US biomedical research from its systemic flaws" (which discussed using training mechanisms in preference to research mechanisms to support graduate students, among other topics), as well as a recent post by DrugMonkey on individual postdoctoral NRSA fellowships (F32s), reactivated my interest in analyzing data on the distribution of training (as opposed to research) funds across institutions.

Below is a plot of the total T32 institutional training funds awarded to the 195 institutions that received at least some T32 funding in FY2013 versus the total NIH funding for FY2013 for these institutions:

T32 Institutional

These two parameters are highly correlated (correlation coefficient 0.94). Institutional training grant funds are also relatively concentrated, with approximately 50 institutions accounting for about half of the total annual institutional training grant investment. Some of these awards support predoctoral trainees, some postdoctoral trainees, and some a mixture. The distribution of the sizes of training grants as a function of the number of years for which they have been funded is shown below:
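
Both statements in the preceding paragraph, the correlation and the dollar-weighted concentration, follow the same pattern of calculation. The sketch below uses illustrative per-institution dollar amounts, not the actual FY2013 values.

```python
import numpy as np

# Sketch: per-institution T32 dollars vs. total NIH dollars (illustrative
# values, $ millions), the correlation between them, and the number of
# institutions accounting for half of the T32 total.

t32       = np.array([18.0, 15.0, 12.0, 9.0, 6.0, 3.0, 1.5, 0.5])
total_nih = np.array([600.0, 550.0, 400.0, 350.0, 200.0, 120.0, 60.0, 25.0])

r = np.corrcoef(t32, total_nih)[0, 1]

order = np.argsort(t32)[::-1]                  # largest T32 recipients first
cum_share = np.cumsum(t32[order]) / t32.sum()
n_half = int(np.searchsorted(cum_share, 0.5) + 1)

print(f"correlation coefficient = {r:.2f}")
print(f"{n_half} institutions account for half of T32 funds")
```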

T32 years size

Many of the largest T32 awards are Medical Scientist Training Program (MSTP, Combined MD/PhD) awards. These are highlighted on the plot for clarity.

Note that there is a trend of average award size increasing as a function of award duration, as shown below:

T32 size by year

This trend is driven in part by the large size and long duration of many MSTP awards, but it persists even when the MSTP awards are excluded.
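
One way to check that the trend survives excluding MSTP awards is to compute the mean award size by years funded with and without an MSTP flag. The column names and values in this sketch are assumptions about how the grant-level data might be organized, not the actual data.

```python
import pandas as pd

# Sketch: average T32 award size as a function of years funded, with and
# without MSTP awards. Values are illustrative ($ millions), not actual data.

grants = pd.DataFrame({
    "years_funded": [5, 5, 10, 15, 20, 25, 30, 35, 40],
    "award_size":   [0.2, 0.3, 0.3, 0.4, 0.5, 1.2, 1.5, 1.8, 2.0],
    "is_mstp":      [False, False, False, False, False, True, True, True, True],
})

mean_all     = grants.groupby("years_funded")["award_size"].mean()
mean_no_mstp = grants[~grants["is_mstp"]].groupby("years_funded")["award_size"].mean()

print(mean_all)
print(mean_no_mstp)
```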

In addition to institutional training awards, NIH also funds individual postdoctoral (F32) and predoctoral (F31) awards. The distribution of postdoctoral F32 awards across institutions is shown below:

F32 Institution Total-Names

While there is again a strong correlation with total institutional funding, it is less strong than that for T32 awards (correlation coefficient = 0.79 versus 0.94). There are outliers that have a larger number of F32 awards than one would expect from the general trend.

The distribution of predoctoral F31 individual training awards as a function of total institutional funding is shown below:
Institution Total F31-3

Again, the correlation is less pronounced than it is for T32s (correlation coefficient 0.79), but with outliers below rather than above the trend line. Two of these outliers are hospitals associated with Harvard Medical School, which is itself an outlier above the trend line. The others may be institutions where the fraction of students applying for F31 support is low, although this is just a conjecture.
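
One way to identify such outliers systematically is to fit a simple least-squares trend line of F31 count versus total NIH funding and flag institutions with large negative residuals. The institution names and numbers in this sketch are hypothetical, included only to show the calculation.

```python
import numpy as np

# Sketch: flag institutions whose F31 counts fall well below the overall
# trend versus total NIH funding. All names and values are illustrative.

total_funding = np.array([600.0, 550.0, 400.0, 350.0, 200.0, 120.0, 60.0])  # $ millions
f31_awards    = np.array([30, 26, 22, 8, 10, 6, 3])
names = ["Inst A", "Inst B", "Inst C", "Inst D", "Inst E", "Inst F", "Inst G"]

slope, intercept = np.polyfit(total_funding, f31_awards, 1)
residuals = f31_awards - (slope * total_funding + intercept)

threshold = -1.5 * residuals.std()            # flag large negative residuals
for name, resid in zip(names, residuals):
    if resid < threshold:
        print(f"{name}: {resid:.1f} awards below the trend line")
```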

Interestingly, the number of NIH Institutes and Centers participating in the parent F31 award announcement jumped to 23 in the current fiscal year from 13 at the time of the previous announcement. This may reflect NIH embracing the recommendation that more Ph.D. students be supported by fellowships rather than research grants.

25 responses so far