More on Institutional Distributions of the Fraction of A0 Applications within the Pool of Funded R01 Grants

Dec 01 2014

In my previous post, I presented (among other things) the distribution of the percentage of A0 applications within the pool of funded R01 grants across institutions. This distribution showed that, among the top 100 institutions in terms of total NIH funding in fiscal year 2013, some institutions had notably higher fractions of A0 applications among their funded R01 grants. This supports the notion that these institutions enjoy higher success rates for their R01 applications, and the A0 fraction may be a useful surrogate since NIH does not release success rates by institution.

Before I present further analyses of these data, I need to note some technical issues. The data in the NIH RePORTER database can be challenging for longitudinal analysis across institutions. While the data are relatively clean with regard to the use of institutional names within a given year, they are less clean from one year to the next for many institutions. Institutional names can vary through the use of different abbreviations, punctuation, and so on. For example, the University of Michigan is listed as UNIVERSITY OF MICHIGAN AT ANN ARBOR for 2001-2006 and UNIVERSITY OF MICHIGAN for 2007-2014. This required constructing a data set with these ambiguities removed as much as possible. In addition, in my first analysis I determined the percentage of funded A0 applications among R01 grants for each year and then averaged these values. A more robust approach is to determine the total number of funded A0 applications across all years and then divide this value by the total number of R01 grants over the same period. I will use this method for all subsequent analyses.
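The pooled approach described above can be sketched in a few lines of Python. This is a minimal illustration, not my actual processing code: the record fields (`org`, `amendment`) and the alias table are hypothetical stand-ins for whatever RePORTER export one is working from.

```python
from collections import defaultdict

# Hypothetical alias map, built by hand, to reconcile institution-name
# variants across years (entries here are illustrative only).
ALIASES = {
    "UNIVERSITY OF MICHIGAN AT ANN ARBOR": "UNIVERSITY OF MICHIGAN",
}

def canonical(name):
    """Map a RePORTER institution string to one canonical name."""
    name = name.strip().upper()
    return ALIASES.get(name, name)

def fraction_a0(records):
    """Pooled A0 fraction per institution: total funded A0s across all
    years divided by total funded R01s, not a mean of yearly rates."""
    counts = defaultdict(lambda: [0, 0])  # inst -> [a0_count, total]
    for rec in records:
        inst = canonical(rec["org"])
        counts[inst][1] += 1
        if rec["amendment"] == 0:  # A0 = application never amended
            counts[inst][0] += 1
    return {inst: a0 / total for inst, (a0, total) in counts.items()}
```

The pooled ratio weights every grant equally, so an institution's small-count years no longer contribute noisy yearly percentages to an unweighted average.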

In considering why some institutions show a higher percentage of A0 applications among their funded R01 grants, one important fact to keep in mind is that the success rates for competing renewal (Type 2) applications are generally higher (typically by approximately a factor of 2) than those for new (Type 1) applications. For example, in FY2013, the overall R01 success rate across NIH was 17%. This breaks down as a success rate of 14% for new (Type 1) applications and 31% for competing renewal (Type 2) applications. This observation suggests two possible explanations for the higher overall percentage of A0 applications among R01 grants at some institutions. First, the percentage of competing renewal (Type 2) grants among the funded R01s might be larger for some institutions than for others, and this might account for the higher percentage of A0 applications overall in the grant pool. Second, the percentage of A0 applications among R01 grants might be higher even for new (Type 1) applications at these institutions. Of course, these explanations are not mutually exclusive; they are essentially independent and may or may not be correlated through institutional characteristics.
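As a back-of-the-envelope check on the FY2013 rates quoted above: if the overall success rate is the application-weighted average of the Type 1 and Type 2 rates (which follows from the definitions, up to rounding in the published figures), the implied share of Type 2 applications in the pool can be solved for directly.

```python
# Published FY2013 R01 success rates (rounded): overall, Type 1, Type 2.
s_overall, s_type1, s_type2 = 0.17, 0.14, 0.31

# Solve s_overall = (1 - w) * s_type1 + w * s_type2 for w,
# the fraction of applications that were competing renewals.
w_type2 = (s_overall - s_type1) / (s_type2 - s_type1)
print(f"Implied Type 2 share of R01 applications: {w_type2:.1%}")
# -> about 17.6%
```

That is, roughly one in six R01 applications NIH-wide was a competing renewal, so even modest institutional differences in the Type 2 share could move the overall A0 fraction noticeably.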

The distribution of funded Type 2 applications among funded R01 grants across institutions is shown below:

Fraction T2 histogram

The institutions with the highest fractions of Type 2 applications are:

1 MASSACHUSETTS INSTITUTE OF TECHNOLOGY
2 JOSLIN DIABETES CENTER
3 ROCKEFELLER UNIVERSITY
4 BRANDEIS UNIVERSITY
5 UNIVERSITY OF CALIFORNIA BERKELEY
6 CORNELL UNIVERSITY
7 INDIANA UNIVERSITY BLOOMINGTON
8 UNIVERSITY OF OREGON
9 PRINCETON UNIVERSITY
10 STATE UNIVERSITY NEW YORK STONY BROOK
11 JACKSON LABORATORY
12 HARVARD UNIVERSITY (MEDICAL SCHOOL)
13 UNIVERSITY OF CALIFORNIA SANTA CRUZ
14 HARVARD UNIVERSITY
15 SCRIPPS RESEARCH INSTITUTE
16 TUFTS UNIVERSITY BOSTON
17 NEW YORK UNIVERSITY
18 COLORADO STATE UNIVERSITY
19 UNIVERSITY OF WISCONSIN-MADISON
20 OREGON HEALTH & SCIENCE UNIVERSITY
21 UNIVERSITY OF COLORADO
22 PENNSYLVANIA STATE UNIVERSITY
23 UNIVERSITY OF ILLINOIS URBANA-CHAMPAIGN
24 CALIFORNIA INSTITUTE OF TECHNOLOGY
25 RUTGERS

At the other end of the distribution, institutions with relatively low fractions of Type 2 applications in their funded R01 pools include a number of major clinical centers including Seattle Children's Hospital (0.167), Cincinnati Children's Hospital (0.225), M.D. Anderson Cancer Center (0.268), Massachusetts General Hospital (0.285), and Brigham and Women's Hospital (0.297). This may reflect the lower rate of submission of Type 2 applications for clinical (as opposed to basic science) studies noted in an NIH publication from 2008.

The distribution of the percentage of A0 applications within the pool of R01 grants across institutions is shown below:

Type 1 Percent A0 histogram

The institutions with the highest percentages of A0 applications among their funded R01 grants are:

1 ROCKEFELLER UNIVERSITY
2 PRINCETON UNIVERSITY
3 CALIFORNIA INSTITUTE OF TECHNOLOGY
4 UNIVERSITY OF CALIFORNIA SANTA CRUZ
5 SALK INSTITUTE FOR BIOLOGICAL STUDIES
6 J. DAVID GLADSTONE INSTITUTES
7 MASSACHUSETTS INSTITUTE OF TECHNOLOGY
8 COLD SPRING HARBOR LABORATORY
9 HARVARD UNIVERSITY
10 CORNELL UNIVERSITY
11 UNIVERSITY OF CALIFORNIA BERKELEY
12 MOREHOUSE SCHOOL OF MEDICINE
13 FRED HUTCHINSON CAN RES CTR
14 DANA-FARBER CANCER INST
15 CHILDREN'S HOSPITAL CORPORATION
16 COLUMBIA UNIV NEW YORK MORNINGSIDE
17 UNIVERSITY OF COLORADO
18 INDIANA UNIVERSITY BLOOMINGTON
19 STANFORD UNIVERSITY
20 HARVARD UNIVERSITY (MEDICAL SCHOOL)
21 RUTGERS THE ST UNIV OF NJ NEW BRUNSWICK
22 OREGON HEALTH & SCIENCE UNIVERSITY
23 UNIVERSITY OF CHICAGO
24 CINCINNATI CHILDRENS HOSP MED CTR
25 BROWN UNIVERSITY

Note that 10 institutions appear on both lists. Indeed, these two parameters are substantially correlated as shown below:

T1-Percent A0 vs Frac T2 plot

The correlation coefficient between these two parameters is approximately 0.4. These data reveal that both factors contribute to the increased percentage of A0 applications among funded R01 grants at some institutions.
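The correlation reported above can be reproduced with a standard Pearson coefficient over the per-institution pairs (Type 2 fraction, A0 percentage). A minimal standard-library sketch, with the institutional data left as hypothetical inputs:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Usage (values per institution are placeholders, not the real data):
# r = pearson_r(frac_type2_by_inst, pct_a0_by_inst)
```

With r of about 0.4, the Type 2 fraction explains only around 16% of the variance (r squared), which is consistent with both factors contributing.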

These analyses provide some data that may help sort out the factors behind the higher percentage of funded A0 applications among R01 grants at some institutions, whether the reputations and seniority of the applicants, institutional biases in peer review, or other influences. I welcome comments about how these analyses might be extended.

6 responses so far

  • Dave says:

    ....factors that contribute to the success of applications such as the reputations and seniority of the applicants, institutional biases in peer review, and other factors

    Right. There is probably a strong argument to be made that these institutions just accumulate the most competitive applicants, even junior ones. I'm sure this is what you were getting at, but I'm just pointing it out. It would be very difficult, if not impossible, to tease out this factor from other, more subjective ones such as peer review bias.

  • anonymous postdoc (shrewshrew) says:

    I find these analyses fascinating, in a large part because I am recalibrating my thinking on some of the institutions included on these lists.

    I find myself wondering if inclusion on this list is a clue that the institution provides an environment conducive to increased grant success overall, and could theoretically be incorporated into one's assessment of these institutions. (supportive environment: hard money salaries, low or no teaching load, collegial environment conducive to sharing and receiving feedback on grants from colleagues. Only the first thing is semi-knowable and quantitative, I would imagine.)

    Alternatively, this could be true only for research funded by a few ICs per institution. I would imagine that most Hutchinson grants are funded by NCI and thus may not provide a supportive environment for NICHD-oriented grants, for example. But is Rockefeller excelling across the board?

    Or, this could be true only for a few rock stars pulling in many R01s over the 13 years of analysis in an otherwise average institution? Is it worth looking at these data normalized to the number of PIs these grants are awarded to? (Or driven by many rock stars, at institutions who run their faculty recruitment like they are the Yankees. Does Harvard do well because the institution is supportive, or because they recruit the already-successful?)

    • datahound says:

      I am glad you are enjoying the posts.

      I am working on some of the points you raise, particularly the distribution of R01s across PIs at each institution. Stay tuned.

  • DJMH says:

    IU Bloomington??!?

    • datahound says:

      Out of 194 competing R01 grants awarded to faculty members at IU Bloomington from 2001-2014, 93 were competing renewals (Type 2). This speaks to how this statistic generally favors schools dominated by arts and sciences rather than medical schools, as can be seen by a number of the other institutions on this list.

  • Anonymous Consultant says:

    This analysis is interesting. Interpretation of some of these results might require some social science to get at the question of why these differences come about. Anonymous postdoc brought up the question of the supportive institutional environment. I think that's true in some cases (and in my direct experience with several of the institutions on the Type 2 list). Research development is a growing field, providing support for faculty beyond filling out forms and doing budgets. Some institutions provide people like me to help improve grant skills. Other institutions have completely 'sink-or-swim' environments. The latter are where I hear a potentially different, or additional, explanation for the lower number of Type 2 R01s at medical schools. It might not just be the number of clinical studies.

    I hear attitudes from basic scientists such as, "Say whatever will get you funded, and then use that grant to do whatever you want, like that cool, risky thing that would never get past study section. Then don't try to renew it, because you won't be able to show progress on the funded aims. But who cares? Only put in new applications." In other places, that thought is anathema, but I've heard sentiments like this, independently, enough times that I'm curious about the sociology of the grant game, and the impact of local environment.

    It's also worth noting that my work with focus groups 5 years ago showed a reviewer desire for Type 2s to have something significantly different from the previous award, and that "drilling down into your system" was considered less interesting, and less fundable, than it had been in the past. I can only imagine that trend has continued. Some applicants have responded by deciding that if the next application must take a substantively different direction, then it might as well be a new project. Why then waste one of the 12 precious pages on a progress report?

    I realize these are my qualitative results and anecdata, but I get to see more different environments than an individual faculty member. I'd be interested if anyone has looked into the sociology of different research/grant-chasing environments, and how it plays into choices of pursuing a longer research line with renewed grants, or only proposing 'new' ideas.
