In recent posts, I presented data on trends in the percentage of A0 applications among funded R01 grants, showed how this percentage differed between the NIH-wide population and a select group (members of a section of the National Academy of Sciences), and examined variations in R01 success rates across NIH Institutes and Centers. Here, I bring these threads together to look at how the percentage of A0 applications among funded R01 grants varies across NIH Institutes and Centers, and then extend the analysis to see how these percentages vary across different universities and other institutions.
Curves showing the percentages of A0 applications among funded R01 grants for six NIH institutes are shown below:
The top two curves, which lie above the NIH-wide curve, are from NIGMS and NEI, two institutes that have had relatively high R01 success rates. The remaining curves are for four large institutes with lower success rates.
The average percentages of A0 applications among funded R01 grants over the period from FY2001 to FY2013 for these institutes are compared with the average R01 success rates over the same period in the table below:
| Institute | Average fraction A0 | Average success rate |
|-----------|---------------------|----------------------|
| NIGMS | 0.556 | 0.290 |
| NEI | 0.533 | 0.325 |
| NCI | 0.380 | 0.205 |
| NIAID | 0.399 | 0.232 |
| NHLBI | 0.457 | 0.235 |
| NICHD | 0.389 | 0.175 |
These parameters are highly correlated, with a correlation coefficient of 0.90. The slope of the line fit to these data is 0.54 ± 0.09. Thus, as a rule of thumb, the average success rate is approximately one half of the average percentage of A0 applications among funded R01 grants over this period.
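For anyone who wants to check these numbers, here is a minimal sketch in Python that reproduces them from the table above. Constraining the fit through the origin is my assumption, but it matches the quoted slope of 0.54 and the "one half" rule of thumb.

```python
import numpy as np

# Values transcribed from the table above (FY2001-FY2013 averages),
# in the order NIGMS, NEI, NCI, NIAID, NHLBI, NICHD.
pct_a0  = np.array([0.556, 0.533, 0.380, 0.399, 0.457, 0.389])
success = np.array([0.290, 0.325, 0.205, 0.232, 0.235, 0.175])

# Pearson correlation coefficient (~0.90).
r = np.corrcoef(pct_a0, success)[0, 1]

# Least-squares slope for a line constrained through the origin
# (success = slope * pct_a0); the constraint is an assumption,
# but it reproduces the ~0.54 slope quoted above.
slope = np.sum(pct_a0 * success) / np.sum(pct_a0 ** 2)

print(f"r = {r:.2f}, slope = {slope:.2f}")
```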
These data are extended to all ICs below, with R01 success rates for individual fiscal years plotted against the percentages of A0 applications among funded R01s for those years:
As could be anticipated, there is more scatter in these data, given the factors that influence both parameters as well as the changes in policy over this period. Nonetheless, a correlation is observed, with a correlation coefficient of approximately 0.4 and a best-fit line with a similar slope.
The observation of this correlation suggested that the percentage of A0 applications among funded R01 grants could serve as a publicly available parameter for examining the experiences of average investigators at different extramural institutions. The average percentages of A0 applications among funded R01 grants over the period from FY2001 to FY2014 were calculated for the 100 institutions that received the most NIH funding in FY2013. The distribution of these averages is shown below:
The distribution includes a number of institutions with relatively high percentages of A0 applications; these are labelled in the figure above. Examination reveals that these are largely basic science-focused institutions with substantial hard-money support for salaries, including prominent schools of arts and sciences or engineering. This observation is supported by the fact that Princeton University (which is not in the top 100 institutions in terms of overall NIH funding) has a high percentage of A0 applications (62.2%).
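For those interested in the mechanics, here is a sketch of how such percentages can be tabulated from public data. The file name and column names are assumptions based on the NIH ExPORTER project files, so the exact layout may differ; the key point is that a funded resubmission carries an amendment code such as "A1" at the end of its full project number, whereas an A0 award does not.

```python
import pandas as pd

# Sketch of the %A0 tabulation, assuming an ExPORTER-style project CSV
# (hypothetical file name) with ACTIVITY, APPLICATION_TYPE,
# FULL_PROJECT_NUM, and ORG_NAME columns.
df = pd.read_csv("RePORTER_PRJ_FY2001_2014.csv", low_memory=False)

# Keep competing R01 awards only (type 1 = new, type 2 = renewal);
# type 5 non-competing continuations would count each grant repeatedly.
r01 = df[(df["ACTIVITY"] == "R01") & (df["APPLICATION_TYPE"].isin([1, 2]))]

# An amendment code ("A1", "A2") at the end of the full project number
# marks a funded resubmission; its absence marks an A0 award.
is_a0 = ~r01["FULL_PROJECT_NUM"].str.contains(r"A\d$", na=False)

pct_a0_by_org = is_a0.groupby(r01["ORG_NAME"]).mean()
print(pct_a0_by_org.sort_values(ascending=False).head(10))
```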
Curves for individual institutions are compared below. These include the institution with the highest percentage (Rockefeller), an institution with one of the highest percentages but not an outlier (UCSF), an institution in the center of the distribution (University of Pennsylvania), and an institution at the bottom edge of the distribution (Wayne State University).
Based on the correlation with success rates, these data suggest that the average success rate for investigators at Rockefeller is approximately 50% higher than the NIH-wide average.
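As a rough check of this estimate (taking the NIH-wide average percentage of A0 applications to sit near the middle of the institutional distribution, around 0.44, which is my assumption): the rule-of-thumb slope gives 0.54 × 0.666 ≈ 0.36 for Rockefeller versus 0.54 × 0.44 ≈ 0.24 NIH-wide, a ratio of approximately 1.5.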
Two thoughts.
It would be interesting, but perhaps unknowable, to separately track real A0s and virtual A2s.
Am I remembering correctly that at least one IC was using a looser payline for A0s for a while?
Unfortunately, distinguishing virtual A2s from true A0s is likely impossible, even inside NIH with access to the applications (although it is likely that NIH is trying to figure out how to do this).
I believe NHLBI did this for a while (shortly after the "Enhancing Peer Review" report).
Any chance you can provide a list showing which institutions correspond to the rest of the bars?
Here you go...
ORGANIZATION Fraction of A0 applications
ROCKEFELLER UNIVERSITY 0.666
CALIFORNIA INSTITUTE OF TECHNOLOGY 0.652
MASSACHUSETTS INSTITUTE OF TECHNOLOGY 0.617
HARVARD UNIVERSITY 0.591
UNIVERSITY OF CALIFORNIA BERKELEY 0.565
CORNELL UNIVERSITY 0.560
HARVARD UNIVERSITY (MEDICAL SCHOOL) 0.545
FRED HUTCHINSON CAN RES CTR 0.528
DANA-FARBER CANCER INST 0.524
RUTGERS THE ST UNIV OF NJ NEW BRUNSWICK 0.524
STANFORD UNIVERSITY 0.520
NORTHWESTERN UNIVERSITY 0.503
CHILDREN'S HOSPITAL CORPORATION 0.501
UNIVERSITY OF CALIFORNIA SAN FRANCISCO 0.488
JACKSON LABORATORY 0.487
UNIVERSITY OF WASHINGTON 0.484
YALE UNIVERSITY 0.484
UNIV OF NORTH CAROLINA CHAPEL HILL 0.476
DUKE UNIVERSITY 0.476
UT SOUTHWESTERN MEDICAL CENTER 0.475
WEILL MEDICAL COLL OF CORNELL UNIV 0.474
BRIGHAM AND WOMEN'S HOSPITAL 0.473
MAYO CLINIC ROCHESTER 0.473
SLOAN-KETTERING INST CAN RES 0.473
UNIVERSITY OF UTAH 0.472
CINCINNATI CHILDRENS HOSP MED CTR 0.472
UNIVERSITY OF ILLINOIS URBANA-CHAMPAIGN 0.472
HARVARD UNIVERSITY (SCH OF PUBLIC HLTH) 0.470
CLEVELAND CLINIC LERNER COM-CWRU 0.470
COLUMBIA UNIVERSITY HEALTH SCIENCES 0.467
BAYLOR COLLEGE OF MEDICINE 0.465
BROWN UNIVERSITY 0.465
SCRIPPS RESEARCH INSTITUTE 0.463
OREGON HEALTH & SCIENCE UNIVERSITY 0.463
RESEARCH TRIANGLE INSTITUTE 0.463
UNIVERSITY OF CALIFORNIA IRVINE 0.462
STATE UNIVERSITY NEW YORK STONY BROOK 0.460
UNIVERSITY OF CALIFORNIA SAN DIEGO 0.459
UNIVERSITY OF CHICAGO 0.457
JOHNS HOPKINS UNIVERSITY 0.455
WASHINGTON UNIVERSITY 0.452
UNIVERSITY OF VIRGINIA 0.452
ST. JUDE CHILDREN'S RESEARCH HOSPITAL 0.451
UNIV OF MASSACHUSETTS MED SCH WORCESTER 0.450
MASSACHUSETTS GENERAL HOSP 0.446
UNIVERSITY OF WISCONSIN-MADISON 0.444
VANDERBILT UNIVERSITY MED CTR 0.443
MICHIGAN STATE UNIVERSITY 0.443
UNIVERSITY OF PENNSYLVANIA 0.442
UNIVERSITY OF CALIFORNIA LOS ANGELES 0.441
SANFORD-BURNHAM MEDICAL RESEARCH INSTIT 0.439
UNIVERSITY OF SOUTHERN CALIFORNIA 0.438
DARTMOUTH COLLEGE 0.437
UNIVERSITY OF MICHIGAN 0.436
MOUNT SINAI SCHOOL OF MEDICINE 0.434
UNIVERSITY OF ROCHESTER 0.434
UNIVERSITY OF TEXAS, AUSTIN 0.431
UNIVERSITY OF CALIFORNIA DAVIS 0.429
CASE WESTERN RESERVE UNIVERSITY 0.429
UNIVERSITY OF CINCINNATI 0.425
ALBERT EINSTEIN COLLEGE OF MEDICINE 0.424
WAKE FOREST UNIVERSITY HEALTH SCIENCES 0.423
PENNSYLVANIA STATE UNIVERSITY 0.420
PENNSYLVANIA STATE UNIVERSITY-UNIV PARK 0.420
NORTHWESTERN UNIVERSITY AT CHICAGO 0.419
UNIVERSITY OF VERMONT & ST AGRIC COLLEGE 0.418
UNIVERSITY OF PITTSBURGH AT PITTSBURGH 0.416
UNIVERSITY OF IOWA 0.416
TULANE UNIVERSITY OF LOUISIANA 0.415
UNIVERSITY OF ARIZONA 0.412
EMORY UNIVERSITY 0.411
UNIVERSITY OF FLORIDA 0.411
TUFTS UNIVERSITY BOSTON 0.411
UNIVERSITY OF MINNESOTA 0.408
UNIVERSITY OF ILLINOIS 0.404
UNIVERSITY OF TEXAS MEDICAL BR GALVESTON 0.402
UNIVERSITY OF MARYLAND BALTIMORE 0.397
NEW YORK UNIVERSITY SCHOOL OF MEDICINE 0.395
UNIVERSITY OF TEXAS HLTH SCI CTR HOUSTON 0.395
UNIVERSITY OF MIAMI SCHOOL OF MEDICINE 0.394
BOSTON UNIVERSITY MEDICAL CAMPUS 0.393
UNIVERSITY OF CONNECTICUT SCH OF MED/DNT 0.393
UNIVERSITY OF ALABAMA AT BIRMINGHAM 0.388
INDIANA UNIV-PURDUE UNIV AT INDIANAPOLIS 0.388
BETH ISRAEL DEACONESS MEDICAL CENTER 0.386
MEDICAL COLLEGE OF WISCONSIN 0.385
UT MD ANDERSON CANCER CTR 0.383
UNIVERSITY OF COLORADO DENVER 0.378
CHILDREN'S HOSP OF PHILADELPHIA 0.376
STATE UNIVERSITY OF NEW YORK AT BUFFALO 0.367
UNIVERSITY OF KANSAS 0.364
UNIV OF TX HSC, SA 0.360
TEMPLE UNIV OF THE COMMONWEALTH 0.360
UNIVERSITY OF KENTUCKY 0.359
UNIVERSITY OF NEBRASKA MEDICAL CENTER 0.346
VIRGINIA COMMONWEALTH UNIVERSITY 0.343
OHIO STATE UNIVERSITY 0.330
MEDICAL UNIVERSITY OF SOUTH CAROLINA 0.329
WAYNE STATE UNIVERSITY 0.320
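For a quick sense of the spread in this list, here is a small sketch (the file name is hypothetical, assuming the list above has been saved as a two-column text file). The middle half of the distribution spans only roughly 0.41 to 0.47.

```python
import numpy as np

# Summarize the spread of the institutional fractions listed above,
# assuming they were saved to a two-column text file ("pct_a0.txt",
# hypothetical name) with the organization name followed by the fraction.
vals = []
with open("pct_a0.txt") as fh:
    for line in fh:
        parts = line.rsplit(None, 1)   # split off the trailing number
        try:
            vals.append(float(parts[-1]))
        except (ValueError, IndexError):
            pass                       # skip the header and blank lines

q1, med, q3 = np.percentile(np.array(vals), [25, 50, 75])
print(f"n = {len(vals)}, median = {med:.3f}, IQR = {q1:.3f}-{q3:.3f}")
```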
I think you are identifying an aspect of reputational bias here that has consequences far beyond appearances.
I agree that these data are quite interesting and consequential, and the role of reputational bias is potentially important. Indeed, the relative role of bias versus application characteristics is a key issue. When I was doing the analysis, I was struck by the frequency of schools of arts and sciences and engineering among the top-ranked institutions. One hypothesis (which can be tested, as sketched below) is that many investigators at such institutions function with single R01 grants (since their salaries are covered, at least 75%, by their institutions and students can be partially supported by TA positions). With single R01s and substantial other resources, the fraction of Type 2 (competing renewal) applications would be expected to be higher than at other institutions with different "business models," with the associated higher success rates.
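One way to test this, sketched against the same hypothetical ExPORTER-style extract used in the earlier sketch, would be to compare the fraction of Type 2 (competing renewal) awards among competing R01s across institutions:

```python
import pandas as pd

# Fraction of Type 2 (competing renewal) awards among competing R01s
# per institution, using the same hypothetical ExPORTER-style CSV
# and column names as the earlier sketch.
df = pd.read_csv("RePORTER_PRJ_FY2001_2014.csv", low_memory=False)
r01 = df[(df["ACTIVITY"] == "R01") & (df["APPLICATION_TYPE"].isin([1, 2]))]

renewal_frac = (r01["APPLICATION_TYPE"] == 2).groupby(r01["ORG_NAME"]).mean()
print(renewal_frac.sort_values(ascending=False).head(10))
```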
Thanks for the additional info. I have been staring at this for a while, trying to make sense of the trends. Even looking just at the subset of institutions with large clinical faculties, I am surprised by the lack of correlation between overall NIH funding and A0 success rate. Are some schools just throwing more darts at the dartboard, succeeding because of the sheer number of grants submitted (and resubmitted) rather than average quality?
There are several factors to consider. First, the variations in the percentage of A0 applications in the funded R01 pool are relatively small across most of the distribution, so considerable scatter might be expected. For example, if you look at the curves for UCSF (near the top of the distribution with %A0 = 0.488) and the University of Pennsylvania (in the middle with %A0 = 0.442), they are not very different, with the largest differences during the doubling period. Second, other factors contribute to overall NIH funding (large centers, contracts, etc.), so some institutions have substantial funding even with relatively modest R01 portfolios. For example, George Washington University is often in the top 100 in overall NIH funding due to two large (>$10M) awards but has only a small number of R01s, with a %A0 of < 0.25. Finally, I think investigators at many institutions are "throwing darts," given the randomness of peer review with funding levels where they are. I certainly know of investigators and organized units within institutions where submission of a proposal for essentially every NIH grant cycle is the expectation.
You seem to be buying into the common belief that submitting apps every round means lower quality (and therefore lower success rates) with this comment, Datahound. Any evidence for this belief that you know of, or are we in truthy-land again?
Unfortunately, I do not have real data, just personal experience and anecdotes. On the personal experience front, I know of investigators who apply relatively infrequently, but work extremely hard on their proposals and almost always receive outstanding scores (top 5 %ile). I also know of investigators who are pushed to submit essentially every cycle and rarely receive fundable scores. I did not mean to imply that application quality correlates strongly with application rate.
It would be interesting to have data from within NIH but I do not know if anyone with access to such data has looked at this issue.
The people who "always get 5%ile scores" of my acquaintance have quite a number of attributes other than "working really hard" on their applications. Having reviewed many proposals across a wide swath of investigator types, in my limited experience I am highly confident that lots of less-than-perfect grants get low percentile scores on the basis of PI attributes other than their grant crafting. So again, it is not well justified to suggest, as you do here, that exquisite grant preparation makes the difference. It is a traditional theme, true, but it is wrong. Not only that, but it is *perniciously* and *offensively* wrong from the perspective of those who are limited by their "other attributes" but not by their effort or writing chops.
That is why I ask if you have data. Or anything other than the usual bias associated with thinking that the successful within the system are uniquely deserving in an objective analysis. Because my experience reviewing the grants of the wonderfully successful and the struggling newb says this is not true.
I worded my response poorly. I did not mean to imply that "working really hard" on their applications was the source of their success. The folks that I was referring to are established and highly productive scientists who also work hard in crafting their proposals. I have also seen established and productive scientists who do not work as hard to craft their proposals, some of whom are doing okay in the present environment and some of whom are struggling.
In my opinion, success within the system is a mix of "being deserving" by being a productive and creative scientist, biases that work in the favor of some investigators due to institution, pedigree, etc., and (particularly these days) random factors that can be the difference between a very good score and a fundable score.
Point being, if PIs are differentially dis/advantaged by reputational bias, fewer or more apps correlating with success rate is not a causal relationship; both are driven by the bias. Or, if you credit that populations of PIs are distinguished by objective grant-quality traits that generalize across all of their apps, there is still a third-variable driver of the application-rate/success relationship. Neither of those scenarios tells an individual that changing his or her submission rate will change the success rate.
Are there any stats on the number of ESI apps funded in a given year per institute?
Unfortunately, I do not believe that such data are available (but they should be).