Maximizing Investigators' Research Awards for New and Early Stage Investigators: Gender and Race/Ethnicity Issues

Oct 04 2016 | Uncategorized

In a recent post, I noted that I had submitted a FOIA request for data regarding the NIGMS Maximizing Investigators' Research Awards (MIRA) for New and Early Stage Investigators. My major goal was to get some information about the applications that were administratively rejected. However, I also requested information about the demographics (gender and race/ethnicity) of the applicant and awardee pools. Here, I present an analysis of these data.

Before turning to the data, I need to provide some context. First, the data that I obtained are not quite complete, since they were obtained slightly before the end of the fiscal year. I do not believe that this affects any of my conclusions. Second, a recent post on the NIGMS Feedback Loop covers some of these same issues. That post states:

"In addition to ensuring that we are funding the highest quality science across areas associated with NIGMS’ mission, a major goal is to support a broad and diverse portfolio of research topics and investigators. One step in this effort is to make sure that existing skews in the system are not exacerbated during the MIRA selection process. To assess this, we compared the gender, race/ethnicity and age of those MIRA applicants who received an award with those of the applicants who did not receive an award, as well as with New and Early Stage Investigators who received competitive R01 awards in Fiscal Year (FY) 2015.

We did not observe any significant differences in the gender or race/ethnicity distributions of the MIRA grantees as compared to the MIRA applicants who did not receive an award. Both groups were roughly 25% female and included ≤10% of underrepresented racial/ethnic groups. These proportions were also not significantly different from those of the new and early stage R01 grantees. Thus although the MIRA selection process did not yet enhance these aspects of the diversity of the awardee pool relative to the other groups of grantees, it also did not exacerbate the existing skewed distribution."

Let's now turn to the data. With regard to gender, the results are as follows:

Male: Reviewed applications, not funded: 155; Awards: 63; Total applications reviewed: 218; Administratively rejected applications: 58

Female: Reviewed applications, not funded: 63; Awards: 19; Total applications reviewed: 82; Administratively rejected applications: 22

Unknown: Reviewed applications, not funded: 12; Awards: 8; Total applications reviewed: 20; Administratively rejected applications: 3

I will not discuss the "Unknown" category further (gender and race/ethnicity information is provided voluntarily).

From these numbers, we can calculate the following parameters: Success rate = Awards/Reviewed applications; Probability of administrative rejection = Administratively rejected applications/Total applications; All applications success rate = Awards/Total applications (where Total applications = Reviewed applications + Administratively rejected applications)

Male: Success rate = 28.9%, Probability of administrative rejection = 21.0%, All applications success rate = 22.8%

Female: Success rate = 23.2%, Probability of administrative rejection = 21.1%, All applications success rate = 18.3%

Although these differences are not statistically significant, the success rate and the all-applications success rate trend in favor of males over females, while the probabilities of administrative rejection are essentially identical. If these percentages persisted at larger sample sizes, the differences could become significant.
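For readers who want to check the arithmetic, the three parameters can be reproduced from the raw counts with a short script. This is just a sketch; the dictionaries below restate the FOIA counts given above, excluding the "Unknown" category.

```python
# Counts from the FOIA data, excluding the "Unknown" gender category
male = {"not_funded": 155, "awards": 63, "rejected": 58}
female = {"not_funded": 63, "awards": 19, "rejected": 22}

def rates(g):
    reviewed = g["awards"] + g["not_funded"]  # applications that went to review
    total = reviewed + g["rejected"]          # reviewed + administratively rejected
    return (g["awards"] / reviewed,    # success rate
            g["rejected"] / total,     # probability of administrative rejection
            g["awards"] / total)       # all-applications success rate

for name, g in [("Male", male), ("Female", female)]:
    print(name, [f"{r:.1%}" for r in rates(g)])
```

This reproduces the percentages above to within rounding.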

We now turn to information about the self-identified races of applicants. The categories are: White, Asian, African American, Native American, Multiracial, Unknown, and Withheld. Since the NIH FOIA policy is not to release information for cells that contain 10 or fewer individuals, I did not obtain precise data for African American, Native American, Multiracial, Unknown, or Withheld individuals. Thus, I will present the data as White, Asian, and Other (corresponding to African American, Native American, Multiracial, Unknown, or Withheld). Note that the numbers for the "Other" category can be deduced since the overall total for each category is given.

The data are as follows:

White: Reviewed applications, not funded: 118; Awards: 63; Total applications reviewed: 181; Administratively rejected applications: 33

Asian: Reviewed applications, not funded: 71; Awards: 16; Total applications reviewed: 87; Administratively rejected applications: 34

Other: Reviewed applications, not funded: 41; Awards: 11; Total applications reviewed: 52; Administratively rejected applications: 16

In compiling these totals, I noticed that there are no rows for African American, Native American, or Multiracial individuals in the Awards (Applications Funded) category, whereas such rows are present for the other categories. This suggests that there were no awardees who identified as African American, Native American, or Multiracial.

The parameters deduced from these categories are as follows:

White: Success rate = 34.8%, Probability of administrative rejection = 15.4%, All applications success rate = 29.4%

Asian: Success rate = 18.4%, Probability of administrative rejection = 28.1%, All applications success rate = 13.2%

Other: Success rate = 21.1%, Probability of administrative rejection = 23.5%, All applications success rate = 16.2%

The differences between the White and Asian results are striking. The difference between the success rates (34.8% versus 18.4%) is statistically significant, with a p value of 0.006. The difference between the all-applications success rates (29.4% versus 13.2%) is also statistically significant, with a p value of 0.0008. Finally, the difference between the probabilities of administrative rejection (15.4% versus 28.1%) is statistically significant, with p = 0.007.

The differences between the White and Other results are less pronounced but also favor White applicants. The difference between the success rates (34.8% versus 21.1%) is not statistically significant, although it is close, with a p value of 0.066. The difference between the all-applications success rates (29.4% versus 16.2%) is statistically significant, with a p value of 0.004. Finally, the difference between the probabilities of administrative rejection (15.4% versus 23.5%) is not statistically significant (p = 0.14), although the trend favors White applicants.
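The post does not state which test produced these p values. As a sketch, a two-sided Fisher exact test on the corresponding 2×2 tables, built from the counts above and implemented with only the standard library, gives p values of the same order for the White-versus-Asian comparisons:

```python
from math import comb

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher exact p value for the 2x2 table [[a, b], [c, d]].
    Sums the probabilities of all tables with the same margins that are
    no more likely than the observed table."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d
    def p(x):  # hypergeometric probability that cell (1,1) equals x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = p(a)
    return sum(p(x) for x in range(max(0, col1 - row2), min(col1, row1) + 1)
               if p(x) <= p_obs * (1 + 1e-9))

# White vs. Asian success rates: awards vs. reviewed-but-not-funded
p_success = fisher_two_sided(63, 118, 16, 71)
# White vs. Asian all-applications success rates: awards vs. all other outcomes
p_all = fisher_two_sided(63, 214 - 63, 16, 121 - 16)
# White vs. Asian administrative rejection: rejected vs. reviewed
p_reject = fisher_two_sided(33, 181, 34, 87)
print(p_success, p_all, p_reject)
```

All three comparisons come out well below the conventional 0.05 threshold, consistent with the significance claims in the text, even if the exact p values depend on the test chosen.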

I personally find it difficult to reconcile these data with the statements in the NIGMS Feedback Loop post. Again, that post states:

We did not observe any significant differences in the gender or race/ethnicity distributions of the MIRA grantees as compared to the MIRA applicants who did not receive an award. Both groups were roughly 25% female and included ≤10% of underrepresented racial/ethnic groups.

There are statistically significant differences in the race distributions of the MIRA grantees as compared with the applicants who did not receive an award, with White individuals overrepresented relative to Asian individuals among the grantees. These differences are sufficiently large that they are unlikely to be dramatically affected by the applicants with unknown or withheld race.

The statement that "both groups were roughly 25% female" is true but, from the available data, the grantee pool was 21.1% female and the pool of those not receiving an award was 27.4% female. These numbers are somewhat uncertain because of the number of applicants with unknown gender. However, there appears to be a trend disfavoring applications from females.
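The 21.1% and 27.4% figures follow from counting the Unknown-gender applicants in the denominators. That convention is an assumption on my part, but it reproduces the figures from the counts above:

```python
# Gender counts from the FOIA data, including the "Unknown" category
grantees = {"male": 63, "female": 19, "unknown": 8}
no_award = {"male": 155, "female": 63, "unknown": 12}

def female_share(g):
    # female applicants as a fraction of all applicants in the group
    return g["female"] / sum(g.values())

print(f"{female_share(grantees):.1%}")  # 21.1%
print(f"{female_share(no_award):.1%}")  # 27.4%
```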

The data obtained through the FOIA do not allow a critical analysis of the comments about underrepresented racial/ethnic groups. However, it appears that there were no awards to applicants who identified as African American, Native American, or Multiracial, based on the missing rows in the spreadsheet that I obtained through FOIA.

Every parameter that I examined favors White applicants over other racial groups and males over females. I find it quite discouraging that NIGMS chose to present these outcomes in a somewhat distorted and superficial manner rather than presenting the data more fully and engaging the community in trying to understand the bases for these apparent biases.

Updated:  I corrected several typographical errors.

19 responses so far

  • Kate says:

    Thanks so much for doing this... And thus continues the "death by a thousand cuts" for women and racial minorities in academia.

  • John says:

    Could the differences be a result of grantsmanship? We are talking about solely new investigators with limited grant writing experience. At the risk of sounding racist, could it be that the "Asian" category might be at a disadvantage at clearly communicating science in English? Just a thought but something that should be looked into and advised.

    • datahound says:

      This is certainly one reasonable hypothesis. The numerical conclusion is that Asian applicants did not fare as well. Language issues, grantsmanship, and other possibilities may account for this observation. My point is that the difference needs to be acknowledged, investigated, and understood.

      • Pinko Punko says:

I am definitely glad to see these analyses. Thanks, datahound. I saw that you tweeted that the NIH did better on the recent Director's awards, but there seems to be such a limitation on institutional diversity there. I wonder if these same issues are also apparent in MIRA awards.

For the MIRA awards, is there any way to look at the distribution of applicant numbers for regular R01 versus those that applied for baby MIRA? I suspect that there also may be fewer women applying than might be expected; that wouldn't be very good either.

    • Mary says:

      Um, no. Do you know what a MIRA is? To be eligible to apply, one has to have had at least two active R01's from NIGMS. This excludes new investigators.

      It is quite possible that grantsmanship played a role-it always does. But this was not only not "solely new investigators", but actually excluded new investigators.

      • datahound says:

        Mary: This version of MIRA was only for new and early stage investigators with no other substantial NIH support. The first pilot for MIRA was for those with 2 or more active NIGMS R01s.

  • Jeremy, thanks for this analysis. It's disappointing that we can't take the NIGMS at its word. Acknowledging the issue is always step one.

  • HRC_academic says:

    Thanks so much, Jeremy Berg, for conducting this insightful analysis and adding evidence to my hunch of potential implicit bias against Asians and other minorities as of late in the NIH grant game, now that grant competition is so fierce. I realize that I'm an anecdote as one of those Asian Americans trying but not winning one of these MIRA, and not because I'm deficient in grantsmanship that a "John" proposes as a limitation for the Asian category.

    Gee, John, way to go in finding a "less-racial" justification for the clear bias in the racial breakdown of these awardees! It's not "race", it's "grantsmanship"! Of course! John, not only did you risk, but you succeeded marvelously in sounding racist. Bravo!

According to John, an Asian who gets a coveted but extremely rare academic assistant professor position applying for a MIRA must likely be "fresh-off-the-boat". Can't clearly communicate science in English?! Jeremy, how can you possibly entertain John by saying "This is certainly one reasonable hypothesis." Do you really think this is reasonable, considering what it takes to get a faculty job these days?

    Look, I can churn the grants out as well as my non-Asian colleagues, and I have as respectable a set of credentials, glamour-pubs, and pedigree to go toe-to-toe to the rest of the pool, as I would expect many of my other Asian contemporaries. I bet we stand as qualified a grant writer as John here. But ultimately, what is the make-up of these Special Emphasis panels, if not inherently skewed for the prevailing class, where Implicit Bias is a real thing?

    • datahound says:

      I meant that, in the absence of other information, John's suggestion was a plausible hypothesis. From an NIH perspective, it is testable by looking at the reviews and applications from different groups of investigators. From outside NIH, such an analysis would not be possible. Implicit bias is also a plausible hypothesis that could also be tested to some degree as well. I will try to find the rosters for the review panels.

      • KarenF says:

        Another plausible hypothesis for the discrimination against women could be a lower display of confidence in the writing, which could be considered grantsmanship. As in day-to-day interactions, women are not socialized to be assertive and direct. Perhaps more tentativeness and passive voice leads to less confidence in their written ideas and subsequently their scores. Just a thought....

      • John says:

        Datahound - thanks for taking my response at face value and understanding what I proposed.

HRC_Academic, have you reviewed proposals for the NIH? Have you reviewed proposals for colleagues? I have done both and have noted, on average, that foreign PIs have difficulty clearly conveying their message. Does this mean that EVERY foreign PI is incapable? Obviously not! But I would suspect that the clarity argument holds more water than implicit bias. Especially at the earlier stages of one's career!

        As datahound stated, this could easily be addressed on the inside. However, one way we could possibly tell if "grantsmanship" played a role from the outside is to look at the breakdown of proposals that were triaged. Not sure if this information is available but I would expect that implicit bias would play a larger role in the discussion of proposals and not the initial scoring.
        Therefore, if a larger number of Asian applications were triaged I think you would have a stronger case for the argument I present.

        • HRC_academic says:

          John - your world view is quite patronizing, despite your attempt to convey a veiled sense of "objectivity".

          Yes, I have reviewed proposals for NIH, NSF, and international agencies like ERC, Human Frontiers. I have also reviewed lots of proposals from colleagues and trainees. I have grants from the NIH, including an R01, and been around the block so to speak. But as an Asian, I am also hypersensitive to the implicit biases we face, including being called the "model minority". Your responses are clear examples of this bias that remains to this day.

          I will grant you that foreign TRAINEES are hampered by English grammar deficiencies. I will also concur that foreign early career scientists may not fully appreciate the "cultural undertones" that frame the seasoned American study section reviewers' mindset. But do you really think white early career scientists are more clued into this than non-white scientists applying for this one very narrow-eligibility MIRA competition - restricted just to EC/NI applicants?

          Jeremy Berg's analysis shows a whopping ~50% disparity in success rates for this special MIRA competition between whites and asians. That is a huge disparity, one that I cannot accept is mainly driven by deficiencies in grammar and grantsmanship from that "big" of a pool of "foreign" asians who get so far as an academic appointment but still cannot write coherent english in their grants. Come on!

          Go see the comments on DrugMonkey's blog post where an NIA panel showed that non-white applicants had *fewer* grammatical errors in their grants, and that "grantsmanship" deficiencies have been repeatedly raised to deflect the Ginther report's findings of racial bias in the NIH grant system.

          I hope you will eventually be enlightened by this discussion.

          • drugmonkey says:

            What factors do you think would drive racial bias in rejecting applications for not meeting the scope/intent of baby MIRA?

  • jmz4 says:

    Can you clarify what "administrative rejection" means?

  • datahound says:

    Administrative rejection means that the application was returned without review. According to a response to my question on the NIGMS Feedback Loop, the possible reasons are:

    The proposed research was outside the NIGMS mission.
    The applicant was not a New or Early Stage Investigator.
The applicant received other R01-equivalent NIH support after submission of the MIRA application that changed their eligibility.
    The applicant received support from another funder (e.g., NSF) and that funder deemed the MIRA application to be overlapping with their grant, requiring withdrawal of the MIRA application.

    I have asked for the relative proportions of these reasons but no response has been posted as of yet.

  • Jeffrey Gray says:

    Thank you for doing and sharing the analysis Jeremy. This is disappointing. Have you done the same analysis on the senior-level MIRAs?
