
Maximizing Investigators' Research Awards for New and Early Stage Investigators: Gender and Race/Ethnicity Issues

Oct 04 2016 Published under Uncategorized

In a recent post, I noted that I had submitted a FOIA request for data regarding the NIGMS Maximizing Investigators' Research Awards (MIRA) for New and Early Stage Investigators. My major goal was to get some information about the applications that were administratively rejected. However, I also requested information about the demographics (gender and race/ethnicity) of the applicant and awardee pools. Here, I present an analysis of these data.

I need to provide some context. First, the data that I obtained are not quite complete, since they were obtained slightly before the end of the fiscal year; I do not believe that this will affect any of my conclusions. Second, there is a recent post on the NIGMS Feedback Loop that covers some of these same issues. This post states:

"In addition to ensuring that we are funding the highest quality science across areas associated with NIGMS’ mission, a major goal is to support a broad and diverse portfolio of research topics and investigators. One step in this effort is to make sure that existing skews in the system are not exacerbated during the MIRA selection process. To assess this, we compared the gender, race/ethnicity and age of those MIRA applicants who received an award with those of the applicants who did not receive an award, as well as with New and Early Stage Investigators who received competitive R01 awards in Fiscal Year (FY) 2015.

We did not observe any significant differences in the gender or race/ethnicity distributions of the MIRA grantees as compared to the MIRA applicants who did not receive an award. Both groups were roughly 25% female and included ≤10% of underrepresented racial/ethnic groups. These proportions were also not significantly different from those of the new and early stage R01 grantees. Thus although the MIRA selection process did not yet enhance these aspects of the diversity of the awardee pool relative to the other groups of grantees, it also did not exacerbate the existing skewed distribution."

Let's now turn to the data. With regard to gender, the results are as follows:

Male: Reviewed applications, not funded: 155; Awards: 63; Total applications reviewed: 218; Administratively rejected applications: 58

Female: Reviewed applications, not funded: 63; Awards: 19; Total applications reviewed: 82; Administratively rejected applications: 22

Unknown: Reviewed applications, not funded: 12; Awards: 8; Total applications reviewed: 20; Administratively rejected applications: 3

I will not discuss the "Unknown" category further (gender and race/ethnicity information is provided voluntarily).

From these numbers, we can calculate the following parameters: Success rate = Awards/Reviewed applications; Probability of administrative rejection = Administratively rejected applications/Total applications; All applications success rate = Awards/Total applications, where Total applications = Reviewed applications + Administratively rejected applications.

Male: Success rate = 28.9%, Probability of administrative rejection = 21.0%, All applications success rate = 22.8%

Female: Success rate = 23.2%, Probability of administrative rejection = 21.1%, All applications success rate = 18.3%
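For anyone who wants to check the arithmetic, here is a minimal Python sketch that computes these three parameters from the counts above and reproduces the percentages to within rounding (the variable names are mine, not from the FOIA spreadsheet):

```python
# Counts from the FOIA data: (reviewed but not funded, awards, administratively rejected)
counts = {
    "Male":   (155, 63, 58),
    "Female": (63, 19, 22),
}

for group, (not_funded, awards, admin_rejected) in counts.items():
    reviewed = not_funded + awards      # total applications reviewed
    total = reviewed + admin_rejected   # all applications, including administrative rejections
    success_rate = awards / reviewed
    rejection_probability = admin_rejected / total
    all_applications_success = awards / total
    print(f"{group}: success rate {success_rate:.1%}, "
          f"administrative rejection {rejection_probability:.1%}, "
          f"all applications success {all_applications_success:.1%}")
```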

Although these differences are not statistically significant, the success rate and the all applications success rate trend in favor of males over females, while the probability of administrative rejection is essentially the same for the two groups. If these percentages persisted with larger sample sizes, the differences could become significant.

We now turn to information about the self-identified races of applicants. The categories are: White, Asian, African American, Native American, Multiracial, Unknown, and Withheld. Since the NIH FOIA policy is not to release information for cells that contain 10 or fewer individuals, I did not obtain precise data for African American, Native American, Multiracial, Unknown, or Withheld individuals. Thus, I will present the data as White, Asian, and Other (corresponding to African American, Native American, Multiracial, Unknown, or Withheld). Note that the numbers for the "Other" category can be deduced since the overall total for each category is given.

The data are as follows:

White: Reviewed applications, not funded: 118; Awards: 63; Total applications reviewed: 181; Administratively rejected applications: 33

Asian: Reviewed applications, not funded: 71; Awards: 16; Total applications reviewed: 87; Administratively rejected applications: 34

Other: Reviewed applications, not funded: 41; Awards: 11; Total applications reviewed: 52; Administratively rejected applications: 16
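As a check on the deduced numbers for the Other category: the gender data give overall totals of 155 + 63 + 12 = 230 reviewed-but-not-funded applications, 63 + 19 + 8 = 90 awards, and 58 + 22 + 3 = 83 administrative rejections. Subtracting the White and Asian counts leaves 230 − 118 − 71 = 41, 90 − 63 − 16 = 11, and 83 − 33 − 34 = 16 for the Other category.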

In compiling these totals, I noticed that there are no rows for African American, Native American, or Multiracial in the Awards (Applications Funded) category, whereas there are for the other categories. This suggests that there were no awardees who identified as African American, Native American, or Multiracial.

The parameters deduced from these categories are as follows:

White: Success rate = 34.8%, Probability of administrative rejection = 15.4%, All applications success rate = 29.4%

Asian: Success rate = 18.4%, Probability of administrative rejection = 28.1%, All applications success rate = 13.2%

Other: Success rate = 21.1%, Probability of administrative rejection = 23.5%, All applications success rate = 16.2%

The differences between the White and Asian results are striking. The difference between the success rates (34.8% versus 18.4%) is statistically significant, with a p value of 0.006. The difference between the all applications success rates (29.4% versus 13.2%) is also statistically significant, with a p value of 0.0008. Finally, the difference between the probabilities of administrative rejection (15.4% versus 28.1%) is statistically significant, with p = 0.007.

The differences between the White and Other category results are less pronounced but also favor White applicants. The difference between the success rates (34.8% versus 21.1%) is not statistically significant, although it is close, with a p value of 0.066. The difference between the all applications success rates (29.4% versus 16.2%) is statistically significant, with a p value of 0.004. Finally, the difference between the probabilities of administrative rejection (15.4% versus 23.5%) is not statistically significant (p = 0.14), although the trend favors White applicants.
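For readers who want to check these calculations, a Fisher's exact test or a chi-squared test on the corresponding 2×2 tables of counts gives p values in this range (the exact values depend on the test chosen and on whether a continuity correction is applied). A minimal sketch using scipy:

```python
from scipy.stats import chi2_contingency, fisher_exact

def compare(label, group_a, group_b):
    """Compare two groups, each given as (count with outcome, count without outcome)."""
    table = [list(group_a), list(group_b)]
    _, p_fisher = fisher_exact(table)
    _, p_chi2, _, _ = chi2_contingency(table, correction=False)
    print(f"{label}: Fisher p = {p_fisher:.4f}, chi-squared p = {p_chi2:.4f}")

# Success rate comparisons: (awards, reviewed but not funded)
compare("White vs Asian, success rate", (63, 118), (16, 71))
compare("White vs Other, success rate", (63, 118), (11, 41))

# Administrative rejection comparison: (administratively rejected, reviewed)
compare("White vs Asian, administrative rejection", (33, 181), (34, 87))
```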

I personally find it difficult to reconcile these data with the statements in the NIGMS Feedback Loop post. Again, the post states:

"We did not observe any significant differences in the gender or race/ethnicity distributions of the MIRA grantees as compared to the MIRA applicants who did not receive an award. Both groups were roughly 25% female and included ≤10% of underrepresented racial/ethnic groups."

There are statistically significant differences in the race distributions of the MIRA grantees as compared with the MIRA applicants who did not receive an award, with a higher proportion of White relative to Asian individuals among the grantees than among those who were not funded. These differences are sufficiently large that they are unlikely to be dramatically affected by the applicants with unknown or withheld race.

The statement that "both groups were roughly 25% female" is true, but, from the available data, the grantee pool was 21.1% female (19 of 90 awardees) and the pool of those not receiving an award was 27.4% female (63 of 230 reviewed but not funded). These numbers are somewhat uncertain because of the number of applicants with unknown gender. However, there appears to be a trend disfavoring applications from females.

The data obtained through the FOIA do not allow a critical analysis of the comments about underrepresented racial/ethnic groups. However, it appears that there were no awards to applicants who identified as African American, Native American, or Multiracial, based on the missing rows in the spreadsheet that I obtained through FOIA.

Every parameter that I examined favors White applicants or male applicants over the other groups. I find it quite discouraging that NIGMS chose to present these outcomes in a somewhat distorted and superficial manner rather than presenting the data more fully and engaging the community in trying to understand the bases for these apparent biases.

Updated:  I corrected several typographical errors.
