Cross-posted at Social (In)Queery
When nationally representative surveys addressing gender and sexual identities and practices first started appearing, most people had the same question. It was some derivation of, “How many gay/lesbian/bisexual/trans*/etc. people are there?” From a sociological perspective, though, that question rests on a fundamental misunderstanding of just how complicated it actually is to answer.
In 1994, Edward Laumann, John Gagnon, Robert Michael, and Stuart Michaels published an incredible book on one of the first nationally representative surveys of the American population concerning sexuality, sexual behavior, and sexual orientation–The Social Organization of Sexuality. In their chapter, “Homosexuality,” they begin a brief section on the “dimensions of sexuality” that encompasses some of my favorite findings from the study. In it, they write,
To quantify or count something requires unambiguous definition of the phenomenon in question. And we lack this in speaking of homosexuality. When people ask how many gays there are, they assume that everyone knows exactly what is meant. (here: 290)
Measuring the size of the LGBT population is difficult for more than a few reasons. I spend a week on the considerations of measuring sexuality in my Sociology of Sexualities course. During that week, we deal primarily with discussing the size of the LGBT population in the U.S., how this is measured, and both how and why measurements are likely skewed. Ritch Savin-Williams has a wonderful short analysis of how challenging it is to estimate the size of the LGB population (here) and Gary Gates’ estimates of the LGBT population are some of the most widely accepted.
In helping students understand exactly why sexuality might be a bit more slippery than they might have initially assumed, I show a set of Venn diagrams from Laumann et al.’s book that details the proportions of their sample who “report[ed] any adult same-gender sexuality” (see left). Of their sample of over 3,000 individuals, around 9% of women and 10% of men reported participating in some dimension of same-gender sexuality. The authors distinguish three separate dimensions of sexuality: acts, identities, and desires. What was so wonderful about their findings (and this visualization of them) was not that so many people identified with same-gender desires, but that so few of the participants who reported some dimension of same-gender sexuality reported all three. That is, of the 150 women and 143 men in their sample who reported identifying as gay, engaging in same-gender intimacy, and/or experiencing same-gender sexual desires, only 34 men and 23 women reported all three. (Follow the footnote for a classroom activity suggestion.)* What this shows is that the dimensions of sexuality are not only theoretically distinct; they’re practically distinct as well. People don’t “match up” in all the ways we’d expect because… well, because sexuality is simply more complicated than that.
According to Laumann et al., there are three dimensions of sexuality: sexual acts, sexual identities, and sexual desires. Surveys have asked questions dealing with each of these, producing a wild array of different answers to that deceptively simple-sounding question, “How many are there?” Perhaps not surprisingly, some of the highest estimates are associated with same-sex desires or attractions: surveys have shown that up to 11% of the population report these. Participation in same-sex intimacy is a close second, producing estimates of up to almost 9%. When asked directly whether they identify as “lesbian, gay, or bisexual,” however, the highest estimate on existing surveys is a little less than 6% of the population, and bisexual-identifying individuals make up a larger portion of this group than lesbians or gay men. Yet estimates for each of the dimensions vary enormously across surveys (see Gates’ two graphs below).
Ok. So, thus far we’ve established that measuring the size of any sexual population is probably more difficult than we might initially assume (unless we get really specific about who and what “counts”).
Here’s the next twist: existing surveys ask respondents directly about these (potentially) sensitive issues on the premise that, because surveys are both private and anonymous, people will give honest accounts of their acts, identities, and desires. A group of economists tested this proposition with respect to sexuality in an interesting new study, which suggests that this method may lead us to significantly underestimate the actual size of the LGBT population.
Katherine Coffman, Lucas Coffman, and Keith Marzilli Ericson recently released a new study as part of the National Bureau of Economic Research (NBER) Working Paper Series, “The Size of the LGBT Population and the Magnitude of Anti-Gay Sentiment are Substantially Underestimated.” It’s a provocative claim, and the research design is really interesting. They slightly modify a survey method (the item count technique, or ICT) shown to reduce social desirability bias. If you’re unfamiliar, social desirability bias is the tendency of respondents to answer questions in ways they think will be perceived favorably by others. It can take the form of over-reporting behaviors understood as “good,” or under-reporting behaviors understood as “bad.”
Just by way of example, there’s an interesting body of sociological research that set out to see whether people were answering the “How often do you attend church?” question accurately–or whether they were over-reporting church attendance. It turns out this is a question about which social scientists have documented a considerable amount of lying among respondents (here). It’s a significant finding for more than a few reasons, one of which is that church attendance is still a popular measure of religiosity. This doesn’t necessarily mean these people aren’t actually as religious as they say; but it might mean that what we’re actually measuring (at least when we use church attendance as a proxy for religiosity) has a lot less to do with how people actually behave and a lot more to do with how people would like others to believe they behave.
What this means is that asking people about their religious practices is a sensitive matter–so sensitive, in fact, that people are willing to lie about it even when they’re asked anonymously. But how do you catch the liars? Or, put another way, how can we assess whether or not respondents are accurately representing their acts, identities, and desires on surveys? The item count technique is designed to help negotiate sensitive issues in ways that might produce different (and potentially more reliable) information (see here for a review).
Using ICT, respondents are randomly assigned to a control group and an experimental group. Rather than responding to a set of survey questions one at a time, the control group is asked to report how many of N items are true for them (where the N items are non-sensitive in nature–like: I remember what I ate for breakfast 10 days ago). In their modified ICT method, the control group is subsequently asked the sensitive question directly (e.g., “Do you identify as heterosexual?”). The experimental group is presented with a similar survey. Rather than responding to the sensitive question directly, however, that question is incorporated as a statement into their list of N items–now N+1 items (e.g., “I do not identify as heterosexual.”). The authors refer to the control group as providing “direct reports” on sensitive questions, while the experimental group provides “veiled reports.” See below for a comparison of the treatments of the two groups:
Coffman, Coffman, and Ericson explain the value of their modified ICT methodology in this way:
Using this design, a researcher can never perfectly infer an individual’s answer to the sensitive item, so long as a respondent does not report that either 0 or N+1 items are true [and neutral items are selected and tested in ways that produce only a minority of respondents in either of these categories]… The ICT has typically been used to reduce the psychological cost of admitting an unacceptable behavior to an interviewer: Saying “three items” might be easier to say than “Yes, I cheat on my spouse.” (here)
Yet this study did not involve an interviewer; respondents were asked to fill out the survey in the privacy of their own homes. In other words, the authors found that a great deal of information relating to sexual identities, practices, and desires might be subject to social desirability biases “even under conditions of extreme anonymity,” like that provided by a traditional survey asking respondents to report sensitive information directly, privately, and anonymously.
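The mechanics of the veiled estimator are worth making concrete. Below is a minimal, hypothetical simulation sketch (not the authors’ actual data or code): it assumes a true prevalence of 19% for the sensitive item, four neutral items each true for a given respondent with probability 0.5, and 100,000 respondents per group. Because respondents report only a total count, no individual answer to the sensitive item is ever revealed, yet the difference in mean counts between the two groups recovers the prevalence of the sensitive item.

```python
import random

random.seed(42)  # make the simulation reproducible

# Assumed parameters for illustration only (not from the study).
TRUE_PREVALENCE = 0.19   # true share for whom the sensitive item is true
N_NEUTRAL = 4            # number of non-sensitive items on the list
GROUP_SIZE = 100_000     # respondents per group

def neutral_count():
    """How many of the N neutral items happen to be true for one respondent."""
    return sum(random.random() < 0.5 for _ in range(N_NEUTRAL))

# Control group: reports a count over the N neutral items only.
control = [neutral_count() for _ in range(GROUP_SIZE)]

# Experimental group: reports a count over N + 1 items, where the extra
# item is the sensitive statement (true with probability TRUE_PREVALENCE).
treatment = [
    neutral_count() + (random.random() < TRUE_PREVALENCE)
    for _ in range(GROUP_SIZE)
]

# The ICT estimator: the difference in mean item counts between the
# veiled (experimental) and control groups estimates the prevalence of
# the sensitive item, even though no single response discloses it.
estimate = sum(treatment) / GROUP_SIZE - sum(control) / GROUP_SIZE
print(f"estimated prevalence: {estimate:.3f}")
```

With samples this large, the estimate lands close to the assumed 19%; the same difference-in-means logic is what lets the study compare veiled estimates against direct-report rates to gauge the size of the social desirability gap.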
In the direct report treatment (the control group), roughly 11% of their sample reported that they do not consider themselves to be “heterosexual.” Under the veiled report treatment (the experimental group), this proportion jumped to 19%. Similarly, the proportion of their sample reporting having had a sexual experience with someone of the same sex jumped from 17% (direct report) to 27% (veiled report). They did not find a significant difference in answers to questions regarding same-gender attractions, though, off-hand, I would suggest that this might be because of the wording; I wonder if asking about same-gender “desires” would have produced a different result. (See Coffman, Coffman, and Ericson’s Table III below.)
Their sample is not nationally representative. When they discuss the characteristics of their sample, they state that it is younger, more educated, and more politically liberal than the U.S. population more generally. Because of this, they can’t generalize from these results. But, the focus of their article deals primarily with between-group comparisons of percentage changes in reporting. And here, they find substantial evidence that how people respond when asked directly about the three dimensions of sexuality Laumann and his colleagues outlined so well is dramatically impacted by how they are asked.
The findings are significant for a number of reasons. The one that comes to mind immediately is whether we’d really be able to employ this methodology at a national level to collect data on all of the “sensitive questions” we’d like to ask; I suspect this would not be possible. It also matters if we base assessments of population needs on inaccurate statistics about the size of different populations. If policies rely on estimates of the size of the LGBT population to allocate resources (for instance, health interventions, services for the elderly, workplace policies and programs, and programs for youth–e.g., mental health or suicide prevention programs), we might be dramatically under-serving a population in greater need than the available data suggest.
*Class Activity Suggestion: I often ask students to make sense of the zeros on the diagram. For instance, there are 0 women and 3 men in the “identity-only” cell of each diagram. Laumann and his colleagues made sense of the men this way: “No women reported homosexual identity alone. But there were three men who said that they considered themselves homosexual or bisexual even though they did not report desire or partners. This being an unlikely status, it is possible that these men simply misunderstood the categories of self-identification since none of them reported any same-gender experience or interest” (1994: 300). Translation: they found it inconceivable that someone would identify as gay without having a same-gender sexual interaction or at the very least experiencing same-gender sexual desires. This is, in itself, an incredibly powerful statement about the state of heteronormativity and sexual prejudice. Consider a similar Venn diagram for sexuality that dealt only with individuals reporting any dimension associated with heterosexuality. There might still be some zeros (or incredibly low proportions) in some cells. But my hunch is that the zeros would congregate in and around “Desire” rather than “Identity.” It would be a different group of people that was culturally unintelligible. Having a class discuss why this is the case is an interesting activity and–in my experience–gets students thinking about power and inequality.
**The Pew Center discusses the study and methodology on their “Fact Tank” blog here.