In response to:
Sex, Lies, and Social Science from the April 20, 1995 issue
To the Editors:
Professor Lewontin lobs grenades down from the high ground of biological science with deadly effects on some of the fatter targets of social science method. Indeed, uncorroborated survey reports about sexual activity and other sensitive matters do deserve limited credence. Consequently our ignorance about private behavior is much greater than social scientists like to pretend. I do hope but do not expect that social scientists will add this book review to their reading lists in quantitative methods.
On a much smaller target, Professor Lewontin’s aim is very slightly awry. Based on a 1-to-1 mapping argument, he states that the average number of heterosexual partners of females should equal the average number for males. Well, actually not. One reason lies in the fact that members of the present cohort can have partners from earlier or later cohorts. As an artificial example, suppose there were equal numbers of males and females with equal life expectancy, each taking a single partner for life, but males mated with older females. Then because of young females not yet partnered, males would have a higher average number of partners than females. (In fact, American males do report on average that they lose their virginity at lower ages than females.) Moreover, there are several other confounding influences: there are more females than males in the adult cohorts, females outlive males, and because of population growth newer cohorts are larger than older cohorts. Given that men claim 1.75 times as many sex partners as women claim, Professor Lewontin and the books under review are probably correct to infer a severe reporting bias, but rigorous proof awaits a detailed quantitative argument.
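The artificial example above can be put in numbers. Here is a minimal sketch in which every figure is hypothetical (a survey window of ages 20 to 60, one-year cohorts of equal size, and lifelong partnering that begins at male age 20 with a female five years older); it shows only how a fixed survey age window can pull the two averages apart even when every partnership is counted once on each side.

```python
# Hypothetical illustration of the cohort effect described above.
# Survey window: adults aged 20-60, in 41 equal one-year cohorts.
# Every male takes exactly one lifelong partner, a female five
# years his senior, starting when the male turns 20.

ages = range(20, 61)  # the 41 surveyed cohorts

# every surveyed male (aged 20-60) has already begun his partnership
male_partners = [1 for a in ages]

# a female's partner is five years younger; females aged 20-24 would
# pair with males aged 15-19, who have not yet started partnering
female_partners = [1 if a >= 25 else 0 for a in ages]

male_avg = sum(male_partners) / len(male_partners)
female_avg = sum(female_partners) / len(female_partners)

print(male_avg)    # 1.0
print(female_avg)  # 36/41, roughly 0.88
```

Within the survey window the male average exceeds the female average, even though the total number of partnerships is the same counted from either side; the "missing" partners sit outside the window's age limits.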
And on a target of a middling size, I believe Professor Lewontin is too pessimistic about future possibilities for obtaining reasonably well-founded information about human behavior in private. There are many promising improvements in survey methods (admittedly, rather costly ones) that we have barely begun to try. Thus, while there is evidence that surveys of long-term recollections are of limited value, diary and especially snapshot approaches are better (e.g. because they limit opportunities for self-deception). Also, we can sometimes gather data from multiple observers of a single private event (e.g. interviewing both sex partners separately). And we can set up experimental situations designed to bias responses one way or the other (e.g. using an apparently opinionated interviewer) and see how far the responses can be manipulated.
At the same time we can develop more inside checks on survey data, like the sex partner ratio example discussed above. More usefully, we can develop outside checks, for example calibrated models that work back and forth between micro data about private behavior (e.g. unprotected intercourse) and observable data such as public consequences (e.g. births, abortions, and AIDS cases) or experimentally testable rates (e.g. conceptions per acts of unprotected intercourse). An existing practical example is the comparison of market survey data with eventual sales outcomes so as to estimate the bias in market projections. If and when we finally do find out how to ask the questions in ways that make the survey data consistent with the available public data, then I believe we will have a reasonable warrant to rely on the survey data.
David Burress
Research Economist
Institute for Public Policy and Business Research
University of Kansas
Lawrence, Kansas
R.C. Lewontin replies:
Dr. Burress and Professor Stinchcombe both offer alternative explanations for the discrepancy between the number of sex partners reported by men and women in sex surveys. Burress’s cohort effect is another version of one of Laumann et al.’s suggestions, namely that men are having sex with women who are not covered in the survey. In the end, however, he agrees that it is “probably correct to infer a severe reporting bias,” so we do not differ. Stinchcombe, on the other hand, makes the interesting suggestion that men and women define sex acts differently and so cannot be expected to report the same number of partners. Unfortunately for Stinchcombe’s explanation, Laumann et al. have already covered that base. They asked about sex partners twice, once in the printed self-administered questionnaire (SAQ) and once in the face-to-face verbal interview. For the SAQ they did not define a sex act, leaving it to the respondent to understand what was being asked, but in the face-to-face interview they explicitly defined sexual activity as “mutually voluntary activity with another person that involves genital contact and sexual excitement or arousal, that is, feeling really turned on, even if intercourse or orgasm did not occur.” They found the responses in the two cases to be “very close” and regarded the results as a “reassuring finding about the accuracy of the questionnaire itself” (p. 176). Of course, it might be true that, despite instructions to the contrary, men and women still insist on defining sex differently, but that puts a sample survey in even more hot water. Or, perhaps, men and women really do define sex in the same way, but women are much less frequently “really turned on,” and so perceive themselves as having had fewer sex partners. Or perhaps … (“The task of filling in the blanks I’d rather leave to you.”) Stinchcombe is entirely correct that one cannot interpret a sample survey without a theory. The problem is, whose theory? 
I have, indeed, looked into my own science’s history and find a superabundance of theorizing about anomalies. The problem is not a want of theory but a want of evidence. If scientific advance really came from theorizing, natural scientists would have long ago wrapped up their affairs and gone on to more interesting matters.
The important issue that neither Burress nor Stinchcombe has addressed is not the discrepancy in report, but Laumann et al.’s reaction to it. It is they, not I, who claimed that men exaggerate and women minimize their sexual experiences (an explanation also offered in the French survey which had more to explain). If investigators themselves say that people are not reporting the truth about phenomena, then on what basis do they claim our serious attention to their findings? The willingness of both social and natural scientists to recognize contradictions in their findings, and then to ignore them when it is convenient, is a serious disease of inquiry.
Dr. Burress offers some suggestions for checking on the validity of survey responses, but they do not seem to help us. The idea that diaries will somehow reflect the truth of people’s lives is extraordinary. Are diaries not meant for other eyes? Remember the Tolstoys who left their diaries open on each other’s bedside tables. Even when diaries are only a form of talking to oneself, one may engage in an elaborate composition of a self-justificatory autobiography, much of it unconscious. Can he really demonstrate that diaries or even snapshots “limit opportunities for self-deception”? Who took the picture and why? To what extent are our family records of smiling children and indulgent parents in the Piazza San Marco part of our construction of a wished-for life? Burress does not tell us how the records of births, abortions, and AIDS can do more than tell us that some claims of virginity are not to be credited. It is important to distinguish acts that are public or leave public traces from those for which nothing but self-report is available. So, we know that people over-report church attendance because one can actually count the house, and nutritional surveys are notorious for their unreliability because it has been possible to paw through garbage to find out what people really eat. But these examples raise the question of why it is worthwhile to do a sample survey in the first place, if the information can be obtained by direct observation.
What is disturbing in this latest round of letters is the confirmation that some social scientists can be so insensitive to human motivation and behavior. Stinchcombe writes that “nearly everyone thinks it is quite justified to be what they are.” No, Professor Stinchcombe, nearly everyone is deeply apprehensive about what they are and that is why they try to convince themselves that they are something else. Manual workers become “middle class,” and New York Jewish Harvard professors speak with pseudo-English accents. Social science can only be a truly social science when it recognizes that individual lives are not only lived in a social context, but are created in it.
This Issue
June 8, 1995