Wednesday, August 14, 2024

Social Desirability Biased Responding: Are Researchers Listening? - Juniper Publishers

 

Psychology and Behavioral Science - Juniper Publishers

Abstract

In surveys that rely on self-reported behaviors, under-reporting and over-reporting are common and often extreme. This includes surveys of energy intake, body weight, drug and alcohol use, sexual behaviors, and voting. In one national study that compared respondents’ answers with actual measurements, nearly 40% of high school students over-reported their heights by 3 inches or more. The major cause of misreporting is social desirability biased responding, which refers to respondents’ conscious misreporting to make themselves look better. Having respondents answer questions anonymously is only minimally effective in reducing misreporting. Reliable survey tools for assessing social desirability have been available for decades, yet researchers who rely on self-reported behaviors continue to ignore the issue. In recent years, fewer than 5% of survey studies published in sexuality, ethics, and accounting journals controlled for social desirability biased responding. Public policy can only be as effective as the truthfulness of the data on which it is based. Researchers conducting survey research are urged to include a measure of social desirability bias in their experimental design.

Keywords: Survey Research; Self-reported Behaviors; Misreporting; Social Desirability; Test Validity; Political surveys

Misreporting in Health and Behavioral Science Research

The Centers for Disease Control and Prevention (CDC) biennially conducts a survey of U.S. high school students’ risky behaviors, the Youth Risk Behavior Survey (YRBS) [1]. These behaviors include self-reported use of alcohol, drugs, and tobacco, as well as dietary and sexual behaviors. However, in a study of the validity of their own results, CDC researchers took actual measurements after the survey and found that high school students, on average, over-reported their heights by 2.7 inches, with 39.5% over-reporting by at least 3 inches [2]. Many students under-reported their body weight. The net effect was that for 12.7% of students, body mass index was under-reported by at least 5 kg/m².
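To see how misreported height and weight compound in this statistic, recall that body mass index is weight in kilograms divided by the square of height in meters. The sketch below (in Python, with hypothetical numbers chosen only for illustration; they are not taken from the CDC study) shows how adding 3 inches of height and dropping 15 pounds of weight understates BMI by nearly 5 kg/m²:

    # Illustrative only: hypothetical measurements (not CDC data) showing
    # how misreported height and weight compound in body mass index.
    # BMI = weight (kg) / height (m)^2

    IN_TO_M = 0.0254    # inches to meters
    LB_TO_KG = 0.4536   # pounds to kilograms

    def bmi(weight_lb: float, height_in: float) -> float:
        """Body mass index in kg/m^2, from pounds and inches."""
        return (weight_lb * LB_TO_KG) / (height_in * IN_TO_M) ** 2

    measured = bmi(180, 66)  # measured:      5'6", 180 lb -> ~29.0
    reported = bmi(165, 69)  # self-reported: 5'9", 165 lb -> ~24.4
    print(f"BMI under-reported by {measured - reported:.1f} kg/m^2")  # ~4.7

Because height enters the formula squared, even a modest exaggeration of height magnifies the understatement of BMI.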

Numerous other studies using the gold standard (actual measurements vs. self-reports) have also found that many people, not just high school students, under-report their energy intake and body weight, often by 30% or more, and over-report their height [3-13]. In one study, up to 14% of people under-reported their energy intake to such an extent that they were called “extreme under-reporters” [14]. The under-reporting of energy intake and body weight is so common and often extreme that one group of researchers concluded that self-reports of energy intake are “fundamentally and fatally flawed” [4, p. 911]. Another researcher called these self-reported data “implausible” [15]. The misreporting is only minimally due to bad memory [4]. For example, many adults with obesity also under-report on inventories of high-calorie foods in their homes [16]. Instead, there is “robust evidence of social desirability bias” [6, p. 198].

Social desirability biased responding refers to “the need of [individuals] to obtain approval by responding in a culturally appropriate manner” [17, p. 353]. The higher one’s level of social desirability, the more likely he or she is to over-report desirable behaviors and under-report undesirable behaviors on surveys of personal behaviors. Concerns about social desirability biased responding were first expressed over 90 years ago [18]. The component of social desirability biased responding that is of greatest concern to researchers is called impression management, which refers to respondents who consciously under- or over-report to make themselves look better [19]. The degree of misreporting depends on the sensitivity of the issue, the mode of data collection and the interviewer’s characteristics (e.g., face-to-face interview versus anonymous testing), and the wording of questions [20-21].

Several studies have observed statistically significant correlations between the degree of under-reporting of energy intake and the level of social desirability [22-27]. Under-reporting of smoking, alcohol and illicit drug use, and adolescent reckless driving is also common and significantly correlated with social desirability [28-33].

Studies have found that social desirability response bias also affects self-reports about HIV serostatus and risky sexual behaviors (e.g., receptive anal intercourse) [34-36]. Men over-report their use of condoms [37-39] and erect penis size [40] and under-report their engagement in extramarital affairs [41]; these misreports, too, are associated with high social desirability scores [36, 40-41].

An early sexuality study found that respondents’ answers changed when they were told that the questions would be repeated while they took a polygraph test [42]. In another study, over half of adolescents denied ever having had a sexually transmitted infection, yet hospital records indicated that they had been treated for one [43]. There is ample evidence of under- or over-reporting for many other sexual behaviors [44]. In health research, qualitative studies are “very susceptible” to social desirability biased responding [45]. Deliberate misreporting is also common in behavioral science research unrelated to health. For example, in the field of political science, self-reported voter turnout in national elections has far exceeded actual turnout for decades [46-48]. Educated people who express an interest in politics are the most likely to over-report. The over-reporting of voter turnout is attributed to socially desirable responding [49]. In the 2016 presidential election, people who were more inclined to comply with social norms were less likely to express support for Trump in preelection polls, yet many evidently voted for him [50].

On ballot measures regarding same-sex marriage, opposition on election day is 5% to 7% greater than is found in preelection polls [51].

Social desirability bias also affects expressed racial attitudes in political surveys [52] and attitudes about restrictive immigration [53]. Social desirability biased responding is found in many cultures. For example, in low-income African countries, men are more likely to oppose women’s political rights when they are interviewed by a man [54]. In another African study, people gave different answers depending on whether the interviewer was from the same ethnic group [55]. Among non-pregnant Indian women, self-reported use of smokeless tobacco was 20.6% lower when interviews were conducted with their husbands present [56].

In summary, conscious misreporting for self-reported behaviors is common and frequently extreme. In a review of anthropology studies, respondent misreporting was called “a well-kept open secret” [57, p. 504].

Assessing for Social Desirability Biased Responding

Researchers who conduct surveys have long assumed that if respondents are allowed to answer questions anonymously, they will answer honestly. However, studies have shown that answering questions anonymously only minimally reduces social desirability biased responding [58]. The CDC’s YRBS has respondents answer questions anonymously, yet recall that 39.5% of high school students over-reported their height by at least 3 inches [2]. How likely is it that these same individuals were truthful when answering sensitive questions about their use of drugs and alcohol and their experiences with risky sexual behaviors? Today, unproctored computer-assisted self-administered techniques have replaced the standard paper-and-pencil survey, but a meta-analysis of these studies found that they are no better at reducing social desirability biased responding [59].

There are some excellent scales to measure social desirability bias. The most widely used is the Marlowe-Crowne Scale, a 33-item scale that can be used in all fields of research [17]. For brevity, a 13-item short form is available [60]. There is also a 20-item scale developed specifically to measure impression management, the Balanced Inventory of Desirable Responding [61], but recent research shows that the Marlowe-Crowne Scale may still be superior [62]. It is not the intent of this paper to assess which technique is better, but instead to acknowledge that several techniques are available to researchers for assessing social desirability bias.
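Scales of this kind are typically scored by summing the items answered in the keyed, socially desirable direction. A minimal scoring sketch follows (Python); the item keying shown is hypothetical, and in practice the published key for the chosen scale would be used:

    # A minimal scoring sketch for a true/false social desirability scale.
    # Each item is keyed so the socially desirable answer earns 1 point.
    # The keying below is hypothetical -- use the scale's published key.

    from typing import Dict, Set

    def score_scale(responses: Dict[int, bool], keyed_true: Set[int]) -> int:
        """Count items answered in the socially desirable direction.

        responses:  item number -> respondent's True/False answer
        keyed_true: items whose desirable answer is True (all other
                    items are keyed False, i.e., reverse-scored)
        """
        return sum(1 for item, answer in responses.items()
                   if answer == (item in keyed_true))

    # Hypothetical 5-item example: items 1 and 4 are keyed True.
    print(score_scale({1: True, 2: False, 3: True, 4: True, 5: False},
                      keyed_true={1, 4}))  # -> 4

Higher totals indicate a stronger tendency toward socially desirable responding and flag respondents whose self-reports warrant adjustment.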

Regardless of which measurement tool is used, logistic regression can be used to adjust raw scores [63]. In brief, the researcher measures “socially desirable response tendency [e.g., using the Marlowe-Crowne Scale] alongside a measure of interest and then adjusts raw scores on that measure by an amount commensurate with the degree of socially desirable responding” [34, p. 97].
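As a concrete illustration of this adjustment logic, the sketch below (Python with NumPy) residualizes a continuous self-report measure on the social desirability score; this is a simplified linear variant shown for brevity, whereas the cited method applies logistic regression when the measure of interest is binary. All data in the example are simulated:

    # A minimal sketch of bias adjustment, assuming a continuous measure:
    # regress raw scores on the social desirability (SD) score and keep
    # the residuals, re-centered on the sample mean, as adjusted scores.
    # (The cited approach uses logistic regression for binary measures.)

    import numpy as np

    def adjust_for_sd(raw: np.ndarray, sd: np.ndarray) -> np.ndarray:
        """Remove the component of `raw` predicted by the SD score."""
        X = np.column_stack([np.ones_like(sd), sd])     # intercept + SD
        beta, *_ = np.linalg.lstsq(X, raw, rcond=None)  # OLS fit
        return raw - X @ beta + raw.mean()              # re-centered residuals

    # Simulated data: self-reported weekly drinks vs. Marlowe-Crowne scores.
    rng = np.random.default_rng(0)
    sd = rng.integers(0, 34, size=100).astype(float)
    raw = 10.0 - 0.2 * sd + rng.normal(0, 2, size=100)

    adjusted = adjust_for_sd(raw, sd)
    print(round(np.corrcoef(adjusted, sd)[0, 1], 4))  # ~0.0: bias removed

After adjustment, the measure of interest is uncorrelated with the social desirability score, so group comparisons are no longer confounded by differences in impression management.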

Are Researchers Listening?

The CDC says that “educators, parents, local decision makers… use YRBSS data to… develop local and state policy” [64, p. 1]. However, policy can only be as effective as the truthfulness of the data on which it is based. While the CDC makes a great effort to obtain a nationally representative sample, it gives only passing mention to its previous findings of extreme under- and over-reporting (“the extent… cannot be determined,” p. 11). The CDC’s research group knows their respondents’ answers are often untruthful but continues to present them as fact.

The YRBS is just one example of the large nationally representative surveys that ask sensitive questions but include no measure of social desirability bias. Others include the National Health and Nutrition Examination Survey, the National Survey of Family Growth, and the National Survey of Sexual Health and Behavior.

Tests for social desirability bias are equally applicable to smaller studies using convenience samples, but the authors of these studies have also generally ignored the possibility of respondents under- and/or over-reporting. A recent study found that fewer than 5% of survey studies in accounting-and-ethics research had controlled for social desirability biased responding [65]. Surveys used by sexuality researchers almost always include personal and sensitive items for which answers cannot be authenticated by the gold standard. An examination by this author of survey studies (excluding interviews) published in The Journal of Sex Research for the three-year period 2022 through 2024 revealed that only 3.6% employed a measure of social desirability responding or authentication of self-reported behaviors. Similarly, an examination of papers published in the same period in American Journal of Sexuality Education and Sex Education, journals that publish studies of the effects of teaching sexuality education on behaviors, attitudes, and opinions, revealed that none of the 46 papers using surveys included a measure of social desirability responding.

Conclusion and Recommendation

After decades of warnings about social desirability biased responding on surveys [17-18, 20-21, 44, 57] and the development of several methods to assess such bias [17, 60-61], researchers who use surveys still appear to have little concern about the truthfulness of their respondents’ answers. Left to themselves, researchers continue to present self-reported behaviors as factual data. If change is to occur, journal editors must begin urging researchers who use surveys of self-reported behaviors to include a measure of social desirability biased responding in their experimental designs.


To know more about Psychology and Behavioral Science International Journal
Click here: https://juniperpublishers.com/pbsij/index.php

To know more about Juniper Publishers
Click here: https://juniperpublishers.com/index.php


