Literature Summary: Faking in Personnel Selection

Here is a summary of papers related to the topic of faking in personnel selection.

This is a work in progress; it will be finished soon.

McCrae and Costa (1983)

  • Social desirability (SD) scales are better interpreted as measures of substantive traits than as indicators of response bias
  • Using SD to correct for response bias should be questioned

Anderson, Warner, and Spencer (1984)

  • Inflation bias is prevalent and pervasive in employment selection
  • Inflation bias is negatively correlated with an external performance measure

Hough, Eaton, Dunnette, and Kamp (1990)

  • Validities were in the .20s (uncorrected for unreliability or restriction in range) against targeted criterion constructs
  • Respondents successfully distorted their self-descriptions when instructed to do so
  • Response validity scales were responsive to different types of distortion
  • Applicants’ responses did not reflect evidence of distortion
  • Validities remained stable regardless of possible distortion by respondents in either unusually positive or negative directions

Holden and Kroner (1992)

  • Test item response times were statistically adjusted to reflect item latencies in relation both to the person and to the item
  • Discriminant function analysis indicated that such times could significantly differentiate among standard responding, faking good responses, and faking bad responses
  • Classification hit rates with differential response latencies compared favorably with those rates found with more traditional response dissimulation scales
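The latency-adjustment idea can be sketched as a double standardization: z-score each response time within the person (removing overall speed differences) and then within the item (removing item-level effects such as length). This is a minimal illustration with synthetic data; the function name and the specific z-score procedure are assumptions for illustration, not Holden and Kroner's exact method.

```python
import numpy as np

def doubly_standardize(latencies):
    """Standardize response times within person (rows), then within item (columns).

    latencies: (n_persons, n_items) array of raw response times.
    The result expresses how fast a person answered an item relative to
    both that person's own typical speed and the item's typical speed.
    """
    # z-score within each person to remove overall speed differences
    z_person = (latencies - latencies.mean(axis=1, keepdims=True)) / latencies.std(axis=1, keepdims=True)
    # z-score within each item to remove item-level effects (e.g., item length)
    z_item = (z_person - z_person.mean(axis=0, keepdims=True)) / z_person.std(axis=0, keepdims=True)
    return z_item

rng = np.random.default_rng(0)
raw = rng.lognormal(mean=1.0, sigma=0.3, size=(5, 10))  # synthetic response times
adjusted = doubly_standardize(raw)
print(adjusted.shape)  # (5, 10)
```

Adjusted latencies of this kind could then serve as predictors in a discriminant function analysis separating standard, fake-good, and fake-bad responding.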

Schmit and Ryan (1993)

  • Similar factor structures should not be assumed across testing situations
  • In the current study, a five-factor structure fit the student sample but not the applicant sample
  • There is probably an ideal-employee factor in the applicant sample

Barrick and Mount (1996)

  • In two long-haul trucker samples, conscientiousness (C; $\rho = -.26$ and $-.26$) and emotional stability (ES; $\rho = -.23$ and $-.21$) were valid predictors of voluntary turnover
  • C ($\rho = .41$ and $.39$) and ES ($\rho = .23$ and $.27$) were valid predictors of supervisor-rated job performance
  • Applicants did distort their scores on both C and ES scales
  • Distortion occurred both through self-deception and impression management
  • However, neither type of distortion attenuated the predictive validities of either personality construct

Ones, Viswesvaran, and Reiss (1996)

  • Meta-analysis
  • Social desirability scales were found not to predict school success, task performance, counterproductive behaviors, or job performance
  • Social desirability is not as pervasive a problem as industrial-organizational psychologists have anticipated
  • Social desirability is in fact related to real individual differences in emotional stability and conscientiousness
  • Social desirability does not function as a predictor, as a practically useful suppressor, or as a mediator variable for the criterion of job performance
  • Removing the effects of social desirability from the Big Five dimensions of personality leaves the criterion-related validity of personality constructs for predicting job performance intact

Zickar and Drasgow (1996)

  • Appropriateness measurement quantifies the difference between an examinee’s observed pattern of item responses and the responses expected on the basis of that person’s standing on the latent trait θ and a set of item response functions (IRFs), as specified by some IRT model. IRFs are functions that relate θ to the probability of affirming an item. An examinee whose pattern of responses differs greatly from the expected pattern will have an extreme appropriateness index
  • The item response theory approach (appropriateness measurement) classified a higher number of faking respondents at low rates of misclassification of honest respondents (false positives) than did a social desirability scale
  • At higher false positive rates, the social desirability approach did slightly better
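Person-fit indices of this family can be illustrated with the standardized log-likelihood statistic lz under a two-parameter logistic (2PL) model: an examinee whose responses are much less likely than expected given θ gets a large negative value. This is a minimal sketch; the 2PL model, the lz index, and all parameter values here are illustrative assumptions, not the specific appropriateness index used by Zickar and Drasgow.

```python
import numpy as np

def lz_statistic(responses, theta, a, b):
    """Standardized log-likelihood person-fit index (lz) under a 2PL IRT model.

    responses: 0/1 vector of one examinee's item responses
    theta: the examinee's latent trait value
    a, b: item discrimination and difficulty parameters
    A large negative lz flags a response pattern that is unlikely given
    theta (a candidate for aberrant responding, e.g., faking).
    """
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # item response functions
    loglik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (loglik - expected) / np.sqrt(variance)

a = np.ones(10)
b = np.linspace(-2, 2, 10)
consistent = (b < 0).astype(int)  # endorses only the easy items: fits theta = 0
aberrant = 1 - consistent         # endorses only the hard items: misfits theta = 0
print(lz_statistic(consistent, 0.0, a, b), lz_statistic(aberrant, 0.0, a, b))
```

In this toy example the consistent pattern yields a positive lz and the aberrant pattern a clearly negative one, which is the signal an appropriateness-based faking detector thresholds on.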

Hough (1998)

  • Strategy 1: “correcting” an individual’s content scale scores based on the individual’s score on an Unlikely Virtues (UV) scale
  • Strategy 2: removing people from the applicant pool because their scores on a UV scale suggest they are presenting themselves in an overly favorable way
  • Incumbent and applicant data from three large studies were used to evaluate the two strategies. The data suggest that
    • (a) neither strategy affects criterion-related validities
    • (b) both strategies produce applicant mean scores for content scales that are closer to incumbent mean scores
    • (c) men, women, Whites, and minorities are not differentially affected
    • (d) both strategies result in a subset of people who are not hired who would otherwise have been hired
  • If one’s goal is to reduce the impact of intentional distortion on hiring decisions, both strategies appear reasonably effective

Snell, Sydell, and Lueke (1999)

  • Proposed an interactional model of applicant faking based on individual differences
  • Successful faking involves:
    • Ability to fake
      • Dispositional factors: GMA (e.g., Jensen, 1998), EI (e.g., Mayer & Salovey, 1997)
      • Experiential factors
      • Test characteristics: item type, item format, item scoring
    • Motivation to fake
      • Demographic factors: age, gender (these are probably moderators, not predictors)
      • Dispositional factors: impression management, integrity, Machiavellianism, manipulativeness, organizational delinquency, locus of control, stage of cognitive moral development
      • Perceptual factors: others’ behavior, others’ attitudes, fairness

Viswesvaran and Ones (1999)

  • The authors examined whether individuals can fake their responses to a personality inventory if instructed to do so
  • Between-subjects and within-subjects designs were meta-analyzed separately
  • Across 51 studies, fakability did not vary by personality dimension; all Big Five factors were equally fakable
    • When instructed to fake good, participants were able to change their responses by almost half a standard deviation on average
  • Faking produced the largest distortions in social desirability scales
  • Instructions to fake good produced lower effect sizes compared with instructions to fake bad
  • Within-subjects designs produce more accurate estimates
  • Between-subjects designs may distort estimates due to Subject x Treatment interactions and low statistical power
  • An avenue for fruitful future research lies in investigating whether individual differences in fakability contribute valid variance to the criterion of interest (e.g., job performance). For example, to the extent that fakability reflects social intelligence or some form of adaptability, individual differences in fakability may help explain successful job performance, especially in occupations such as salesperson, politician, or customer service representative

Zickar and Robie (1999)

  • Military recruits were instructed to complete a personality inventory under 1 of 3 conditions: answer honestly, fake good, or fake good with coaching
  • A graded response model (F. Samejima, 1969) was fit to items from 3 personality scales
  • Although there was a large difference in latent personality trait scores because of faking, there were few differences in the functioning of items across conditions
  • Results of confirmatory factor analyses suggest that faking leads to an increase in common variance that is unrelated to substantive construct variance
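Samejima's graded response model, which Zickar and Robie fit to the personality items, can be sketched as follows: cumulative boundary curves P(X ≥ k | θ) follow a logistic function, and category probabilities are differences between adjacent boundaries. This is a minimal illustration; all parameter values are made up.

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category response probabilities for one polytomous item under
    Samejima's (1969) graded response model.

    theta: latent trait value
    a: item discrimination
    b: increasing category threshold parameters (length K-1 for K categories)
    Returns a length-K vector of probabilities that sums to 1.
    """
    # cumulative boundary probabilities P(X >= k), padded with 1 and 0
    star = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b))))
    cum = np.concatenate(([1.0], star, [0.0]))
    # category probability = difference between adjacent boundaries
    return cum[:-1] - cum[1:]

probs = grm_category_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.0])
print(probs, probs.sum())
```

Comparing such category curves estimated separately in honest and faking conditions is one way to check whether item functioning, rather than only the latent trait distribution, shifts under faking.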

Jackson, Wroblewski, and Ashton (2000)

  • Evaluated the effects of faking on mean scores and correlations with self-reported counterproductive behavior of integrity-related personality items administered in single-stimulus and forced-choice formats
  • In laboratory studies, respondents instructed to respond as if applying for a job scored higher than when given standard or “straight-take” instructions
  • The size of the mean shift was nearly a full standard deviation for the single-stimulus integrity measure, but less than one third of a standard deviation for the same items presented in a forced-choice format
  • The correlation between the personality questionnaire administered in the single-stimulus condition and self-reported workplace delinquency was much lower in the job applicant condition than in the straight-take condition, whereas the same items administered in the forced-choice condition maintained their substantial correlations with workplace delinquency

Piedmont, McCrae, Riemann, and Angleitner (2000)

  • The authors evaluated the utility of several types of validity scales in a volunteer sample of 72 men and 106 women who completed the Revised NEO Personality Inventory (NEO-PI-R; P. T. Costa & R. R. McCrae, 1992) and the Multidimensional Personality Questionnaire (MPQ; A. Tellegen, 1978/1982) and were rated by 2 acquaintances on the observer form of the NEO-PI-R
  • Analyses indicated that the validity indexes lacked utility in this sample
  • A partial replication (N = 1,728) also failed to find consistent support for the use of validity scales
  • The authors illustrate the use of informant ratings in assessing protocol validity and argue that psychological assessors should limit their use of validity scales and seek instead to improve the quality of personality assessments

Ferrando and Chico (2001)

  • The present study examined whether an internal procedure for assessing the scalability of the response patterns, based on item response theory (IRT), can detect deliberate dissimulation (faking good) in the Extraversion, Neuroticism, and Psychoticism scale scores of the Eysenck Personality Questionnaire Revised
  • The procedure is compared to the traditional approaches, which use the Lie and the Social Desirability (SD) scales
  • A data set was analyzed in which participants were either administered the measures in standard conditions or given special instructions to fake good
  • The results showed that the IRT-based measures were not powerful enough to detect dissimulation, whereas the Lie and SD scales performed much better

Holden, Wood, and Tomashewski (2001)

  • Response time restriction as a method for reducing the influence of faking on personality scale validity
  • No evidence emerged to indicate that limiting respondents’ answering time can attenuate the effects of faking on validity
  • Results of the three current experiments indicate that limiting response time does not prevent or reduce the effect of faking on the validity of self-report personality scales
  • Results were interpreted as failing to support a simple model of personality test item response dissimulation that predicts that lying takes time
  • Findings were consistent with models implying that lying involves primitive cognitive processing or that lying may be associated with complex processing that includes both primitive responding and cognitive overrides

Stark, Chernyshenko, Chan, Lee, and Drasgow (2001)

(Note: most of the bullet points are excerpts from the papers, especially from the abstracts.)

References

Anderson, C. D., Warner, J. L., & Spencer, C. C. (1984). Inflation bias in self-assessment examinations: Implications for valid employee selection. Journal of Applied Psychology, 69(4), 574-580.

Anglim, J., Lievens, F., Everton, L., Grant, S. L., & Marty, A. (2018). HEXACO personality predicts counterproductive work behavior and organizational citizenship behavior in low-stakes and job applicant contexts. Journal of Research in Personality, 77, 11-20.

Arthur Jr, W., Glaze, R. M., Villado, A. J., & Taylor, J. E. (2010). The magnitude and extent of cheating and response distortion effects on unproctored internet‐based tests of cognitive ability and personality. International Journal of Selection and Assessment, 18(1), 1-16.

Barrick, M. R., & Mount, M. K. (1991). The big five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44(1), 1-26.

Barrick, M. R., & Mount, M. K. (1996). Effects of impression management and self-deception on the predictive validity of personality constructs. Journal of Applied Psychology, 81(3), 261-272.

Berry, C. M., & Sackett, P. R. (2009). Faking in personnel selection: Tradeoffs in performance versus fairness resulting from two cut‐score strategies. Personnel Psychology, 62(4), 833-863.

Birkeland, S. A., Manson, T. M., Kisamore, J. L., Brannick, M. T., & Smith, M. A. (2006). A meta‐analytic investigation of job applicant faking on personality measures. International Journal of Selection and Assessment, 14(4), 317-335.

Böckenholt, U. (2014). Modeling motivated misreports to sensitive survey questions. Psychometrika, 79(3), 515-537.

Burns, G. N., & Christiansen, N. D. (2011). Methods of measuring faking behavior. Human Performance, 24(4), 358-372.

Cao, M., & Drasgow, F. (2019). Does forcing reduce faking? A meta-analytic review of forced-choice personality measures in high-stakes situations. Journal of Applied Psychology, 104(11), 1347–1368.

Converse, P. D., Peterson, M. H., & Griffith, R. L. (2009). Faking on personality measures: Implications for selection involving multiple predictors. International Journal of Selection and Assessment, 17(1), 47-60.

Cucina, J. M., Vasilopoulos, N. L., Su, C., Busciglio, H. H., Cozma, I., DeCostanza, A. H., … & Shaw, M. N. (2019). The effects of empirical keying of personality measures on faking and criterion-related validity. Journal of Business and Psychology, 34(3), 337-356.

Donovan, J. J., Dwight, S. A., & Hurtz, G. M. (2003). An assessment of the prevalence, severity, and verifiability of entry-level applicant faking using the randomized response technique. Human Performance, 16(1), 81-106.

Dwight, S. A., & Donovan, J. J. (2003). Do warnings not to fake reduce faking? Human Performance, 16(1), 1-23.

Dunlop, P., Mcneill, I., & Jorritsma, K. (2016). Tailoring the Overclaiming Technique to Capture Faking Behaviour in Applied Settings: A Field Study of Firefighter Applicants. International Journal of Psychology, 51, 792-792. [no fulltext]

Feeney, J. R., & Goffin, R. D. (2015). The overclaiming questionnaire: A good way to measure faking? Personality and Individual Differences, 82, 248-252.

Ferrando, P. J., & Chico, E. (2001). Detecting dissimulation in personality test scores: A comparison between person-fit indices and detection scales. Educational and Psychological Measurement, 61(6), 997-1012.

Fine, S., & Pirak, M. (2016). Faking fast and slow: Within-person response time latencies for measuring faking in personnel testing. Journal of Business and Psychology, 31(1), 51-64.

Goffin, R. D., & Boyd, A. C. (2009). Faking and personality assessment in personnel selection: Advancing models of faking. Canadian Psychology, 50(3), 151-160.

Griffith, R. L., Chmielowski, T., & Yoshita, Y. (2007). Do applicants fake? An examination of the frequency of applicant faking behavior. Personnel Review, 36(3), 341-355.

Griffith, R. L., Lee, L. M., Peterson, M. H., & Zickar, M. J. (2011). First dates and little white lies: A trait contract classification theory of applicant faking behavior. Human Performance, 24(4), 338-357.

Griffith, R. L., & Peterson, M. H. (2011). One piece at a time: The puzzle of applicant faking and a call for theory. Human Performance, 24(4), 291-301.

Hogan, J., Barrett, P., & Hogan, R. (2007). Personality measurement, faking, and employment selection. Journal of Applied Psychology, 92(5), 1270-1285.

Holden, R. R., & Kroner, D. G. (1992). Relative efficacy of differential response latencies for detecting faking on a self-report measure of psychopathology. Psychological Assessment, 4(2), 170–173.

Holden, R. R., Wood, L. L., & Tomashewski, L. (2001). Do response time limitations counteract the effect of faking on personality inventory validity? Journal of Personality and Social Psychology, 81(1), 160–169.

Hough, L. M. (1998). Effects of intentional distortion in personality measurement and evaluation of suggested palliatives. Human Performance, 11(2-3), 209-244.

Hough, L. M., Eaton, N. K., Dunnette, M. D., Kamp, J. D., & McCloy, R. A. (1990). Criterion-related validities of personality constructs and the effect of response distortion on those validities. Journal of Applied Psychology, 75(5), 581–595.

Jackson, D. N., Wroblewski, V. R., & Ashton, M. C. (2000). The impact of faking on employment tests: Does forced choice offer a solution? Human Performance, 13(4), 371-388.

Kluger, A. N., & Colella, A. (1993). Beyond the mean bias: The effect of warning against faking on biodata item variances. Personnel Psychology, 46(4), 763-780.

König, C. J., Merz, A. S., & Trauffer, N. (2012). What is in applicants’ minds when they fill out a personality test? Insights from a qualitative study. International Journal of Selection and Assessment, 20(4), 442-452.

Komar, S., Brown, D. J., Komar, J. A., & Robie, C. (2008). Faking and the validity of conscientiousness: A Monte Carlo investigation. Journal of Applied Psychology, 93(1), 140-154.

Kuncel, N. R., & Borneman, M. J. (2007). Toward a new method of detecting deliberately faked personality tests: The use of idiosyncratic item responses. International Journal of Selection and Assessment, 15(2), 220-231.

Levashina, J., & Campion, M. A. (2007). Measuring faking in the employment interview: development and validation of an interview faking behavior scale. Journal of Applied Psychology, 92(6), 1638-1656.

Marcus, B. (2009). ‘Faking’ from the applicant’s perspective: A theory of self‐presentation in personnel selection settings. International Journal of Selection and Assessment, 17(4), 417-430.

McCrae, R. R., & Costa, P. T. (1983). Social desirability scales: More substance than style. Journal of Consulting and Clinical Psychology, 51(6), 882–888.

McFarland, L. A., & Ryan, A. M. (2006). Toward an integrated model of applicant faking behavior. Journal of Applied Social Psychology, 36(4), 979-1016.

McLarnon, M. J., DeLongchamp, A. C., & Schneider, T. J. (2019). Faking it! Individual differences in types and degrees of faking behavior. Personality and Individual Differences, 138, 88-95.

Meade, A. W., Pappalardo, G., Braddy, P. W., & Fleenor, J. W. (2020). Rapid response measurement: Development of a faking-resistant assessment method for personality. Organizational Research Methods, 23(1), 181-207.

Mueller-Hanson, R. A., Heggestad, E. D., & Thornton, G. C. (2006). Individual differences in impression management: An exploration of the psychological processes underlying faking. Psychology Science, 48(3), 288-312.

Ones, D. S., Viswesvaran, C., & Reiss, A. D. (1996). Role of social desirability in personality testing for personnel selection: The red herring. Journal of Applied Psychology, 81(6), 660-679.

Paulhus, D. L., Harms, P. D., Bruce, M. N., & Lysy, D. C. (2003). The over-claiming technique: Measuring self-enhancement independent of ability. Journal of Personality and Social Psychology, 84(4), 890–904.

Peterson, M. H., Griffith, R. L., Isaacson, J. A., O’Connell, M. S., & Mangos, P. M. (2011). Applicant faking, social desirability, and the prediction of counterproductive work behaviors. Human Performance, 24(3), 270-290.

Piedmont, R. L., McCrae, R. R., Riemann, R., & Angleitner, A. (2000). On the invalidity of validity scales: Evidence from self-reports and observer ratings in volunteer samples. Journal of Personality and Social Psychology, 78(3), 582–593.

Pavlov, G., Maydeu-Olivares, A., & Fairchild, A. J. (2019). Effects of applicant faking on forced-choice and Likert scores. Organizational Research Methods, 22(3), 710-739.

Schermer, J. A., Holden, R. R., & Krammer, G. (2019). The general factor of personality is very robust under faking conditions. Personality and Individual Differences, 138, 63-68.

Schmit, M. J., & Ryan, A. M. (1993). The Big Five in personnel selection: Factor structure in applicant and nonapplicant populations. Journal of Applied Psychology, 78(6), 966–974.

Snell, A. F., Sydell, E. J., & Lueke, S. B. (1999). Towards a theory of applicant faking: Integrating studies of deception. Human Resource Management Review, 9(2), 219-242.

Stark, S., Chernyshenko, O. S., Chan, K.-Y., Lee, W. C., & Drasgow, F. (2001). Effects of the testing situation on item responding: Cause for concern. Journal of Applied Psychology, 86(5), 943–953.

Suchotzki, K., Verschuere, B., Van Bockstaele, B., Ben-Shakhar, G., & Crombez, G. (2017). Lying takes time: A meta-analysis on reaction time measures of deception. Psychological Bulletin, 143(4), 428–453.

Viswesvaran, C., & Ones, D. S. (1999). Meta-analyses of fakability estimates: Implications for personality measurement. Educational and Psychological Measurement, 59(2), 197-210.

Zickar, M. J., & Drasgow, F. (1996). Detecting faking on a personality instrument using appropriateness measurement. Applied Psychological Measurement, 20(1), 71-87.

Zickar, M. J., Gibby, R. E., & Robie, C. (2004). Uncovering faking samples in applicant, incumbent, and experimental data sets: An application of mixed-model item response theory. Organizational Research Methods, 7(2), 168-190.

Zickar, M. J., & Robie, C. (1999). Modeling faking good on personality items: An item-level analysis. Journal of Applied Psychology, 84(4), 551-563.