Malingering Research Update
This resource was developed to help clinicians, forensic practitioners, expert witnesses, attorneys, researchers, and others—particularly those without adequate physical or financial access to professional libraries—keep up with the constantly emerging research relevant to assessing malingering, faking bad, and symptom exaggeration.
NOTE: For those interested, here are some related resources on this website:
- Deposition and Cross-examination Questions on Psychological Tests & Psychometrics
- 10 Fallacies in Psychological Assessment
- Forensic Assessment Checklist
- Sample Agreement Between Expert Witness & Attorney
- Responsibilities in Providing Psychological Test Feedback to Clients
- Practice Guidelines & Ethics Codes for Assessment, Forensics, Counseling, & Therapy
- Assessing Suicide Risk: 21 factors, 10 steps to reduce risk, & 16 experts identify avoidable pitfalls
Below are citations and brief summaries of studies and review articles published in prominent peer-reviewed scientific and professional journals from January 2001 to the present; the summaries are kept short, just enough to give a sense of whether the research might be relevant to your interests. The studies examine tests developed specifically to identify malingering, tests that include indices or subscales that might be useful in detecting malingering, and tests that may be vulnerable to malingering.
The articles are categorized according to the tests they address. The current categories are:
ADHD Behavior Checklist
ADHD Rating Scale
Amsterdam Short Term Memory Test (ASTM) - English version
Assessment of Depression Inventory (ADI)
Atypical Presentation Scale (AP)
Auditory Verbal Learning Test (AVLT)
Basic Personality Inventory
Benton Facial Recognition Test
Benton Visual Form Discrimination (VFD)
Booklet Category Test
California Verbal Learning Test (CVLT)
Category Test (see Halstead-Reitan Neuropsychological Test Battery)
Cognitive Behavioral Driver's Inventory (CBDI)
Computerized Assessment of Response Bias (CARB)
Computerized Dot Counting Test (CDCT)
Computerized Tests of Information Processing (CTIP)
Conditional Reasoning Tests (CRT)
Conners' Continuous Performance Test-II
Criminal Offender Infrequency Scale
Depression, Anxiety and Stress Scales (DASS-21)
Diagnostic and Statistical Manual, Fourth Edition (DSM-IV)
Digit Memory Test
Digit Recognition Test
Dot Counting Task (DCT)
Evaluation of Competency to Stand Trial - Revised (ECST-R)
Fake Bad Scale (FBS)
Forced Choice
General Reviews & Issues
Glasgow Coma Scale
Gudjonsson Suggestibility Scales
Halstead-Reitan Neuropsychological Test Battery
HEXACO Personality Inventory-Revised (HEXACO-PI-R)
Hooper Visual Organization Test
Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT)
Infrequency Psychopathology Scale
Integrated Visual & Auditory Continuous Performance Test (IVACPT)
Judgment of Line Orientation Test (JLO)
Luria-Nebraska
M Test
Malingering Probability Scale
McGill Pain Questionnaire (MPQ)
Medical Symptom Validity Test (MSVT)
Megargee's Criminal Offender Infrequency Scale
Miller Forensic Assessment of Symptoms Test (M-FAST)
Millon Clinical Multiaxial Inventory—Third Edition (MCMI-III)
Minnesota Multiphasic Personality Inventory - Adolescent (MMPI-A)
Minnesota Multiphasic Personality Inventory - 2 (MMPI-2)
Mississippi Scale for Combat-Related Posttraumatic Stress Disorder
Modified Somatic Perception Questionnaire (MSPQ)
Morel Emotional Numbing Test - Revised (MENT-R)
Multidimensional Investigation of Neuropsychological Dissimulation (MIND)
Neuropsychological Symptom Inventory (NSI)
Nonverbal Medical Symptom Validity Test (NV-MSVT)
Pain Disability Index (PDI)
Patient Pain Profile (P3)
Paulhus Deception Scales (PDS)
Personality Assessment Inventory (PAI)
Portland Digit Recognition Test (PDRT)
Psychological Inventory of Criminal Thinking Styles (PICTS)
Quick Test for Posttraumatic Stress Disorder
Rarely Missed Index (see Wechsler Memory Scale)
Raven's Standard Progressive Matrices (RSPM)
Recognition Memory Test
Reliable Digit Span
Rey Malingering Tests
Rivermead Questionnaire
Rogers Discrimination Function
Rorschach
Seashore Rhythm Test (SRT) (see Halstead-Reitan Neuropsychological Test Battery)
Sixteen PF
Slick Criteria for Malingered Neurocognitive Dysfunction
Speech Sounds Perception Test (SSPT) (see Halstead-Reitan Neuropsychological Test Battery)
Stanford-Binet-Revised
Stroop
Structured Interview of Reported Symptoms (SIRS)
Structured Inventory of Malingered Symptomatology (SIMS)
Symptom Validity Scale (SVS)
Test of Memory Malingering (TOMM)
Test of Malingered Incompetence (TOMI)
Test of Variable Attention (TOVA)
Trail Making Test
Trauma Symptom Inventory (TSI)
Validity Indicator Profile
Victoria Symptom Validity Test (VSVT)
Wechsler Adult Intelligence Scale
Wechsler Memory Scale
Wisconsin Card Sorting Test (WCST)
Word Completion Memory Test (WCMT)
Word Memory Test (WMT)
ADHD Behavior Checklist
"Detection of malingering in assessment of adult ADHD" by Colleen Quinn. Archives of Clinical Neuropsychology, May, 2003, pages 379-395.
Summary: Compared the ADHD Behavior Checklist and the Integrated Visual and Auditory Continuous Performance Test (IVACPT) in distinguishing among 3 groups of undergraduates: (a) those with ADHD, (b) those without ADHD attempting to feign ADHD, and (c) those without ADHD serving as a control group. "Analyses indicated that the ADHD Behavior Rating Scale was successfully faked for childhood and current symptoms. IVA CPT could not be faked on 81% of its scales. The CPT's impairment index results revealed: sensitivity 94%, specificity 91%, PPP 88%, NPP 95%."
ADHD Rating Scale
"Detection of feigned ADHD in college students" by Sollman, M. J., J. D. Ranseen, et al. Psychological Assessment, 2010, 22(2), pages 325-335.
Summary: “The performance of 31 undergraduates financially motivated and coached about ADHD via Internet-derived information was compared to that of 29 ADHD undergraduates following medication washout and 14 students not endorsing symptomatology. Results indicated malingerers readily produced ADHD-consistent profiles. Symptom checklists, including the ADHD Rating Scale and Conners's Adult ADHD Rating Scale–Self-Rating Form: Long, were particularly susceptible to faking. Conners's Continuous Performance Test—II findings appeared more related to motivation than condition. Promising results were seen with all cognitive SVTs (Test of Memory Malingering [TOMM], Digit Memory Test, Letter Memory Test, and Nonverbal–Medical Symptom Validity Test), particularly TOMM Trial 1 when scored using Trial 2 criteria. All SVTs demonstrated very high specificity for the ADHD condition and moderate sensitivity to faking, which translated into high positive predictive values at rising base rates of feigning. Combining 2 or more failures resulted in only modest declines in sensitivity but robust specificity. Results point to the need for a thorough evaluation of history, cognitive and emotional functioning, and the consideration of exaggerated symptomatology in the diagnosis of ADHD.”
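The relationship the authors note between high specificity, moderate sensitivity, and "high positive predictive values at rising base rates of feigning" follows from the standard predictive-value formulas. The short sketch below is purely illustrative; the sensitivity, specificity, and base-rate figures are hypothetical, not values reported by Sollman et al. (2010).

```python
# Illustrative sketch only: how sensitivity, specificity, and the base rate of
# feigning combine into predictive values. All numbers are hypothetical.

def predictive_values(sensitivity: float, specificity: float, base_rate: float):
    """Return (positive predictive value, negative predictive value)."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    true_neg = specificity * (1 - base_rate)
    false_neg = (1 - sensitivity) * base_rate
    return true_pos / (true_pos + false_pos), true_neg / (true_neg + false_neg)

# A highly specific, moderately sensitive symptom validity test at several
# plausible base rates of feigning.
for base_rate in (0.10, 0.25, 0.40):
    ppv, npv = predictive_values(sensitivity=0.50, specificity=0.95, base_rate=base_rate)
    print(f"base rate {base_rate:.0%}: PPV = {ppv:.2f}, NPV = {npv:.2f}")
```

As the base rate of feigning rises, the positive predictive value climbs even though the test itself is unchanged, which is the pattern the study describes.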
Amsterdam Short Term Memory Test (ASTM) - English version
"Cognitive underperformance and symptom over-reporting in a mixed psychiatric sample" by Dandachi-FitzGerald, B., R. W. H. M. Ponds, et al. The Clinical Neuropsychologist, 2011, 25(5), pages 812-828.
Summary: “The current study examined the prevalence of cognitive underperformance and symptom over-reporting in a mixed sample of psychiatric patients.... A total of 34% of them failed the ASTM, the SIMS or both tests. ASTM and SIMS scores were significantly, albeit modestly, correlated with each other.... As to the links between underperformance, over-reporting, neuropsychological tasks, and the SCL-90, the association between over-reporting on the SIMS and SCL-90 scores was the most robust one. The subsample that only failed on the ASTM performed significantly worse on a compound index of memory performance. Our findings indicate that underperformance and over-reporting are loosely coupled dimensions and that particularly over-reporting is intimately linked to heightened SCL-90 scores.”
"Comparison of three tests to detect feigned amnesia: The effects of feedback and the measurement of response latency" by Barbara Bolan, Jonathan Foster, Ben Schmand, & Steve Bolan. Journal of Clinical & Experimental Neuropsychology, April, 2002, pages 154-167.
Summary: Reported 3 experiments assessing the effectiveness of an English version of the "Amsterdam Short Term Memory Test (ASTM test) developed to detect feigned memory impairment. . . . Using a simulation design, the ASTM test compared favorably with the Test of Memory Malingering (TOMM) and appeared better than a newly-devised Digit Recognition Test (DRT)."
"Detection of feigned cognitive dysfunction using special malinger tests: A simulation study in naïve and coached malingerers" by M. Jelicic, H. Merckelbach, I. Candel, & E. Geraerts. International Journal of Neuroscience, August, 2007, Vol 117, #8, pages 1185-1192.
Summary: "Before the both instruments were administered, naïve
malingerers received no further information (n = 30), whereas coached malingerers
were given some information about brain injury and a warning not to exaggerate
symptoms (n = 30). Both tests correctly classified 90% of the naïve
malingerers. The ASTM detected 70% of the coached malingerers, whereas
the SIMS continued to detect 90% of them. The findings suggest that coaching
undermines the diagnostic accuracy of the ASTM, but does not seem to influence
the accuracy of the SIMS."
"Detection of feigned crime-related amnesia: A multi-method approach" by Giger, P., T. Merten, et al. Journal of Forensic Psychology Practice, 2010, 10(5), pages 440-463.
Summary: “Sixty participants were assigned to three conditions: responding honestly; feigning crime-related amnesia; feigning amnesia with a warning not to exaggerate. High sensitivity and specificity were obtained for the Structured Inventory of Malingered Symptomatology, the Amsterdam Short-Term Memory Test, and the Morel Emotional Numbing Test. Only three warned malingerers went undetected. The results demonstrate that validated instruments exist to support forensic decision making about crime-related amnesia. Yet, warning may undermine their effectiveness, even when using a multi-method approach.”
Assessment of Depression Inventory (ADI)
"Utility of the Structured Inventory of Malingered Symptomatology (SIMS) and the Assessment of Depression Inventory (ADI) in screening for malingering among outpatients seeking to claim disability" by Carl Clegg, William Fremouw, & Neil Mogge. Journal of Forensic Psychiatry & Psychology, April, 2009, vol. 20, #2, pp. 239-254.
Summary: "A sample of 56 disability seekers were administered the Structured Interview of Reported Symptoms (SIRS), the Structured Inventory of Malingered Symptomatology (SIMS), and the Assessment of Depression Inventory (ADI). Individuals were classified as honest or suspected malingerers based on their SIRS scores. Additionally, 60 individuals from the community completed the SIMS and the ADI honestly or as if they were malingering depression. Both malingering groups had significantly higher mean scores on the SIMS total and ADI feigning scales than both honest groups. The scores of the malingering groups did not significantly differ. The utility of various cut-off scores on these scales is presented and discussed. In the clinical sample, previously recommended SIMS total cut-off scores (>14 or >16) had excellent sensitivity, but low specificity. Conversely, the recommended ADI feigning cut-off score (>13) had excellent specificity, but low sensitivity. Increasing the SIMS total cut-off score to >19 and decreasing the ADI feigning cut-off score to >9 may improve their utility in screening for malingering among outpatients seeking to claim disability."
Atypical Presentation Scale (AP)
"Developing sensitivity to distortion: Utility of psychological tests
in differentiating malingering and psychopathology in criminal defendants" by
Michaela Heinze. Journal of Forensic Psychiatry & Psychology, April,
2003, vol. 14, #1, pages 151-177.
Summary: Examined findings from 66 men hospitalized as incompetent to stand trial. Tests included the Minnesota Multiphasic Personality Inventory (MMPI-2), Structured Interview of Reported Symptoms (SIRS), M Test, the Atypical Presentation Scale (AP), and the Rey 15-Item Memory Test (RMT). "Overall, results support the use of psychological testing in the detection of malingering of psychotic symptoms."
Auditory Verbal Learning Test (AVLT)
"Detecting poor effort and malingering with an expanded version of the Auditory Verbal Learning Test (AVLTX): Validation with clinical samples" by Joseph Barrash, Julie Suhr, & Kenneth Manzel. Journal of Clinical & Experimental Neuropsychology, February, 2004, vol. 26, #1, pages 124-140.
Summary: Used 3 studies to investigate a new procedure (AVLTX) to identify malingering or inadequate effort through addition of one-hour delayed recall/recognition trials. "The RMT showed excellent sensitivity and poor specificity; the DRT showed poor sensitivity and excellent specificity; the EI showed good sensitivity and excellent specificity. Adding a second delayed trial to list-learning tests can be a time-efficient procedure to detect inadequate effort."
"Exaggeration Index for an Expanded Version of the Auditory Verbal Learning
Test: Robustness to Coaching"
by J. Suhr, J. Gunstad, B. Greub, and J. Barrash.
Journal of Clinical & Experimental Neuropsychology, May, 2004, vol. 26, #3,
pages 416-427.
Summary: In 2 studies using independent samples,"
the EI-AVLTX was found to be relatively sensitive and specific to malingering,
and robust to the effects of a warning about malingering detection."
"Malingering, coaching, and the serial position effect" by Julie Suhr. Archives of Clinical Neuropsychology, January, 2002, pages 69-77.
Summary: Studied 4 groups taking the Auditory Verbal Learning Test: (a) those without injury asked to use normal effort, (b) those without injury asked to fake head injury but given no additional information, (c) those without injury asked to fake head injury, given information about head injuries, and warned about malingering detection, and (d) those with head injuries. "Results show that both malingering groups had lower scores on the primacy portion of the list during learning trials, while normals and head-injured patients had normal serial position curves. During delayed recall, normals and head-injured patients did better than the 2 malingering groups on middle and recency portions of the list. Findings suggest that the serial position effect during learning trials may be a useful pattern of performance to watch for when suspicious of malingering."
Basic Personality Inventory
"Effect of Symptom Information and Intelligence in Dissimulation: An Examination of Faking Response Styles by Inmates on the Basic Personality Inventory" by Jarrod Steffan, Daryl Kroner, & Robert Morgan. Assessment, March, 2007, Vol 14, #1, pages 22-34.
Summary: "his study employed the Basic Personality Inventory (BPI) to
differentiate various types of dissimulation, including malingered psychopathology
and faking good, by inmates. In particular, the role of intelligence in
utilizing symptom information to successfully malinger was examined....
.Unlike symptom information, intelligence evidenced some support for increasing
inmates' effectiveness in malingering, although there was no relationship
between higher intelligence and using symptom information to successfully
evade detection. Overall, the BPI was more effective in detecting malingered
psychopathology than faking good."
Benton Facial Recognition Test
"Classification accuracy of multiple visual spatial measures in the detection of suspect effort" by Whiteside, D., D. Wald, et al. The Clinical Neuropsychologist, 2011, 25(2), pages 287-301.
Summary: “The purpose of this study was to evaluate the classification accuracy of several commonly used visual spatial measures, including the Judgment of Line Orientation Test, the Benton Facial Recognition Test, the Hooper Visual Organization Test, and the Rey Complex Figure Test-Copy and Recognition trials. Participants included 491 consecutive referrals who participated in a comprehensive neuropsychological assessment and met study criteria.... The groups differed significantly on all measures. Additionally, receiver operating characteristic (ROC) analysis indicated all of the measures had acceptable classification accuracy, but a measure combining scores from all of the measures had excellent classification accuracy. Results indicated that various cut-off scores on the measures could be used depending on the context of the evaluation. Suggested cut-off scores for the measures had sensitivity levels of approximately 32-46%, when specificity was at least 87%. When combined, the measures suggested cut-off scores had sensitivity increase to 57% while maintaining the same level of specificity (87%).”
Benton Visual Form Discrimination (VFD)
"Detection of Malingering Using Atypical Performance Patterns on Standard Neuropsychological Tests" by Glenn Larrabee. Clinical Neuropsychologist, August, 2003, vol. 17, #3, pages 410-425.
Summary: "Cut-off scores defining clinically atypical patterns of performance were identified for five standard neuropsychological and psychological tests: Benton Visual Form Discrimination (VFD), Finger tapping (FT), WAIS-R Reliable Digit Span (RDS), Wisconsin Card Sorting Failure-to-Maintain Set (FMS), and the Lees-Haley Fake Bad Scale (FBS) from the MMPI-2. . . . Combining the derivation and cross-validation samples yielded a sensitivity of 87.8%, specificity of 94.4%, and combined hit rate of 91.6%." In closing the discussion section, the author emphasizes that "assessment of effort in medicolegal settings must be multivariate. . . . As shown in the present investigation, requiring multiple indicators of poor effort lowers the chances of false positive identification errors in the assessment of malingering."
Booklet Category Test (see also Halstead-Reitan Neuropsychological Test Battery)
"The Booklet Category Test and malingering in traumatic brain injury: Classification accuracy in known groups" by K. Greve, K. Bianchini, & T. Roberson. Clinical Neuropsychologist, March, 2007, vol. 21, #2, pages 318-337.
Summary: "A known-groups design was used to determine the classification accuracy of 12 Booklet Category Test variables in the detection of malingered neurocognitive dysfunction (MND) in traumatic brain injury (TBI). Participants were 206 TBI and 60 general clinical patients seen for neuropsychological evaluation. Slick, Sherman, and Iverson's (1999) criteria were used to classify the TBI patients into non-malingering, suspect, and MND groups. Classification accuracy of the BCT depended on the specific variable and injury severity examined, with some scores detecting more than 40% of malingerers with false positive error rates of 10% or less. However, the BCT variables are often influenced by cognitive ability as well as malingering, so caution is indicated in applying the BCT to the diagnosis of malingering."
California Verbal Learning Test (CVLT)
"California Verbal Learning Test Indicators of Malingered Neurocognitive Dysfunction: Sensitivity and Specificity in Traumatic Brain Injury" by Kelly Curtis, Kevin Greve, Kevin Bianchini, & Adrianne Brennan. Assessment, March, 2006, vol 13, #1, pages 46-61.
Summary: This study looked at 275 patients with traumatic brain injury and 352 general clinical patients who had been referred for neuropsychological assessment. "The TBI patients were assigned to one of five groups using the Slick, Sherman, and Iverson (1999) criteria: no incentive, incentive only, suspect, and malingering (both Probable MND and Definite MND). Within TBI, persons with the strongest evidence for malingering (Probable and Definite) had the most extreme scores. Good sensitivity (approximately 50%) in the context of excellent specificity (> 95%) was found in the TBI samples."
"Specificity of Malingering Detection Strategies in Older Adults
Using the CVLT and WCST" by Lee Ashendorf, Sid O'Bryant, & Robert
McCaffrey. Clinical Neuropsychologist, May, 2003, vol. 17, #2,
pages 255-262.
Summary: According to the article, the studies findings suggested that "The currently existing WCST formulas may have limited utility for the detection of malingering with older adults while the CVLT strategies do appear to have potential clinical utility."
Category Test
Please see Halstead-Reitan Neuropsychological Test Battery.
Cognitive Behavioral Driver's Inventory (CBDI)
"Use of the CBDI to detect malingering when malingerers do their 'homework'" by Jeffrey Borckardt, Eric Engum, Warren Lambert, Michael Nash, Odie Bracy, & Edward Ray. Archives of Clinical Neuropsychology, January, 2003, pages 57-69.
Summary: Gave college students financial incentives to try to feign brain damage on the Cognitive Behavioral Driver's Inventory (CBDI). Some were coached on how to malinger and some were not. "The coached and uncoached subjects performed indistinguishably on the CBDI. Both types of malingerers were discernable from real brain-damaged patients (99.2% accuracy area under the sensitivity-specificity curve). Further, CBDI profiles of 5 actual plaintiffs judged to be malingering were compared to CBDI profiles of experimental subjects. In each case, the malingering plaintiff's CBDI profile was indistinguishable from that of malingering experimental subjects and was clearly discernable from that of actual brain-damaged patients."
Computerized Assessment of Response Bias (CARB)
"Age related effects in children taking the Computerized Assessment
of Response Bias and Word Memory Test" by John Courtney, Juliet Dinkins,
Lyle Allen, & Katherine Kuroski, Katherine. Child Neuropsychology.
June, 2003, vol. 9, #2, pages109-116.
Summary: This study assessed the possible effects of age on childrens' performance on the Word Memory Test (WMT) and the Computerized Assessment of Response Bias (CARB). "Statistical analysis suggests that younger children (those under 10 years of age) tended to produce poorer performance on these instruments."
"Can malingering be identified with the Judgment of Line Orientation
Test?" by Grant Iverson. Applied Neuropsychology, September,
2001, pages 167-173.
Summary: The author reported that, "A large sample of 294 individuals involved in head injury litigation took the JLO and 2 tests designed to detect biased responding, the Computerized Assessment of Response Bias (CARB) and the Word Memory Test (WMT), as part of a comprehensive neuropsychological evaluation. Patients were divided into groups on the basis of brain injury severity and whether or not they scored in the suspicious range on the CARB or WMT. The patients who were identified as providing biased responding on the CARB or WMT also scored significantly lower on the JLO. However, the cutoff score correctly identified only 9.9% of this group, with a 1% possible false-positive rate. A different cutoff score was selected that had .22 sensitivity and .96 specificity. Overall, these results suggest that the JLO has limited utility as a screen for biased responding; however, clinicians are encouraged to evaluate these scores carefully if they do not seem to make biological or psychometric sense."
"Comparison of WMT, CARB, and TOMM failure rates in non-head injury
disability claimants" by Roger Gervais, Martin Rohling, Paul Green,
and Wendy Ford, Wendy. Archives of Clinical Neuropsychology, June, 2004,
vol. 19, #4, pages 475-487.
Summary: This study examined 519 claimants who were referred for disability or personal injury related assessments. They "were administered three SVTs, one based on digit recognition (Computerized Assessment of Response Bias, CARB), one using pictorial stimuli (Test of Memory Malingering, TOMM) and one employing verbal recognition memory (Word Memory Test, WMT). More than twice as many people failed the WMT than TOMM. CARB failure rates were intermediate between those on the other two tests. Thus, tests of recognition memory using digits, pictorial stimuli or verbal stimuli, all of which are objectively extremely easy tasks, resulted in widely different failure rates. This suggests that, while these tests may be highly specific, they vary substantially in their sensitivity to response bias."
"Computerized Assessment of Response Bias
in forensic neuropsychology" by Lyle Allen, Grant Iverson, & Paul Green.
Journal of Forensic Neuropsychology, 2002, 3, pages 205-225.
Summary: Describes the development and subsequent research for the CARB (Computerized Assessment of Response Bias) and concludes: "Patients with moderate or severe brain injury, or neurological disease, easily pass CARB. . . . CARB is sensitive to poor effort or suboptimal performance in patients with a wide variety of diagnoses, including outpatients with mild traumatic brain injuries, pain disorders, fibromyalgia, and depression."
"Detecting
neuropsychological malingering: Effects of coaching and information" by Thomas
Dunn, Paula Shear, Steven Howe, & Douglas Ris. Archives of Clinical
Neuropsychology,
March, 2003, pages 121-134.
Summary: This study found: "that the CARB-97 and WMT differentiate 'normal' from 'malingered' instructional sets, and show little difference between naive and coached malingering efforts. . . . [R]esponse times, in addition to items correct, may also be effective in detecting those who are not giving their full effort."
"Effects of coaching on symptom validity testing in chronic pain
patients presenting for disability assessments" by Roger Gervais,
Paul Green, Lyle Allen, & Grant Iverson. Journal of Forensic Neuropsychology,
vol. 2, #2, 2001, pages 1-19.
Summary: "A total of 118 chronic pain patients (mean age 47 yrs) seen for disability-related psychological evaluations were administered the Computerized Assessment of Response Bias (CARB) and the Word Memory Test (WMT). Failure rates of over 40% were observed on both tests. When subsequent patients were informed that the CARB is unaffected by pain or emotional distress, the CARB failure rate dropped to 6%. Coaching on the WMT, however, did not alter the failure rate on this test."
"Effects of injury severity and cognitive exaggeration on olfactory
deficits in head injury compensation claims" by Paul Green & Grant
Iverson. NeuroRehabilitation, vol. 16, #4, 2001, pages 237-243.
Summary: This study examined the "relationship between exaggeration and scores on an olfactory discrimination test in 448 patients being assessed in connection with a claim for financial benefits. Patients completed 2 tests designed to detect exaggerated cognitive deficits, the Computerized Assessment of Response Bias (CARB) and the Word Memory Test (WMT). The diagnostic groups included 322 head injury cases (average age 38.7 yrs), varying from very minor to very severe. Patients with more severe traumatic brain injuries were 10-12 times more likely to have olfactory deficits than persons with trivial to mild head injuries. In a subgroup of patients who failed either the CARB or WMT, there was no relationship between injury severity and total scores on the smell test. The dose-response relationship between brain injury severity and olfactory deficits is severely attenuated when patients who are probably exaggerating their cognitive deficits are included in the analyses. Patients with trivial to mild head injuries who demonstrated adequate effort on the CARB and WMT were no more likely to show olfactory deficits than 126 non-head-injured control Ss."
Computerized Dot Counting Test (CDCT)
"Intra-individual variability as an indicator of malingering in head injury" by Esther, Strauss, Daniel Slick, Judi Levy-Bencheton, Michael Hunter, Stuart MacDonald, & David Hultsch. Archives of Clinical Neuropsychology, July, 2002, pages 423-444.
Summary: Analog study of malingering using the Reliable Digit Span (RDS) task, the Victoria Symptom Validity Test (VSVT), and the Computerized Dot Counting Test (CDCT). Half the participants were asked to fake an injury convincingly and the other half were asked to take the tests honestly. Findings suggest that "regardless of an individual's experience, consideration of both level of performance (particularly on forced-choice symptom validity tasks) and intraindividual variability holds considerable promise for the detection of malingering."
Computerized Tests of Information Processing (CTIP)
"Detecting simulation of attention deficits using reaction time tests" by Janna Willison & Tom Tombaugh. Archives of Clinical Neuropsychology, January, 2006, vol. 21, #1, pages 41-52.
Summary: "The current study examined if a newly developed series of reaction time tests, the Computerized Tests of Information Processing (CTIP) ... were sensitive to simulation of attention deficits commonly caused by traumatic brain injury (TBI). The CTIP consists of three reaction time tests: Simple RT, Choice RT, and Semantic Search RT. These tests were administered to four groups: Control, Simulator, Mild TBI, and Severe TBI. Individuals attempting to simulate attention deficits produced longer reaction time scores, made more incorrect responses, and exhibited greater variability than cognitively-intact individuals and those with TBI. Sensitivity and specificity values were comparable or exceeded those obtained on the Test of Memory Malingering..."
Conditional Reasoning Tests (CRT)
"Measurement Issues Associated With Conditional Reasoning Tests: Indirect Measurement and Test Faking" by James M. LeBreton, Cheryl D. Barksdale, Jennifer Robin, & Lawrence R. James. Journal of Applied Psychology, January 2007 Vol. 92, No. 1, p1ages -16.
Summary: This article presents "3 studies examining 2 related measurement issues associated with conditional reasoning tests (CRTs). Study 1 examined the necessity of maintaining indirect assessment when administering CRTs. Results indicated that, compared with a control condition, 2 experimental conditions that disclosed the purpose of assessment yielded significant mean shifts on a CRT. Study 2 explored whether CRTs could be faked when the purpose of assessment was not disclosed. Results indicated that when indirect measurement was maintained, CRTs appeared to be resistant to faking. Study 3 compared scores on the Conditional Reasoning Test for Aggression across student, applicant, and incumbent samples. Results indicated no significant mean differences among these samples." The article states that these findings "answer important methodological questions concerning CRTs. Moreover, these studies extend the work of James (1998) by satisfying recommendations for research exploring the necessity of indirect measurement, the ease with which keyed item responses can be identified, and the extent to which CRTs are susceptible to faking (James & Mazerolle, 2002). Additionally, our work represents an attempt to answer Snell et al.'s (1999) call for more innovative measurement systems, in particular those designed to circumvent problems associated with faking. Although skeptics may argue that failing to divulge a test's full intent, even in the spirit of reduced faking, is deceptive and unnecessary, Study 1 indicates that when the purpose of assessment was disclosed, individuals were able to identify the keyed item responses associated with rationalizing aggressive behavior. With respect to the measurement of constructs such as aggression (James, 1998), antisocial personality (Walton, 2004), and subclinical psychopathy (Gustafson, 1999, 2000; LeBreton et al., 2006) we advance that the goal of identifying potentially violent, dangerous, destructive, and antisocial individuals outweighs the goal of being completely candid about the specific form of reasoning being assessed."
Conners' Adult ADHD Rating Scale
"Detection of feigned ADHD in college students" by Sollman, M. J., J. D. Ranseen, et al. Psychological Assessment, 2010, 22(2), pages 325-335.
Summary: “The performance of 31 undergraduates financially motivated and coached about ADHD via Internet-derived information was compared to that of 29 ADHD undergraduates following medication washout and 14 students not endorsing symptomatology. Results indicated malingerers readily produced ADHD-consistent profiles. Symptom checklists, including the ADHD Rating Scale and Conners's Adult ADHD Rating Scale–Self-Rating Form: Long, were particularly susceptible to faking. Conners's Continuous Performance Test—II findings appeared more related to motivation than condition. Promising results were seen with all cognitive SVTs (Test of Memory Malingering [TOMM], Digit Memory Test, Letter Memory Test, and Nonverbal–Medical Symptom Validity Test), particularly TOMM Trial 1 when scored using Trial 2 criteria. All SVTs demonstrated very high specificity for the ADHD condition and moderate sensitivity to faking, which translated into high positive predictive values at rising base rates of feigning. Combining 2 or more failures resulted in only modest declines in sensitivity but robust specificity. Results point to the need for a thorough evaluation of history, cognitive and emotional functioning, and the consideration of exaggerated symptomatology in the diagnosis of ADHD.”
Conners' Continuous Performance Test-II
"Detection of feigned ADHD in college students" by Sollman, M. J., J. D. Ranseen, et al. Psychological Assessment, 2010, 22(2), pages 325-335.
Summary: “The performance of 31 undergraduates financially motivated and coached about ADHD via Internet-derived information was compared to that of 29 ADHD undergraduates following medication washout and 14 students not endorsing symptomatology. Results indicated malingerers readily produced ADHD-consistent profiles. Symptom checklists, including the ADHD Rating Scale and Conners's Adult ADHD Rating Scale–Self-Rating Form: Long, were particularly susceptible to faking. Conners's Continuous Performance Test—II findings appeared more related to motivation than condition. Promising results were seen with all cognitive SVTs (Test of Memory Malingering [TOMM], Digit Memory Test, Letter Memory Test, and Nonverbal–Medical Symptom Validity Test), particularly TOMM Trial 1 when scored using Trial 2 criteria. All SVTs demonstrated very high specificity for the ADHD condition and moderate sensitivity to faking, which translated into high positive predictive values at rising base rates of feigning. Combining 2 or more failures resulted in only modest declines in sensitivity but robust specificity. Results point to the need for a thorough evaluation of history, cognitive and emotional functioning, and the consideration of exaggerated symptomatology in the diagnosis of ADHD.”
"Detection of malingering in mild traumatic brain injury with the Conners' Continuous Performance Test-II" by Ord, J. S., A. C. Boettcher, et al. Journal of Clinical and Experimental Neuropsychology, 2011, 32(4), pages 380-387.
Summary: “Classification accuracy for the detection of malingered neurocognitive dysfunction (MND) in mild traumatic brain injury (TBI) is examined for two selected measures from the Conners' Continuous Performance Test-II (CPT-II) using criterion-groups validation... At cutoffs associated with at least 95% specificity in both mild and M/S TBI, sensitivity to MND in mild TBI was 30% for Omissions, 41% for Hit Reaction Time Standard Error, and 44% using both indicators. These results support the use of the CPT-II as a reliable indicator for the detection of malingering in TBI when used as part of a comprehensive diagnostic system.”
Criminal Offender Infrequency Scale (see MMPI-2)
Depression, Anxiety and Stress Scales (DASS-21)
"Does online psychological test administration facilitate faking?" by Grieve, R. and H. T. de Groot. Computers in Human Behavior, 2011, 27(6), pages 2386-2391.
Summary: “As predicted, participants were able to fake good on the HEXACO-60 and to fake bad on the DASS-21. Also as predicted, there were no significant differences in faked scores as a function of test administration mode. Further, examination of effect sizes confirmed that the influence of test administration mode was small. It was concluded that online and pen-and-paper presentation are largely equivalent when an individual is faking responses in psychological testing.”
Diagnostic and Statistical Manual, Fourth Edition (DSM-IV)
"Beyond DSM-IV: A meta-review of the literature on malingering" by Allan Gerson. American Journal of Forensic Psychology, 2002, pages 57-69.
Summary: A review of 1,040 malingering studies in light of the DSM-IV (Diagnostic and Statistical Manual, Fourth Edition) definition. Concludes that "the DSM-IV is far too limited in its definition to be considered as a reliable method of detecting malingering and, by its language, may frequently lead to false positives."
Digit Memory Test
"Detection of malingering behavior at different levels of task difficulty in Hong Kong Chinese" by Vivienne Chiu & Tatia Lee. Rehabilitation Psychology, May, 2002, pages 194-203.
Summary: Used the Digit Memory Test to study whether level of task difficulty affects the identification of malingering. Concluded that "classification accuracy was higher at a higher level of difficulty."
"Using symptom validity tests to detect malingered ADHD in college students" by Jasinski, L. J., J. P. Harp, et al. The Clinical Neuropsychologist, 2011, 25(8), pages 1415-1428.
Summary: “Undergraduates with a history of diagnosed ADHD were randomly assigned either to respond honestly or exaggerate symptoms, and were compared to undergraduates with no history of ADHD or other psychiatric disorders who were also randomly assigned to respond honestly or feign symptoms of ADHD. Similar to Sollman et al. (2010) and other recent research on feigned ADHD, several symptom validity tests, including the Test of Memory Malingering (TOMM), Letter Memory Test (LMT), Digit Memory Test (DMT), Nonverbal Medical Symptom Validity Test (NV-MSVT), and the b Test were reasonably successful at discriminating feigned and genuine ADHD. When considered as a group, the criterion of failure of 2 or more of these SVTs had a sensitivity of .475 and a specificity of 1.00.”
Digit Recognition Test
"Comparison of three tests to detect feigned amnesia: The effects of feedback and the measurement of response latency" by Barbara Bolan, Jonathan Foster, Ben Schmand, & Steve Bolan. Journal of Clinical & Experimental Neuropsychology, April, 2002, pages 154-167.
Summary: Reported 3 experiments assessing the effectiveness of an English version of the "Amsterdam Short Term Memory Test (ASTM test) developed to detect feigned memory impairment. . . . Using a simulation design, the ASTM test compared favorably with the Test of Memory Malingering (TOMM) and appeared better than a newly-devised Digit Recognition Test (DRT)."
Dot Counting Task (DCT)
"Effects of motivation, coaching, and knowledge of neuropsychology
on the simulated malingering of head injury" by Kristi Erdal. Archives
of Clinical Neuropsychology, January, 2004, vol. 19, #1, pages 73-88.
Summary: Investigated whether students could successfully take head injury on the Rey 15-Item Test (FIT) and the Dot Counting Test (DCT) by randomly assigning them to one of 3 motivation groups -- no motivation, compensation, & avoidance of blame for motor vehicle accident -- and one of 3 coaching conditions -- no coaching, coaching post-concussive symptoms, & coaching symptoms in addition to warning about malingering detection. The author concluded that "coaching interaction on the accuracy variables indicated that those in the compensation condition performed the most poorly, and that coaching plus warning only tempers malingering on memory tasks, not timed tasks."
Evaluation of Competency to Stand Trial--Revised (ECST-R)
"Evaluation of malingering screens with competency to stand Trial patients: A known-groups comparison" by M. Vitacco, R. Rogers, J. Gabel, & J. Munizza. Law and Human Behavior, June 2007, vol. 31, #3, pages 249-260.
Summary: "The current study assessed the effectiveness of three common screening measures: the Miller Forensic Assessment of Symptoms Test (M-FAST; Miller, 2001), the Structured Inventory of Malingered Symptomatology (SIMS; Widows & Smith, 2004), and the Evaluation of Competency to Stand Trial-Revised Atypical Presentation Scale (ECST-R ATP; Rogers, Tillbrook, & Sewell, 2004). Using the Structured Interview of Reported Symptoms (SIRS) as the external criterion, 100 patients involved in competency to stand trial evaluations were categorized as either probable malingerers (n = 21) or nonmalingerers (n = 79). Each malingering scale produced robust effect sizes in this known-groups comparison. Results are discussed in relation to the comprehensive assessment of malingering within a forensic context."
"An Examination of the ECST-R as a Screen for Feigned Incompetency to
Stand Trial" by Richard Rogers, Rebecca Jackson, Kenneth Sewell, & Kimberly
Harrison. Psychological Assessment, June, 2004, vol. 16, #2,
pages139-145.
Summary: This study found that "the ECST-R ATP scales appear to be homogenous scales with established clinical use as feigning screens in CST evaluations." The article concludes: "CST measures have traditionally neglected an integral component of competency evaluations, namely the substantial possibility of feigned incompetency. Current data on primary ATP scales are consistent with unidimensional scales that differentiate between feigned and genuine ECST-R protocols. Several cut scores are highly effective at screening forensic populations (jail detainees and forensic inpatients) for the possibility of feigned incompetency. The next step is testing the ATP cut scores as feigning screens with additional forensic populations. Finally, the current investigation explores whether the ATP-BI can be used in established cases of malingering to address whether the feigning encompasses competency to stand trial."
Fake Bad Scale
Please see MMPI-2
Forced Choice Tests
"Chance Guessing in a Forced-Choice Recognition Task and the Detection of Malingering" by Kenneth Flowers, Carol Bolton, & Nicola Brindle. Neuropsychology, March, 2008, vol. 22, #2, pages 273–277.
Summary: "Results from the chance group suggest that variation in guessing rates on a two-alternative FC test follows a normal distribution around the postulated 50% rate for both individual scores and test items. The range of variation is not large but overlaps both the malingering groups below the 50% point and normal performance at the lower end of the control group. As this latter sample is of normal young adults who are mostly performing toward the top of the scale, the degree of overlap represents the minimal likely to be found; in other groups in which recognition levels are lower, the overlap will be greater. Thus, where scores are high, this chance factor will not be important, but it may distort the scores of amnesic groups, which include a high proportion of guesses. There is no evidence, at least in this sample, that items whose guessing rate is high are more likely to be recognized by normal participants, that is, faces that might be chosen as more distinctive in some way (or even as more pleasant) are not more likely to be remembered. (Where scores are much lower, as in an amnesic group, there may be a greater correlation with guessing). Scores of the control group also show considerable variation, and even without an adjustment for guessing overlap the chance distribution. The real spread of recognition levels, however, does not show up on the standard unadjusted scale because on a one-in-two FC test, guessing can make up for half the number of any items missed from memory, and hence apparent scores decline at only half the rate of the underlying real memory level. The test, therefore, minimizes differences between levels of recognition and thus any deficits shown by amnesic patients."
General Reviews & Issues
"Advances and issues in the diagnostic differential of malingering versus brain injury" by Ernest Bordini, Manuel Chaknis, Rose Ekman-Turner, & Robert Perna. NeuroRehabilitation, 2002, pages 93-104.
Summary: A general review of developments in identifying malingered brain injury through clinical and neuropsychological assessment.
"Assessing Malingered Posttraumatic Stress Disorder: A Critical Review"
by Jennifer Guriel & William Fremouw. Clinical Psychology Review, December,
2003, vol. 23, #7, pages 881-904.
Summary: Review of the available literature leads the authors to conclude " that currently, there is no method or single instrument that is universally reco gnized as being the best tool to detect malingering in PTSD claimants."
"Assessment of malingering and exaggeration in patients involved in head
injury litigation." by Beiling Gao. Chinese Journal of Clinical Psychology,
2001, vol. 9, #3, pages 233-236.
Summary: Examines the difficulty of identifying malingering in this area (i.e., traumatic head injury litigation) and reviews the relevant research on such measures as as the Fake-Scale of the MMPI, Mittenberg's Malingering Index, Forced-Choice Tests, emphasizing neuropsychological approach.
"Assessment of response bias in mild head injury: Beyond malingering tests"
by
Scott Millis & Chris Volinsky. Journal of Clinical & Experimental
Neuropsychology, December, 2001, pages 809-828.
Summary: The authors write that the "evaluation of response bias and malingering in the cases of mild head injury should not rely on a single test. Initial injury severity, typical neuropsychological test performance patterns, preexisting emotional stress or chronic social difficulties, history of previous neurological or psychiatric disorder, other system injuries sustained in the accident, preinjury alcohol abuse, and a propensity to attribute benign cognitive and somatic symptoms to a brain injury must be considered along with performances on specific measures of response bias. This article reviews empirically-supported tests and indices."
"Base Rates of Malingering and Symptom
Exaggeration" by
Wiley Mittenberg, Christine Patton, Elizabeth Canyock, & Daniel Condit.
Journal of Clinical and Experimental Neuropsychology, December, 2002, pages
1094-1102.
Summary: Surveyed members of the American Board of Clinical Neuropsychology to obtain estimated base rates of probable malingering and symptom exaggeration. "Base rates did not differ among geographic regions or practice settings, but were related to the proportion of plaintiff vs defense referrals."
"Beyond DSM-IV: A meta-review of the literature on malingering" by
Allan Gerson. American Journal of Forensic Psychology, 2002, pages 57-69.
Summary: A review of 1,040 malingering studies in light of the DSM-IV (Diagnostic and Statistical Manual, Fourth Edition) definition. Concludes that "the DSM-IV is far too limited in its definition to be considered as a reliable method of detecting malingering and, by its language, may frequently lead to false positives."
"Chance Guessing in a Forced-Choice Recognition Task and the Detection
of Malingering" by Kenneth A. Flowers, Carol Bolton, & Nicola Brindle.
Neuropsychology, March, 2008, vol. 22, #2, pages 273–277.
Summary: "Results from the chance group suggest that variation in guessing rates on a two-alternative FC test follows a normal distribution around the postulated 50% rate for both individual scores and test items. The range of variation is not large but overlaps both the malingering groups below the 50% point and normal performance at the lower end of the control group.... The requirement for a significantly worse-than-chance score to detect malingering, however, means that only out-and-out malingerers will be caught by this method. What is perhaps a more common condition, namely patients who have a genuine impairment but want to exaggerate it, may paradoxically be more difficult to detect by this means because their genuine amnesia will show the normal compensation of guessing on items they do not know, which means that they will have a greater chance bonus factor in their performance despite themselves. Hence, they will not perform notably below the minimum chance level and thus their malingering will be less obvious."
"Clinical and conceptual problems in the attribution of malingering in forensic evaluations" by S. L. Drob, K. B. Meehan, & S. E. Waxman. Journal of the American Academy of Psychiatry & the Law, March, 2009, vol. 37, #1, pp. 98-106.
Summary: "The authors review clinical and conceptual errors that contribute to false attributions of malingering in forensic evaluations. Unlike the mental disorders, malingering is not defined by a set of (relatively) enduring symptoms or traits; rather, it is an intentional, externally motivated, and context-specific form of behavior. Despite this general knowledge, attributions of malingering are often made by using assessment tools that may detect feigning but cannot be relied upon to determine incentive and volition or consciousness (defining characteristics of malingering). In addition, forensic evaluators may overlook the possibility that feigning is a function of true pathology, as in Ganser syndrome or the factitious disorders, or that a seemingly malingered presentation is due to symptoms of an underlying disorder, such as dissociative identity disorder (DID). Other factors that set the stage for false positives, such as pressure on forensic specialists to identify malingering at all costs, failure to consider the base rate problem, and cultural variables, are also reviewed."
"Coaching Clients to Take Psychological
and Neuropsychological Tests" (pages 373-379) by Tara Victor & Norman
Abeles. Professional Psychology: Research & Practice, 2004, vol. 35, #4,
pages 373-379.
Summary: This article notes that "The results of a recent survey that was mailed to members of the National Academy of Neuropsychology (NAN) and the Association of Trial Lawyers indicated that (a) 75% of attorneys said they spend an average of 25-60 min preparing their clients by providing information about the tests they will take and by suggesting how clients should respond, and (b) 44% of attorneys who responded wanted to know what specific neuropsychological tests the psychologist planned to administer, and most reported receiving this information. Likewise, a widely cited survey found that 48% of attorneys and 33% of law students in the sample believed that their clients should be informed about the nature of the psychological tests to be given in an assessment, including information about malingering scales." The article "reviews the empirical literature with respect to the effects of coaching on psychological tests, discusses current ethical and legal standards relevant to coaching on psychological tests, and offers suggestions on how the field of assessment psychology might deal with this challenge."
"A critical analysis of the MND criteria for feigned cognitive impairment: Implications for forensic practice and research" by Rogers, R., S. D. Bender, et al. Psychological Injury and Law, 2011, 4(2), pages 147-156.
Summary: “The development of the Malingered Neurocognitive Dysfunction (MND) model has been highly influential for both feigning research and neuropsychological practice. In striving to be a comprehensive model of malingering, MND proposes complex criteria for ascertaining possible, probable, and definite levels. In its critical review, this article suggests the possibility of an MND bias towards the over-classification of malingering. It also examines the limits of MND research to adequately test the MND model.”
"Daubert, Cognitive Malingering, and Test Accuracy" by Douglas Mossman.
Law & Human Behavior, June, 2003, pages 229-249.
Examining data from the TOMM as an example, this article explores general issues of test accuracy in assessing malingering. In the conclusion, the author writes: "An examination of data on TOMM's performance shows that errors in test-based judgments about malingering have at least two sources. First, malingering tests themselves, though very accurate, are likely to be imperfect, because the test results of honest responders overlap with those of test takers who are feigning or exaggerating. Even when a test result justifies a great increase in one's suspicion of malingering, some small likelihood may persist that the evaluee who produced the result was responding honestly. Second, ambiguity or imprecision is inherent in estimates of the pretest probability of malingering and the accuracy indices that characterize the malingering test. As a result, an evaluator's belief about the posttest probability of malingering is best characterized as an interval that can be calculated from (and that therefore incorporates) mathematical formulations of imprecision in base rates and accuracy indices. The Daubert decision suggests that courts scrutinize 'the known or potential rate of error' of scientific techniques that form the bases of proffered expert opinions. . . . Malingering measures usually will have many possible scores, and therefore, many possible error rates. . . . If, in making a judgment about a malingering test's 'error rate,' a court is concerned about the likelihood that an expert's conclusion is wrong, then Bayes's Theorem says the calculated error rate stems in part from the expert's pretesting information and beliefs about whether an evaluee was malingering, and from the ambiguity that results from efforts to express such information and beliefs in probabilistic terms."
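The Bayesian calculation Mossman describes, in which the posttest probability of malingering depends on both the evaluator's pretest estimate and the test's accuracy indices, can be sketched directly from Bayes's Theorem. The sensitivity, specificity, and pretest probabilities below are hypothetical and are not TOMM data from the article.

```python
# Illustrative sketch only: Bayes's Theorem applied to a failed malingering
# test. All accuracy figures and pretest probabilities are hypothetical.

def posttest_probability(pretest: float, sensitivity: float, specificity: float) -> float:
    """Probability of malingering given a positive (failed) test result."""
    prior_odds = pretest / (1 - pretest)
    likelihood_ratio = sensitivity / (1 - specificity)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# The same failed test supports quite different conclusions at different
# pretest probabilities, which is part of the imprecision Mossman emphasizes.
for pretest in (0.10, 0.30, 0.50):
    post = posttest_probability(pretest, sensitivity=0.85, specificity=0.95)
    print(f"pretest probability {pretest:.0%} -> posttest probability {post:.0%}")
```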
"Detection and Management of Malingering in a Clinical Setting" by B.
Adetunji, B. Basil, M. Matthews, A. Williams, T. Osinowo, & O. Olakunie.
Primary Psychiatry, 2006, vol. 13, #1, pages 61-69.
Summary: Reviews diverse approaches and instruments for assessing malingering. Notes that "While there are batteries of neuropsychologic instruments specific enough to make a diagnosis of malingering, clinicians must not forget that malingering could co-exist with genuine psychiatric illnesses."
"Determinations of malingering: Evolution from case-based methods to detection strategies" by Richard Rogers & Amor Correa. Psychiatry, Psychology, & Law, October, 2008, Vol 15(2), pages 213-223.
Summary: "This article briefly reviews detection strategies for two distinct domains: feigned cognitive impairment and feigned mental disorders. It examines general categories of detection strategies (i.e., unlikely and amplified) and clinical methods (e.g., multiscale inventories and structured interviews). Recommendations are presented for malingering screens and malingering determinations."
"Factors differentiating successful versus unsuccessful malingerers" by
John Edens, Laura Guy, Randy Otto, Jacqueline Buffington, Tara Tomicic,
& Norman Pothyress. Journal of Personality Assessment, October, 2001, pages
333-338.
Summary: Studied 540 participants directed to feign a specific mental disorder on an array of self-report measures developed to identify malingering. "Postexperiment questionnaires indicated that those who were able to appear symptomatic while avoiding being detected as feigning (n = 60) were more likely to endorse a lower rate of legitimate symptoms, to avoid overly unusual or bizarre items, and to base their responses on their own personal experiences."
"Malingering in children: Fibs and faking" by J. Walker. Child and Adolescent Psychiatric Clinics of North America, 2011, 20(3), pages 547-556.
Summary: “Research has established that children can make efforts to deceive others and that malingering or underperformance in psychiatric and psychological evaluations is common.... Children who behave in a suspect fashion and children who have known motivations to present as more pathologic than they are should be formally assessed with psychological techniques to rule out the presence of malingering.”
"Malingering involving insurance fraud: When it pays to be ill" by Beach, S. R. and T. A. Stern. Psychosomatics: Journal of Consultation Liaison Psychiatry, 2011, 52(3), pages 280-282.
Summary: This article discusses case reports of people who appear to be malingering in order to obtain insurance benefits.
"Neural correlates of feigned memory impairment are distinguishable from answering randomly and answering incorrectly: An fMRI and behavioral study" by Liang, C.-Y., Z.-Y. Xu, et al. Brain and Cognition, 2012, 79(1), pages 70-77.
Summary: “Previous functional magnetic resonance imaging (fMRI) studies have identified activation in the prefrontal–parietal–sub-cortical circuit during feigned memory impairment when comparing with truthful telling. Here, we used fMRI to determine whether neural activity can differentiate between answering correctly, answering randomly, answering incorrectly, and feigned memory impairment. In this study, 12 healthy subjects underwent block-design fMRI while they performed digit task of forced-choice format under four conditions: answering correctly, answering randomly, answering incorrectly, and simulated feigned memory impairment. There were three main results. First, six areas, including the left prefrontal cortex, the left superior temporal lobe, the right postcentral gyrus, the right superior parietal cortex, the right superior occipital cortex, and the right putamen, were significantly modulated by condition type. Second, for some areas, including the right superior parietal cortex, the right postcentral gyrus, the right superior occipital cortex, and the right putamen, brain activity was significantly greater in feigned memory impairment than answering randomly. Third, for the areas including the left prefrontal cortex and the right putamen, brain activity was significantly greater in feigned memory impairment than answering incorrectly. In contrast, for the left superior temporal lobe, brain activity was significantly greater in answering incorrectly than feigned memory impairment. The results suggest that neural correlates of feigned memory impairment are distinguishable from answering randomly and answering incorrectly in healthy subjects.”
"Neuropsychological and psychological aspects of malingered post-traumatic stress disorder" by Demakis, G. J. and J. D. Elhai. Psychological Injury and Law, 2011, 4(1), pages 24-31.
Summary: “This article is divided into four sections. First, we address why individuals malinger PTSD as well as the challenges in detecting an invalid PTSD symptom presentation. Second, we discuss issues of cognitive functioning in PTSD and then the prevalence of and common patterns of poor effort on neuropsychological testing among individuals feigning PTSD. Third, we discuss psychological functioning in PTSD and then the prevalence and patterns of functioning on psychological measures of malingering in this population. Finally, recommendations for detecting invalid PTSD symptom presentations are provided.”
"Not just malingering: Syndrome diagnosis in traumatic brain injury litigation"
by Lawrence Miller. NeuroRehabilitation, 2001, vol. 16, #2, pages
109-122.
Summary: The author writes: "When patients present with syndromes we mistrust
or misunderstand, clinicians are often quick to make a determination of
malingering. However, the use of malingering as a default diagnosis neglects
a variety of clinical possibilities that may be relevant for treatment
and forensic disposition. In neuropsychology, the growing use of a malingering
diagnosis has recently been fueled by the increasingly adversarial nature
of forensic brain injury litigation in which the goal is often less to
provide an objective evaluation of cognition and personality as to brand
all personal injury claimants as manipulative frauds. Neuropsychologists
whose knowledge base and clinical experience involves mainly the administration
and scoring of psychometric tests may ignorantly, if innocently, overlook
alternative diagnoses and syndromes that their education and training have
ill-prepared them to recognize. This paper describes some of the syndromes
that may present in clinical and forensic practice with brain-injured patients."
"Recalled peritraumatic reactions, self-reported PTSD, and the impact of malingering and fantasy proneness in victims of interpersonal violence who have applied for state compensation" by Kunst, M., F. W. Winkel, et al. Journal of Interpersonal Violence, 2011, 26(11), pages 2186-2210.
Summary: “The present study explores the associations between three types of peritraumatic reactions (dissociation, distress, and tonic immobility) and posttraumatic stress disorder (PTSD) symptoms in a sample of 125 victims of interpersonal violence who had applied for compensation with the Dutch Victim Compensation Fund (DCVF). In addition, the confounding roles of malingering and fantasy proneness are examined. Results indicate that tonic immobility did not predict PTSD symptom levels when adjusting for other forms of peritraumatic reactions, whereas peritraumatic dissociation and distress did. However, after the effects of malingering and fantasy proneness had been controlled for, malingering is the only factor associated with increased PTSD symptomatology.”
"Trying to Beat the System: Misuse of the Internet to Assist in Avoiding
the Detection of Psychological Symptom Dissimulation" by Mark Ruiz,
Evan Drake, Aviva Glass, David Marcotte, & Wilfred van Gorp. Professional
Psychology: Research and Practice, June, 2002, pages 294-299.
Summary: In a search of web sites, this study found about 2-5% seemed to constitute "a direct threat to test security." These sites "contained the most damaging information regarding the procedures for modifying psychological test performance. Detailed information about psychological assessment instruments, along with explicit instruction on how to modify test performance, were found on many of these sites. Some of these sites also contained information about multiple instruments (e.g., MMPI-2 and Rorschach), thereby further facilitating an individual's ability to prepare for the upcoming evaluation. In each case, these sites provided specific information that could be used by a layperson to potentially change the outcome in an evaluation. . . . [M]any sites did have information that could be used to 'fake-bad," particularly with respect to cognitive impairment." The article concludes that "the results from the current probe indicate that a small number of Web sites contain information that could threaten the validity of psychological assessment instruments and evaluations."
"Update on neuropsychological assessment of malingering" by Kenneth Goldberg
& Eric Haas. Journal of Forensic Psychology Practice, 2001, pages 45-53.
Summary: The authors present a review of the methods for identifying malingering in traumatic brain injury litigation.
Glasgow Coma Scale
"Using the Wechsler Memory Scale-III to detect malingering in mild traumatic brain injury" by Ord, Jonathan S.; Greve, Kevin W.; & Bianchini, Kevin J. Clinical Neuropsychologist, July, 2008, vol. 22, #4, pages 689-704.
Summary: "This study examined the classification accuracy of the WMS-III primary indices in the detection of Malingered Neurocognitive Dysfunction (MND) in Traumatic Brain Injury (TBI) using a known-groups design. Sensitivity, specificity, and positive predictive power are presented for a range of index scores comparing mild TBI non-malingering (n = 34) and mild TBI malingering (n = 31) groups. A moderate/severe TBI non-malingering (n = 28) and general clinical group (n = 93) are presented to examine specificity in these samples. In mild TBI, sensitivities for the primary indices ranged from 26% to 68% at 97% specificity. Three systems used to combine all eight index scores were also examined and all achieved at least 58% sensitivity at 97% specificity in mild TBI. Specificity was generally lower in the moderate/severe TBI and clinical comparison groups. This study indicates that the WMS-III primary indices can accurately identify malingered neurocognitive dysfunction in mild TBI when used as part of a comprehensive classification system."
"Use of specific malingering measures in a Spanish sample" by Vilar-López, Raquel; Gómez-Río, Manuel; Caracuel-Romero, Alfonso; Llamas-Elvira, Jose; & Pérez-García, Miguel. Journal of Clinical and Experimental Neuropsychology, August, 2008, vol. 30, #6, pages 710-722.
Summary: "There are an increasing number of tests available for detecting malingering. However, these tests have not been validated for using in Spanish speakers. The purpose of this study is to explore the value of three specific malingering tests in the Spanish population. This study used a known-groups design, together with a group of analog students. The results show that both the Victoria Symptom Validity Test and the b Test can be used to detect malingering in Spanish population. However, some restrictions must be applied when the Rey 15-Item Test is administered and interpreted."
Gudjonsson Suggestibility Scales
"Detecting 'faking bad' on the Gudjonsson Suggestibility Scales by Julian Boon, Linsey Gozna, & Stephen Hall. Personality and Individual Differences, January, 2008, Vol 44, #1, pages 263-272.
Summary: "Little is known of the ability of interviewees to fake bad on the Gudjonsson Suggestibility Scales (GSS's) and this study sought to investigate the degree to which this could be achieved. Participants were randomly allocated to one of three groups--Standard Procedure, Test Aware and Faking Bad. Performance levels were compared both among groups and with established population norms. The findings support the view that the participants who were attempting to fake bad on the GSS were successful in doing so on the principal suggestibility measures of the test. However they also indicate that there may be potential in coding for additional information which can reveal 'red-flags' with which to unmask the interviewees attempting to fake-bad."
Halstead-Reitan Neuropsychological Test Battery
"Category test validity indicators: Overview
and practice recommendations" by Jerry Sweet & John King. Journal of
Forensic Neuropsychology, 2002, 3, pages 241-274.
Summary: Reviews studies that attempt to identify malingered brain injury. Concludes that: "In keeping with the universal recommendations for effort tests and validity indicators from reviews of the neuropsychological malingering literature in the last 12 years, CatT validity indicators should not be viewed in isolation. Rather, they should be considered primarily with regard to validity of CatT results and as one source of relevant information in the detection of insufficient effort and the ultimate complex judgment, among a subset of insufficient effort cases, that the cause is malingering."
"Criterion groups validation of the Seashore Rhythm Test and Speech Sounds Perception Test for the detection of malingering in traumatic brain injury" by Curtis, Greve, Brasseux, & Bianchini. Clinical Neuropsychologist, July, 2010, vol 24, #5, pages.
Summary: "A criterion-groups validation was used to determine the classification accuracy of the Seashore Rhythm Test (SRT) and Speech Sounds Perception Test (SSPT) in detecting malingered neurocognitive dysfunction (MND) in traumatic brain injury (TBI). TBI patients were classified into the following groups: (1) Mild TBI Not-MND (n = 24); (2) Mild TBI MND (n = 27); and (3) Moderate/Severe TBI Not-MND (n = 23). A sample of 90 general clinical patients was utilized for comparison. Results showed that both SRT correct and SSPT errors differentiated malingerers from non-malingerers in the Mild TBI sample. At 96% specificity, sensitivities were 37% for SRT correct and 59% for SSPT errors. Joint classification accuracy showed that the best accuracy was achieved when using a cut-off associated with a 4% false positive error rate in the Mild TBI sample. Specificity was considerably lower in the Moderate/Severe TBI and General Clinical groups."
"Criterion groups validation of the Seashore Rhythm Test and Speech Sounds Perception Test for the detection of malingering in traumatic brain injury" by Curtis, K. L., K. W. Greve, et al. The Clinical Neuropsychologist, 2010, 24(5), pages 882-897.
Summary: “A criterion-groups validation was used to determine the classification accuracy of the Seashore Rhythm Test (SRT) and Speech Sounds Perception Test (SSPT) in detecting malingered neurocognitive dysfunction (MND) in traumatic brain injury (TBI).... Results showed that both SRT correct and SSPT errors differentiated malingerers from non-malingerers in the Mild TBI sample. At 96% specificity, sensitivities were 37% for SRT correct and 59% for SSPT errors. Joint classification accuracy showed that the best accuracy was achieved when using a cut-off associated with a 4% false positive error rate in the Mild TBI sample. Specificity was considerably lower in the Moderate/Severe TBI and General Clinical groups.”
"Differentiating malingering from genuine cognitive dysfunction
using the Trail Making Test-ration and stroop interference scores" by J.
Egeland & T. Langfjæran. Applied Neuropsychology, 2007, vol.
14, #2, pages 113-119.
Summary: "In this study possible malingerers (n = 41), impaired (30) or cognitively normal (17) litigants were compared on the Trail Making Test B:A ratio score and Stroop Interference. The majority of possible malingerers had a low TMT-ratio (<2.5) and an inverted Stroop effect, whereas the majority of impaired subjects had a high TMT-ratio and specific Stroop interference. Sensitivity to malingering was 61 and 68 percent, and specificity was 57 and 59 percent. This is too low for valid classification of individuals. However, the combination of both measures increases predictability. The clinician is advised to look for other evidence of malingering in cases of simultaneous low TMT-ratio and inverted Stroop. Patients with high TMT-ratio and Stroop interference, should be thoroughly examined for indications of brain disease. "
"Evaluating effort with the Word Memory Test and Category Test--or
not: Inconsistencies in a compensation-seeking sample" by David Williamson,
Paul Green, Lyle Allen, & Martin Rohling. Journal of Forensic Neuropsychology,
2003, vol. 3, #3, pages 19-44.
Summary: This study "compared the groups identified by the Booklet Category Test (BCT) criteria published by Tenhula and Sweet (1996) and the effort-sensitive measures of the Word Memory Test (WMT; Green, Allen & Astner, 1996) in a large sample seeking compensation after suffering head injuries of varying levels of severity. Results revealed substantial differences between the groups identified by each technique as putting forth suboptirnal effort. The groups identified by the WMT scored in a manner similar to samples identified by other investigators exhibiting poor effort. In contrast, the classifications based upon the Category Test decision rules appear to be confounded by true neurocognitive impairment, particularly in individuals who have suffered more severe brain injuries. Caution is warranted in using the Category Test decision rules to identify poor effort in compensation- seeking samples."
"Detecting exaggeration and malingering with the Trail Making Test" by
Grant Iverson, Rael Lange, Paul Green, & Michael Franzen.
Clinical Neuropsychologist, August 2002, vol. 16, #3, pages 398-406.
Summary: "71 patients seen as part of a hospital trauma service who had acute traumatic brain injuries, and 228 patients involved in head injury litigation" participated in this study. "As expected, the hospital patients with more severe traumatic brain injuries performed more poorly than the patients with less severe brain injuries on Trails A and Trails B. Very high positive predictive values for individuals with very mild head injuries on Trails A and B were identified; lower positive predictive values were obtained for individuals with more severe head injuries. The negative predictive values were only moderate, and the sensitivity was very low for all groups...."
"Malingering indexes for the halstead category test"
by Teri Forrest, Daniel Allen, & Gerald Goldstein. Clinical Neuropsychologist,
May, 2004, vol. 18, #2, pages 334-347.
Summary: The 2 studies reported in this article found that a "discriminant analysis of the set of malingering indexes classified 20% of the student malingerers as brain damaged, and 3.4% of the patients with brain damage as malingerers. A stepwise analysis indicated that number of errors on subtests I and II and Total Errors were particularly sensitive to malingering."
"Trail
Making Test cut-offs for malingering among cocaine, heroin, and alcohol abusers" by
Charles Roberts & Arthur Horton. International Journal of Neuroscience, February,
2003, pages 223-231.
Summary: Developed cut-off scores (at the first, fifth, and tenth percentiles) on the Trail Making Test (TMT) for people diagnosed as alcoholic, cocaine abusers, and heroin abusers. The cut-off scores were developed "to alert clinicians to the increasingly higher probability of poor effort when a substance abuser in 1 of the 3 groups scores beyond the one percent cut-off for his or her sample of primary drug of abuse."
"Utility of the Trail Making Test in the assessment of malingering in
a sample of mild traumatic brain injury litigants" by Sid O'Bryant, Robin
Hilsabeck, Jerid Fisher, & Robert McCaffrey. Clinical Neuropsychologist,
February, 2003, pages 69-74.
Summary: A study of 94 traumatic brain injury litigants found that Trail Making Test "errors did not discriminate between suspected and nonsuspected malingerers; however, the overall level of performance on the TMT was suppressed in suspected malingerers . . . . [The findings] suggest using caution when interpreting TMT scores as markers of malingering in TBI litigants."
"What Tests Are Acceptable for Use in Forensic Evaluations? A Survey
of Experts" by Stephen Lally. Professional Psychology: Research & Practice,
October, 2003, vol. 34, #5, pages 491-498.
Surveyed diplomates in forensic psychology "regarding both the frequency
with which they use and their opinions about the acceptability of a variety
of psychological tests in 6 areas of forensic practice. The 6 areas were
mental state at the offense, risk for violence, risk for sexual violence,
competency to stand trial, competency to waive Miranda rights, and malingering." In
regard to the forensic assessment of malingering, "the majority of
the respondents rated as acceptable the Structured Interview of Reported
Symptoms (SIRS), Test of Memory Malingering, Validity Indicator Profile,
Rey Fifteen Item Visual Memory Test, MMPI-2, PAI, WAIS-III, and Halstead-Reitan.
The SIRS and the MMPI-2 were recommended by the majority. The psychologists
were divided between acceptable and unacceptable about using either version
of the MCMI (II or III). They were also divided, although between acceptable
and no opinion, for the WASI, KBIT, Luria-Nebraska, and Stanford-Binet-Revised.
The diplomates viewed as unacceptable for evaluating malingering the Rorschach,
16PF, projective drawings, sentence completion, and TAT. The majority gave
no opinion on the acceptability of the Malingering Probability Scale, M-Test,
Victoria Symptom Validity Test, and Portland Digit Recognition Test."
HEXACO-60
"Does online psychological test administration facilitate faking?" by Grieve, R. and H. T. de Groot. Computers in Human Behavior, 2011, 27(6), pages 2386-2391.
Summary: “As predicted, participants were able to fake good on the HEXACO-60 and to fake bad on the DASS-21. Also as predicted, there were no significant differences in faked scores as a function of test administration mode. Further, examination of effect sizes confirmed that the influence of test administration mode was small. It was concluded that online and pen-and-paper presentation are largely equivalent when an individual is faking responses in psychological testing.”
Hooper Visual Organization Test
"Classification accuracy of multiple visual spatial measures in the detection of suspect effort" by Whiteside, D., D. Wald, et al. The Clinical Neuropsychologist, 2011, 25(2), pages 287-301.
Summary: “The purpose of this study was to evaluate the classification accuracy of several commonly used visual spatial measures, including the Judgment of Line Orientation Test, the Benton Facial Recognition Test, the Hooper Visual Organization Test, and the Rey Complex Figure Test-Copy and Recognition trials. Participants included 491 consecutive referrals who participated in a comprehensive neuropsychological assessment and met study criteria.... The groups differed significantly on all measures. Additionally, receiver operating characteristic (ROC) analysis indicated all of the measures had acceptable classification accuracy, but a measure combining scores from all of the measures had excellent classification accuracy. Results indicated that various cut-off scores on the measures could be used depending on the context of the evaluation. Suggested cut-off scores for the measures had sensitivity levels of approximately 32-46%, when specificity was at least 87%. When combined, the measures suggested cut-off scores had sensitivity increase to 57% while maintaining the same level of specificity (87%).”
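The combined measure mentioned above illustrates a broader point: pooling several validity indicators can raise sensitivity at a given specificity, as the combined score did here. One simple, commonly used way to pool indicators is to count how many individual measures fall beyond their cutoffs and flag a protocol only when that count reaches a threshold. The Python sketch below illustrates that general approach only; the measure names, cutoff values, and scores are invented and are not the composite actually constructed in this study.

# Hypothetical "count of measures beyond cutoff" composite for suspect effort.
# Test names, cutoffs, and scores are invented for illustration.

CUTOFFS = {
    "JLO": 18,                 # hypothetical cutoff: score <= cutoff counts as a failure
    "FacialRecognition": 37,
    "HooperVOT": 20,
    "RCFT_Copy": 26,
}

def failures(scores, cutoffs=CUTOFFS):
    """Count how many measures fall at or below their (hypothetical) cutoffs."""
    return sum(1 for test, cut in cutoffs.items()
               if scores.get(test, float("inf")) <= cut)

def flag_suspect_effort(scores, min_failures=2):
    """Flag only when several measures are failed, trading some sensitivity
    for higher specificity than any single measure provides."""
    return failures(scores) >= min_failures

example = {"JLO": 17, "FacialRecognition": 39, "HooperVOT": 19, "RCFT_Copy": 27}
print(failures(example), flag_suspect_effort(example))  # prints: 2 True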
Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT)
"'Sandbagging' baseline test performance on ImPACT, without detection, is more difficult than it appears." Schatz, P. and C. Glatts (2013). Archives of Clinical Neuropsychology 28(3): 236-244.
Summary: "The MSVT identified more participants in the naïve (80%) and coached (90%) groups than those automatically 'flagged' by ImPACT (60% and 75%, respectively). Inclusion of additional indicators within ImPACT increased identification to 95% of naïve and 100% of coached malingerers. These results suggest that intentional 'sandbagging' on baseline neurocognitive testing can be readily detected."
Integrated Visual & Auditory Continuous Performance Test (IVACPT)
"Detection of malingering in assessment of adult ADHD" by Colleen Quinn. Archives of Clinical Neuropsychology, May, 2003, pages 379-395.
Summary: Compared the ADHD Behavior Checklist and the Integrated Visual and Auditory Continuous Performance Test (IVACPT) in distinguishing among 3 groups of undergraduates: (a) those with ADHD, (b) those without ADHD attempting to feign ADHD, and (c) those without ADHD serving as a control group. "Analyses indicated that the ADHD Behavior Rating Scale was successfully faked for childhood and current symptoms. IVA CPT could not be faked on 81% of its scales. The CPT's impairment index results revealed: sensitivity 94%, specificity 91%, PPP 88%, NPP 95%."
Judgment of Line Orientation Test (JLO)
"Can malingering be identified with the Judgment of Line Orientation Test?" by Grant Iverson. Applied Neuropsychology, September, 2001, pages 167-173.
Summary: The author reported that, "A large sample of 294 individuals involved in head injury litigation took the JLO and 2 tests designed to detect biased responding, the Computerized Assessment of Response Bias (CARB) and the Word Memory Test (WMT), as part of a comprehensive neuropsychological evaluation. Patients were divided into groups on the basis of brain injury severity and whether or not they scored in the suspicious range on the CARB or WMT. The patients who were identified as providing biased responding on the CARB or WMT also scored significantly lower on the JLO. However, the cutoff score correctly identified only 9.9% of this group, with a 1% possible false-positive rate. A different cutoff score was selected that had .22 sensitivity and .96 specificity. Overall, these results suggest that the JLO has limited utility as a screen for biased responding; however, clinicians are encouraged to evaluate these scores carefully if they do not seem to make biological or psychometric sense."
"Classification accuracy of multiple visual spatial measures in the detection of suspect effort" by Whiteside, D., D. Wald, et al. The Clinical Neuropsychologist, 2011, 25(2), pages 287-301.
Summary: “The purpose of this study was to evaluate the classification accuracy of several commonly used visual spatial measures, including the Judgment of Line Orientation Test, the Benton Facial Recognition Test, the Hooper Visual Organization Test, and the Rey Complex Figure Test-Copy and Recognition trials. Participants included 491 consecutive referrals who participated in a comprehensive neuropsychological assessment and met study criteria.... The groups differed significantly on all measures. Additionally, receiver operating characteristic (ROC) analysis indicated all of the measures had acceptable classification accuracy, but a measure combining scores from all of the measures had excellent classification accuracy. Results indicated that various cut-off scores on the measures could be used depending on the context of the evaluation. Suggested cut-off scores for the measures had sensitivity levels of approximately 32-46%, when specificity was at least 87%. When combined, the measures suggested cut-off scores had sensitivity increase to 57% while maintaining the same level of specificity (87%).”
Luria-Nebraska
"What Tests Are Acceptable for Use in Forensic Evaluations? A Survey of Experts" by Stephen Lally. Professional Psychology: Research & Practice, October, 2003, vol. 34, #5, pages 491-498.
Surveyed diplomates in forensic psychology "regarding both the frequency with which they use and their opinions about the acceptability of a variety of psychological tests in 6 areas of forensic practice. The 6 areas were mental state at the offense, risk for violence, risk for sexual violence, competency to stand trial, competency to waive Miranda rights, and malingering." In regard to the forensic assessment of malingering, "the majority of the respondents rated as acceptable the Structured Interview of Reported Symptoms (SIRS), Test of Memory Malingering, Validity Indicator Profile, Rey Fifteen Item Visual Memory Test, MMPI-2, PAI, WAIS-III, and Halstead-Reitan. The SIRS and the MMPI-2 were recommended by the majority. The psychologists were divided between acceptable and unacceptable about using either version of the MCMI (II or III). They were also divided, although between acceptable and no opinion, for the WASI, KBIT, Luria-Nebraska, and Stanford-Binet-Revised. The diplomates viewed as unacceptable for evaluating malingering the Rorschach, 16PF, projective drawings, sentence completion, and TAT. The majority gave no opinion on the acceptability of the Malingering Probability Scale, M-Test, Victoria Symptom Validity Test, and Portland Digit Recognition Test."
M Test
"Developing sensitivity to distortion: Utility of psychological tests in differentiating malingering and psychopathology in criminal defendants" by Michaela Heinze. Journal of Forensic Psychiatry & Psychology, April, 2003, vol. 14, #1, pages 151-177.
Examined findings from 66 men hospitalized as incompetent to stand trial. Tests included the Minnesota Multiphasic Personality Inventory (MMPI-2), Structured Interview of Reported Symptoms (SIRS), M Test, the Atypical Presentation Scale (AP), and the Rey 15-Item Memory Test (RMT). "Overall, results support the use of psychological testing in the detection of malingering of psychotic symptoms."
"What Tests Are Acceptable for Use in Forensic Evaluations? A Survey
of Experts" by Stephen Lally. Professional Psychology: Research & Practice,
October, 2003, vol. 34, #5, pages 491-498.
Surveyed diplomates in forensic psychology "regarding both the frequency with which they use and their opinions about the acceptability of a variety of psychological tests in 6 areas of forensic practice. The 6 areas were mental state at the offense, risk for violence, risk for sexual violence, competency to stand trial, competency to waive Miranda rights, and malingering." In regard to the forensic assessment of malingering, "the majority of the respondents rated as acceptable the Structured Interview of Reported Symptoms (SIRS), Test of Memory Malingering, Validity Indicator Profile, Rey Fifteen Item Visual Memory Test, MMPI-2, PAI, WAIS-III, and Halstead-Reitan. The SIRS and the MMPI-2 were recommended by the majority. The psychologists were divided between acceptable and unacceptable about using either version of the MCMI (II or III). They were also divided, although between acceptable and no opinion, for the WASI, KBIT, Luria-Nebraska, and Stanford-Binet-Revised. The diplomates viewed as unacceptable for evaluating malingering the Rorschach, 16PF, projective drawings, sentence completion, and TAT. The majority gave no opinion on the acceptability of the Malingering Probability Scale, M-Test, Victoria Symptom Validity Test, and Portland Digit Recognition Test."
Malingering Probability Scale
"What Tests Are Acceptable for Use in Forensic Evaluations? A Survey of Experts" by Stephen Lally. Professional Psychology: Research & Practice, October, 2003, vol. 34, #5, pages 491-498.
Surveyed diplomates in forensic psychology "regarding both the frequency with which they use and their opinions about the acceptability of a variety of psychological tests in 6 areas of forensic practice. The 6 areas were mental state at the offense, risk for violence, risk for sexual violence, competency to stand trial, competency to waive Miranda rights, and malingering." In regard to the forensic assessment of malingering, "the majority of the respondents rated as acceptable the Structured Interview of Reported Symptoms (SIRS), Test of Memory Malingering, Validity Indicator Profile, Rey Fifteen Item Visual Memory Test, MMPI-2, PAI, WAIS-III, and Halstead-Reitan. The SIRS and the MMPI-2 were recommended by the majority. The psychologists were divided between acceptable and unacceptable about using either version of the MCMI (II or III). They were also divided, although between acceptable and no opinion, for the WASI, KBIT, Luria-Nebraska, and Stanford-Binet-Revised. The diplomates viewed as unacceptable for evaluating malingering the Rorschach, 16PF, projective drawings, sentence completion, and TAT. The majority gave no opinion on the acceptability of the Malingering Probability Scale, M-Test, Victoria Symptom Validity Test, and Portland Digit Recognition Test."
McGill Pain Questionnaire (MPQ)
"Exaggerated Pain Report in Litigants with Malingered Neurocognitive Dysfunction" by Glenn Larrabee. Clinical Neuropsychologist, August, 2003, vol. 17, #3, pages 395-401.
Summary: This study of 29 litigants found that the Modified Somatic Perception Questionnaire (MSPQ) was better than the McGill Pain Questionnaire (MPQ) or the Pain Disability Index (PDI) for detecting exaggerated pain symptoms but cautioned that "significant elevations on the MPQ, PDI, and MSPQ are supportive, but not independently diagnostic of the symptom exaggeration characteristic of malingering."
Medical Symptom Validity Test (MSVT)
"Analysis of the dementia profile on the Medical Symptom Validity Test" by Bradley Axelrod & Christian Schutte. Clinical Neuropsychologist, July, 2010, vol 24, #5, pages 873-881.
Summary: "The Medical Symptom Validity Test (MSVT) was administered as part of a neuropsychological battery to a mixed clinical sample.... Of the 47% of the sample who failed in the easy subtests, 48% were considered to have the 'dementia profile.' The remaining 52% of individuals failing the easy subtests were considered by the task to have 'poor effort.' Comparing the neuropsychological test performance among these three groups (Pass, Dementia Profile, Poor Effort) found that on most tasks those individuals passing the easy subtests of the MSVT perform significantly better than the other two groups, which did not differ from each other. Individuals meeting criteria for the Dementia Profile performed worse on tasks of motor functioning and list learning in comparison to the Poor Effort group. The results suggest that the algorithm creating a Dementia Profile does not effectively differentiate groups of individuals who fail the easy subtests of the MSVT."
"The base rate of suboptimal effort in a pediatric mild TBI sample: Performance on the Medical Symptom Validity Test" by Michael Kirkwood & John Kirk. Clinical Neuropsychologist, vol 24, #5, pages 860-872.
Summary: " Performance on the Medical Symptom Validity Test (MSVT) was examined in 193 consecutively referred patients aged 8 through 17 years who had sustained a mild traumatic brain injury. A total of 33 participants failed to meet actuarial criteria for valid effort on the MSVT. After accounting for possible false positives and false negatives, the base rate of suboptimal effort in this clinical sample was 17%. Only one MSVT failure was thought to be influenced by litigation. The present results suggest that a sizable minority of children is capable of putting forth suboptimal effort during neuropsychological exam, even when external incentives are not readily apparent."
"Predicting test of memory malingering and medical symptom validity test failure within a Veterans Affairs Medical Center: use of the Response Bias Scale [of the MMPI-2] and the Henry-Heilbronner Index." Arch Whitney, K. A. (2013). Clin Neuropsychol 28(3): 222-235.
Summary: "The ability of the Response Bias Scale (RBS) and the Henry-Heilbronner Index (HHI), along with several other MMPI-2 validity scales, to predict performance on two separate stand-alone symptom validity tests, the Test of Memory Malingering (TOMM) and the Medical Symptom Validity Test (MSVT), was examined. Findings from this retrospective data analysis of outpatients seen within a Veterans Affairs medical center (N = 194) showed that group differences between those passing and failing the TOMM were largest for the RBS (d = 0.79), HHI (d = 0.75), and Infrequency (F; d = 0.72). The largest group differences for those passing versus failing the MSVT were greatest on the HHI (d = 0.83), RBS (d = 0.80), and F (d = 0.78). Regression analyses showed that the RBS accounted for the most variance in TOMM scores (20%), whereas the HHI accounted for the most variance in MSVT scores (26%). Nonetheless, due to unacceptably low positive and negative predictive values, caution is warranted in using either one of these indices in isolation to predict performance invalidity."
"'Sandbagging' baseline test performance on ImPACT, without detection, is more difficult than it appears." Schatz, P. and C. Glatts (2013). Archives of Clinical Neuropsychology 28(3): 236-244.
Summary: "The MSVT identified more participants in the naïve (80%) and coached (90%) groups than those automatically 'flagged' by ImPACT (60% and 75%, respectively). Inclusion of additional indicators within ImPACT increased identification to 95% of naïve and 100% of coached malingerers. These results suggest that intentional 'sandbagging' on baseline neurocognitive testing can be readily detected."
"WAIS-IV digit Span variables: Are they valuable for use in predicting TOMM and MSVT failure?" Whitney, K. A., et al. (2013). Applied Neuropsychology: Adult 20(2): 83-94.
Summary: "Findings from this retrospective analysis showed that, regardless of whether the TOMM or the MSVT was used as the negative response bias criterion, of all the DS variables examined, DS Sequencing Total showed the best classification accuracy. Yet, due to its relatively low positive and negative predictive power, DS Sequencing Total is not recommended for use in isolation to identify negative response bias."
Megargee's Criminal Offender Infrequency Scale (see MMPI-2)
Morel Emotional Numbing Test-Revised (MENT-R)
"Detecting malingered posttraumatic stress disorder using the Morel Emotional Numbing Test-Revised (MENT-R) and the Miller Forensic Assessment of Symptoms Test (M-FAST)" by J. M. Messer & W. J. Fremouw. Journal of Forensic Psychology Practice, 2007, vol 7, #3, pages 33-57.
Summary: "Total scores on the MENT-R distinguished among the four groups of participants. The three groups responding honestly averaged fewer than 3.5 errors, while malingerers missed over 5 times that number. Scores on the M-FAST were also higher for the group of participants malingering. Although the MENT-R and M-FAST correctly identified 63 and 78% of coached malingerers, respectively, the combined use of both measures resulted in the correct classification of over 90% of the participants instructed to malinger PTSD."
"Development of a validity scale for combat-related posttraumatic stress disorder: Evidence from simulated malingerers and actual disability claimants" by Kenneth R. Morel. Journal of Forensic Psychiatry & Psychology, 2008, vol. 19, #1, pages 52-63.
Summary: "Individuals being evaluated for posttraumatic stress disorder (PTSD) in disability compensation cases or forensic settings are at increased risk of response bias, making the legitimacy of face-valid self-report measures assessing PTSD in these settings questionable. The following two studies evaluate the Quick Test for PTSD (Q-PTSD) as a time-efficient method of detecting response bias in individuals being assessed for combat-related PTSD. In the first study, 78 participants were randomly assigned to either an experimental group (simulated malingerers) or a control group (genuine reporting) and were administered the Q-PTSD along with a standard measure of combat-related PTSD. The Q-PTSD demonstrated suitable internal consistency and construct validity. Post-hoc analyses revealed that the best cutoff score for the Q-PTSD resulted in values =.91 for sensitivity, specificity, positive predictive value, and negative predictive value in this sample. Utilizing the established cutoff, the second study evaluated the criterion-related validity of the Q-PTSD by assessing its correlation with the Morel Emotional Numbing Test for PTSD (MENT) in 67 military veterans applying for disability pensions and claiming combat-related PTSD."
Miller Forensic Assessment of Symptoms Test (M-FAST)
"Evaluation of malingering screens with competency to stand Trial patients: A known-groups comparison" by M. Vitacco, R. Rogers, J. Gabel, & J. Munizza. Law and Human Behavior, June 2007, vol. 31, #3, pages 249-260.
Summary: "The current study assessed the effectiveness of three common screening measures: the Miller Forensic Assessment of Symptoms Test (M-FAST; Miller, 2001), the Structured Inventory of Malingered Symptomatology (SIMS; Widows & Smith, 2004), and the Evaluation of Competency to Stand Trial-Revised Atypical Presentation Scale (ECST-R ATP; Rogers, Tillbrook, & Sewell, 2004). Using the Structured Interview of Reported Symptoms (SIRS) as the external criterion, 100 patients involved in competency to stand trial evaluations were categorized as either probable malingerers (n = 21) or nonmalingerers (n = 79). Each malingering scale produced robust effect sizes in this known-groups comparison. Results are discussed in relation to the comprehensive assessment of malingering within a forensic context."
"Impact of Coaching on Malingered Posttraumatic Stress Symptoms on the
M-FAST and the TSI" by Jennifer Guriel, Tami Yañez, William
Fremouw, Andrea Shreve-Neiger, Lisa Ware, Holly Filcheck, & Chastity
Farr. Journal
of Forensic Psychology Practice, 2004, vol. 4, #2, pages 37-56.
Summary: This study of the responses of 68 undergraduate psychology majors found: "Unlike previous research, those who were provided with symptoms and/or symptoms and strategies were found to be no more successful at malingering PTSD than were those who were not provided with this information. While only two-thirds of the simulators were detected as malingering using the M-FAST total score or TSI validity scales, nearly 90% were identified when these measures were utilized together."
"Examining the Use of the M-FAST With Criminal Defendants Incompetent
to Stand Trial" by Holly Miller, Holly.
International Journal of Offender Therapy & Comparative Criminology, June,
2004, vol. 48, #3, pages 268-280.
Summary: In this study of 50 criminal defendants found incompetent to stand trial because of a mental illness, "the M-FAST total score and items were compared with the Structured Interview of Reported Symptoms (SIRS) and the fake-bad indicators of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2). Results indicated good evidence of construct and criterion validity, demonstrated by t tests, receiver operating characteristics analysis, and high correlations between the M-FAST, SIRS, and the fake-bad indices on the MMPI-2. Tentative cut scores for the M-FAST total score and scales were examined and demonstrated high utility with the sample of criminal defendants incompetent to stand trial."
"Forensic Applications of the Miller Forensic Assessment of Symptoms
Test (MFAST): Screening for Feigned Disorders in Competency to Stand Trial
Evaluations" by Rebecca Jackson, Richard Rogers, & Kenneth Sewell.
Law & Human Behavior, April, 2005, vol. 29, #2, pages 199-210.
Summary: This study tested the MFAST using a simulation design on "jail
and competency-restoration samples. Most notably, recommended MFAST cut
score (≥6) was useful
for the identification of feigning cases in competency evaluations."
"Malingering and PTSD: detecting malingering and war related PTSD by Miller Forensic Assessment of Symptoms Test (M-FAST)." Ahmadi, K., et al. (2013). BMC Psychiatry 13: 154.
Summary: "M-FAST showed a significant difference between war-related PTSD and malingering participants. The >/=6 score cutoff was suggested by M-FAST to detect malingering of war-related PTSD."
"Screening for feigning in a civil forensic setting" by Y. R.
Alwes, J. A. Clark, D. T. R. Berry, & R. P. Granacher. Journal of Clinical
and Experimental Neuropsychology, February, 2008, vol 30, #2, pages 1-8.
Summary: "This study compared the effectiveness of the Structured Inventory of Malingered Symptoms (SIMS; Widows & Smith, 2005) and the Miller Forensic Assessment of Symptoms Test (M-FAST; Miller, 2001) at screening for feigned psychiatric and neurocognitive symptoms in 308 individuals undergoing neuropsychiatric evaluation for workers' compensation or personal injury claims. Evaluees were assigned to probable feigning or honest groups based on results from well-validated, independent procedures. Both tests showed statistically significant discrimination between probable feigning and honest groups. Additionally, both the M-FAST and SIMS had high sensitivity and negative predictive power when discriminating probable psychiatric feigning versus honest groups, suggesting effectiveness in screening for this condition. However, neither of the procedures was as effective when applied to probable neurocognitive feigners versus honest groups, suggesting caution in their use for this purpose."
"Screening for Malingered Psychopathology in a Correctional Setting: Utility
of the Miller-Forensic Assessment of Symptoms Test (M-FAST)" by Laura Guy
& Holly Miller.
Criminal Justice & Behavior, December, 2004, vol. 31, #6, pages
695-716.
Summary: In this research on 50 incarcerated males, "consistent with previous M-FAST validity research, utility results indicated accurate classification was best achieved with an M-FAST total cutoff score of 6 (positive predictive power = .78, negative predictive power = .89). Utility analyses across race produced almost identical results indicating preliminary generalizability of the M-FAST for African American, Hispanic, and Caucasian inmates."
Millon Clinical Multiaxial Inventory - Third Edition (MCMI-III)
"Ability of the Millon Clinical Multiaxial Inventory - Third Edition To Detect Malingering" by Mike Schoenberg, Darwin Dorr, & Don Morgan. Psychological Assessment, June, 2003, pages 198-204.
Summary: The authors reported, "Despite widespread use, there are no data documenting the ability of the MCMI-III to identify malingering by nonsymptomatic individuals. Using a simulation design, we evaluated the ability of the MCMI-III to differentiate undergraduate students instructed to malinger the presence of severe psychopathology from bona fide psychiatric inpatients. The operating characteristics of the recommended cutoff score (Scale X raw ≥ 178) were investigated, and optimal cutoff scores that best distinguished student malingerers from psychiatric inpatients were computed for each of the MCMI-III modifier indices. . . . The disappointingly low PPP observed for the MCMI-III modifier indices is cause for concern and suggests that the MCMI-III is minimally sensitive to malingering. The results indicated that the MCMI-III Scale X raw cutoff score recommended by Millon et al. (1994) did not provide a useful index of purposeful deception, and optimal cutoff scores for Scales X and Y increased the PPP rate to slightly better than chance levels only when the base rate of malingering was high (36.9%). We tentatively recommend a cutoff score of Scale X BR ≥ 89, but this index only suggests dissimulation may be present, and clinicians are encouraged to use alternative tests that are more sensitive (MMPI-2; Butcher et al., 1989). When these data are combined with Daubert and Metzler's (2000) findings, the MCMI-III appears to be less effective than the MMPI-2 at discriminating nonpatient malingerers from psychiatric inpatients (e.g., Graham, 2000; Graham et al., 1991; Greene, 2000; Rogers, Sewell, & Salekin, 1994)."
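The dependence of positive predictive power on the base rate that the authors describe can be made concrete with a short calculation. The sensitivity and specificity in the Python sketch below are hypothetical placeholders rather than MCMI-III operating characteristics; the sketch only shows why a cutoff can perform better than chance at a high base rate of malingering and much worse at a low one.

# Minimal sketch: positive predictive power (PPP) as a function of the base
# rate of malingering, holding sensitivity and specificity fixed.
# The accuracy values are hypothetical, not taken from the MCMI-III study.

def positive_predictive_power(base_rate, sensitivity, specificity):
    true_positives = base_rate * sensitivity
    false_positives = (1.0 - base_rate) * (1.0 - specificity)
    return true_positives / (true_positives + false_positives)

sens, spec = 0.50, 0.80  # hypothetical accuracy indices
for base_rate in (0.05, 0.15, 0.37):
    ppp = positive_predictive_power(base_rate, sens, spec)
    print(f"base rate {base_rate:.0%}: PPP = {ppp:.2f}")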
"Classification accuracy of the Millon Clinical Multiaxial Inventory–III modifier indices in the detection of malingering in traumatic brain injury" by Aguerrevere, L. E., K. W. Greve, et al. Journal of Clinical and Experimental Neuropsychology, 2011, 33(5), pages 497-504.
Summary: “The present study used criterion groups validation to determine the ability of the Millon Clinical Multiaxial Inventory–III (MCMI–III) modifier indices to detect malingering in traumatic brain injury (TBI). Patients with TBI who met criteria for malingered neurocognitive dysfunction (MND) were compared to those who showed no indications of malingering.... At scores associated with a 4% false-positive (FP) error rate, sensitivity was 47% for Disclosure, 51% for Desirability, and 55% for Debasement. Examination of joint classification analysis demonstrated 54% sensitivity at cutoffs associated with 0% FP error rate. Results suggested that scores from all MCMI–III modifier indices are useful for identifying intentional symptom exaggeration in TBI. Debasement was the most sensitive of the three indices.”
"Distinguishing between neuropsychological malingering and exaggerated psychiatric symptoms in a neuropsychological setting" by Ruocco, Anthony C.; Swirsky-Sacchetti, Thomas; Chute, Douglas L.; Mandel, Steven; Platek, Steven M.; &Zillmer, Eric A. Clinical Neuropsychologist, May, 2008, vol. 22, #3, pages 547-564.
Summary: "It is unclear whether symptom validity test (SVT) failure in neuropsychological and psychiatric domains overlaps. Records of 105 patients referred for neuropsychological evaluation, who completed the Test of Memory Malingering (TOMM), Reliable Digit Span (RDS), and Millon Clinical Multiaxial Inventory-III (MCMI-III), were examined. TOMM and RDS scores were uncorrelated with MCMI-III symptom validity indices and factor analysis revealed two distinct factors for neuropsychological and psychiatric SVTs. Only 3.5% of the sample failed SVTs in both domains, 22.6% solely failed the neuropsychological SVT, and 6.1% solely failed the psychiatric SVT. The results support a dissociation between neuropsychological malingering and exaggeration of psychiatric symptoms in a neuropsychological setting."
"What Tests Are Acceptable for Use in Forensic Evaluations? A Survey
of Experts" by Stephen Lally. Professional Psychology: Research & Practice,
October, 2003, vol. 34, #5, pages 491-498.
Surveyed diplomates in forensic psychology "regarding both the frequency with which they use and their opinions about the acceptability of a variety of psychological tests in 6 areas of forensic practice. The 6 areas were mental state at the offense, risk for violence, risk for sexual violence, competency to stand trial, competency to waive Miranda rights, and malingering." In regard to the forensic assessment of malingering, "the majority of the respondents rated as acceptable the Structured Interview of Reported Symptoms (SIRS), Test of Memory Malingering, Validity Indicator Profile, Rey Fifteen Item Visual Memory Test, MMPI-2, PAI, WAIS-III, and Halstead-Reitan. The SIRS and the MMPI-2 were recommended by the majority. The psychologists were divided between acceptable and unacceptable about using either version of the MCMI (II or III). They were also divided, although between acceptable and no opinion, for the WASI, KBIT, Luria-Nebraska, and Stanford-Binet-Revised. The diplomates viewed as unacceptable for evaluating malingering the Rorschach, 16PF, projective drawings, sentence completion, and TAT. The majority gave no opinion on the acceptability of the Malingering Probability Scale, M-Test, Victoria Symptom Validity Test, and Portland Digit Recognition Test."
Minnesota Multiphasic Personality Inventory - Adolescent (MMPI-A)
"Identifying faking bad on the Minnesota Multiphasic Personality Inventory-Adolescent with Mexican adolescents" by Emilia Lucio, Consuelo Duran, John Graham, & Yossef Ben-Porath. Assessment, March, 2002, pages 62-69.
Summary: Study examined the MMPI-A's ability to differentiate among "nonclinical adolescents instructed to fake bad and both clinical and nonclinical adolescents who received standard instructions. . . . The F, F1, and F2 Scales and the F-K index discriminated adequately between the 3 different groups. Results were similar to those previously reported for adults and adolescents in Mexico and the US. High positive and negative predictive powers and overall hit rates were obtained in this study."
Please follow this link to Part 2 of the Malingering Research Update
Please follow this link to go to Part 3 of the Malingering Research Update