
Introduction

Several meta-analyses have documented that self-reported test anxiety correlates negatively with test performance (Ackerman & Heggestad, 1997; Hembree, 1988; Seipp, 1991). In addition to work conducted in educational settings, negative correlations between test anxiety and test performance have also been shown in real and simulated employee selection contexts (Fletcher et al., 1997; McCarthy & Goffin, 2005; Schmit & Ryan, 1992). Professionals who use aptitude tests in research and practice have the ethical responsibility to ensure that all test-takers have an equal opportunity to demonstrate their abilities. Thus, the persistent finding of a negative relation between test anxiety and test performance understandably creates concern that test anxiety might result in biased or inaccurate predictions (Haladyna & Downing, 2004; Zeidner, 2007). Indeed, some have argued that “the IQs, aptitudes, and progress of test-anxious students are consistently misinterpreted” and consequently “the validity of the entire testing process is challenged” (Hembree, 1988, p. 75).

However, the fact that test anxiety is negatively associated with test scores does not by itself indicate bias (Reeve & Bonaccio, 2008; Sarason, 1972). Indeed, there are multiple perspectives concerning the nature of the relation between test anxiety and test performance (e.g., Reeve et al., 2009; Tobias, 1985; Wicherts & Zand Scholten, 2010; Wine, 1971; see Zeidner, 1998, chap. 3, for a review). In contrast to those who posit anxiety as a direct cause of poor test performance, Eysenck and Eysenck (1985) warned, “it must not be forgotten that we are dealing with correlational evidence, that is, those people who report worried and self-evaluative thoughts tend to be the same people who exhibit poor levels of performance. Even if it turns out that there is a causal relationship … it may well be that the direction of causation is, in fact, the opposite of that usually envisaged” (p. 294).

Because understanding whether test anxiety influences the validity of aptitude tests is a necessary requirement for their ethical use, several researchers (e.g., Haladyna & Downing, 2004; Reeve et al., 2009; Schmitt, 2002) have called for more work directly examining predictive bias. Although prior work has examined whether anxiety creates measurement bias (e.g., Reeve & Bonaccio, 2008), little research has examined whether and to what extent anxiety can lead to predictive bias. It is particularly important to study test anxiety in relation to predictive bias given the prevalent use of aptitude tests in high-stakes contexts, most notably to make education and employment decisions. Thus, our purpose is to investigate whether test anxiety can induce differential predictive validity when cognitive ability tests (CATs) are used to predict academic outcomes.

Test anxiety refers to the “phenomenological, physiological and behavioral responses” (Zeidner, 2007, p. 166) that accompany testing. It is a subjective emotional state experienced before or during a specific evaluation relating to the act of completing the evaluation itself, the threat of failing, and perceived negative consequences. Thus, test anxiety is conceptualized as the manifestation of a situation-specific trait (Zeidner, 1998). Modern views of test anxiety conceptualize it as having two major components: Worry and Emotionality (Cassady & Johnson, 2001; Spielberger & Vagg, 1995; see also Liebert & Morris, 1967). Worry is the cognitive component of test anxiety reflecting the debilitating thoughts and concerns the test-taker has before or during the test. The Emotionality component (sometimes called Tension) refers to the heightened physiological symptoms stemming from arousal of the autonomic nervous system and associated affective responses.

As mentioned above, the negative correlation between test anxiety and various evaluative outcomes has been found across several domains. For example, Hembree’s (1988) meta-analysis showed negative correlations between test anxiety and performance on IQ, aptitude, memory, and problem-solving tests. He also found negative correlations for several school-related outcomes, such as overall grades and performance on language and mathematics tests. Similar results are comprehensively reviewed by Zeidner (1998, 2007). Given these findings, it is not surprising that test anxiety has been posited as a potential biasing factor in test performance. Below we review the concept of differential predictive validity, and we integrate it with two perspectives on the relation between test anxiety and test performance: the deficit and the interference perspectives.

Educational institutions routinely use scores on cognitively loaded exams such as the SAT or the Graduate Record Examination to select students, given their predictive validity vis-à-vis educational outcomes (Kuncel et al., 2001; Sackett et al., 2009). Similarly, employers often use CATs as part of a selection system given their robust predictive validity vis-à-vis job performance (Schmidt & Hunter, 1998). Thus, a key concern is whether test scores demonstrate differential prediction as a function of some personal or social factor. For example, differential prediction by race and gender has been investigated with respect to CATs (e.g., Dunbar & Novick, 1988; Schmidt et al., 1981). However, differential prediction due to psychological factors such as test anxiety, though often discussed, has not been as frequently examined empirically.

Often discussed under the broader label of predictive bias, differential prediction occurs when a third variable (such as test anxiety) influences the predictor–criterion relationship (such as the relationship between CAT scores and job performance). Consequently, predictive bias is commonly assessed within a moderated multiple regression framework, where bias is said to exist if any coefficients within the regression equation relating the predictor and criterion differ across subgroups. The issue of differential validity, which is our focus, is tested by examining the interaction of the predictor (e.g., test scores) and a potential biasing variable (e.g., anxiety) (Sackett, Laczo, & Lippe, 2003). A significant interaction term indicates that the predictive relationship with the criterion (i.e., the slope) differs across subgroups defined by the biasing variable (e.g., high and low anxiety test-takers). If only intercept differences are present (indicated by a significant regression coefficient associated with the biasing variable but no significant interaction), the predictor test has equal validity across groups, but the use of a single regression line would over-predict the performance of the lower-scoring group. Although this can lead to selection bias if a common regression line is used, we do not focus on this issue given that few real-world settings would consider using test anxiety as a protected class of information by which to define groups.
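The moderated regression logic described above can be sketched with simulated data. This is only an illustration of the general technique, not the study's actual analysis; the variable names, sample size, and effect sizes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Standardized CAT (predictor) and test-anxiety scores, simulated
cat = rng.standard_normal(n)
anx = rng.standard_normal(n)

# Criterion constructed so the CAT slope weakens as anxiety rises:
# slope = 0.6 - 0.3 * anx (the pattern the interference perspective predicts)
crit = (0.6 - 0.3 * anx) * cat + 0.2 * anx + rng.normal(scale=0.5, size=n)

# Moderated multiple regression: criterion ~ CAT + anxiety + CAT x anxiety
X = np.column_stack([np.ones(n), cat, anx, cat * anx])
beta, *_ = np.linalg.lstsq(X, crit, rcond=None)
b0, b_cat, b_anx, b_int = beta

# A reliably non-zero interaction coefficient signals differential validity
# (slope differences); a non-zero anxiety coefficient with a near-zero
# interaction would indicate intercept differences only.
print(f"CAT slope: {b_cat:.2f}, anxiety: {b_anx:.2f}, interaction: {b_int:.2f}")
```

In practice the interaction term's statistical significance would be tested (e.g., with the t-statistic a regression package reports), rather than inspecting raw coefficients as done here.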

The deficit perspective specifies that anxiety results from the test-taker being aware of a skills deficit which will be (accurately) reflected by poor test performance (Covington & Omelich, 1987; Tobias, 1985). According to this perspective, “anxiety in the test situation has no causal status, but is simply an epiphenomenon reflecting students’ lack of preparation for the test and their metacognitive awareness of their low probability of succeeding on the exam” (Zeidner, 1998, p. 70). Test anxiety occurs as a byproduct of a true ability deficit, which the test accurately measures. Thus, the deficit perspective would not predict any form of differential validity due to anxiety.

Conversely, the interference perspective implies that test anxiety should result in differential validity. This perspective posits that anxiety interferes with a person’s test performance by competing for cognitive resources. Specifically, cognitive resources are spent on off-task processing such as worrying, managing immediate physiological reactions, or focusing on negative self-evaluations (Eysenck, 1973; Spielberger & Vagg, 1995). These off-task cognitions prevent test-takers from focusing solely on the actual test and require them to spend valuable resources managing divergent thoughts. Thus, interference models state that ability test scores should be less predictive of criterion performance (i.e., have lower criterion-related validity) for highly test-anxious test-takers compared to those low in test anxiety.

However, a framework proposed by Reeve et al. (2009) suggests that test anxiety will only decrease the criterion-related validity of predictor tests to the degree that anxiety is not found in the criterion as well. Given the evaluative nature of performance appraisals, it is likely that they evoke the same situation-specific trait (i.e., test anxiety) as do other testing situations. Thus, people may experience anxiety during the evaluation of criterion performance in much the same way they do during the predictor assessment. Hence, test anxiety may not result in differential validity if anxiety experienced during the criterion is included in the model. Rather, test anxiety would actually aid in the prediction of criterion performance because the criterion would now share an additional source of systematic variance with the predictor.

Given the nature of these two perspectives, two competing hypotheses can be generated. According to the interference perspective, we would expect to find evidence of differential validity due to individual differences in test anxiety (Competing Hypothesis 1a). However, the Reeve et al. (2009) framework suggests that test anxiety may simply be an additional predictor to the extent it is experienced at the time of criterion assessment and would not be expected to result in differential validity (Competing Hypothesis 1b). The latter conclusion of no differential validity would also be consistent with the deficit perspective of test anxiety.

Section snippets

Sample

Undergraduate students enrolled at a mid-size south-eastern US university participated in this study in exchange for course credit. As explained below, data were collected at two time points, but not all students participated in both sessions. Only data from the 124 participants who completed both sessions were analyzed. A majority of the operational sample was female (75.8%); 50% self-reported being White, 30.6% Black, 11.3% Asian, 5.6% Hispanic, and 2.4% self-classified as other. The

Results

Descriptive statistics, intercorrelations, and Cronbach’s alphas are presented in Table 1. As expected, performance on the CAT is positively correlated with final exam performance, and both dimensions of test anxiety are negatively related to performance on both the CAT and the course final exam.

Examination of differential validity (i.e., slope differences) due to anxiety experienced during predictor testing (i.e., during the CAT) was effected via moderated multiple regression. All moderated

Discussion

We sought to determine whether test anxiety can be considered a source of differential predictive validity in the relationship between CAT scores and an external performance criterion. We tested two competing hypotheses stemming from different test anxiety perspectives. The deficit perspective (Tobias, 1985; see also Reeve et al., 2009) predicts that test anxiety would not lead to differential validity, whereas the interference perspective (Eysenck, 1973) suggests it would. To test these

Acknowledgment

This research was funded by a Grant from the Social Sciences and Humanities Research Council of Canada. A previous version of this paper was presented at the 2011 annual conference of the Society for Industrial and Organizational Psychology.

Cited by (17)

  • Is health literacy an example of construct proliferation? A conceptual and empirical evaluation of its redundancy with general cognitive ability

    2014, Intelligence

    Compared to the published norms for a general population of college students, the current sample scored, on average, at the 30th percentile on both the verbal comprehension and reasoning tests, and at the 15th percentile on both numerical tests. Though low, these scores are consistent with the SAT scores of the local population of psychology majors typically attending the institution at which data were collected, and are consistent with performance by other samples drawn from this population of students on other ability tests (e.g., Bonaccio, Reeve, & Winford, 2012). Although the sample is somewhat homogeneous with respect to some cultural and background variables and has lower than average ability, the observed standard deviations indicate a normal amount of variance.

  • 2014, Contemporary Educational Psychology

    Ability appears to primarily influence performance directly though there is a small indirect effect via its influence on pre-exam emotions. However, consistent with other recent findings (e.g., Bonaccio et al., 2012), ability also has a joint effect on performance with distraction. Ability had a buffering effect whereby the negative influence of distraction is weakest for those with high ability and is strongest for those with low ability.

  • Interview anxiety across the sexes: Support for the sex-linked anxiety coping theory

    2013, Personality and Individual Differences

    Those who are anxious receive lower performance scores on selection instruments (Fletcher, Lovatt, & Baldry, 1997; Seipp, 1991) and are consequently less likely to be hired for the job. Importantly, empirical studies have demonstrated that the predictive validities of cognitive ability tests (e.g., Bonaccio et al., 2011; Schmit & Ryan, 1992) were lower for individuals with high levels of test-taking anxiety. However, far less research has examined whether the same effect occurs in an interview situation.


Copyright © 2011 Elsevier Ltd. All rights reserved.
