
    Assessment Center Practices in South Africa

Diana E. Krause*, Robert J. Rossberger*, Kim Dowdeswell**, Nadene Venter** and Tina Joubert**

* Alpen-Adria University Klagenfurt, Human Resource Management and Organizational Behavior, University Street 65-67, 9020 Klagenfurt, Austria. DianaEva.Krause@uni-klu.ac.at
** SHL South Africa, New Muckleneuk, South Africa

Despite the popularity of assessment centers (ACs) in South Africa, no recent study exists that describes AC practices in that region. Given this research gap, we conducted a survey study that analyzes the development, execution, and evaluation of ACs in N = 43 South African organizations. We report findings regarding AC design, job analysis and job requirements assessed, target groups and positions of the participants after the AC, number and kind of exercises used, additional diagnostic methods used, assessors and characteristics considered in constitution of the assessor pool, observational systems and rotation plan, characteristics, contents, and methods of assessor training, types of information provided to participants, data integration process, use of self- and peer-ratings, characteristics of the feedback process, and features after the AC. Finally, we compare the results with professional suggestions to identify pros and cons in current South African AC practices and offer suggestions for improvement.

    1. Introduction

"We know it well that none of us acting alone can achieve success" (Mandela, 1994). This statement is not only true for political, economic, and social circumstances but also with respect to assessment centers (ACs). In an AC, candidates' abilities to perform successfully in a team and to communicate adequately with others are assessed. Among other competences, these skills are crucial for candidates' future job performance and, consequently, for the success of their organizations. A recent meta-analysis (Hermelin, Lievens, & Robertson, 2007) that considered 27 validity coefficients from 26 previous studies has shown that the predictive validity of an AC is r = .28 (using a relatively conservative method of estimation). AC results predict candidates' future job performance, training performance, salary, promotion, etc., in several occupations, sectors, and countries.
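To give a sense of what a validity of this magnitude implies in practice, the standard Brogden-Cronbach-Gleser utility formula from selection research translates r into an expected performance gain per selected candidate per year. The worked figures below are hypothetical illustrations of ours and do not appear in the studies cited:

\[
\Delta U = r \cdot SD_y \cdot \bar{z}_x - \frac{C}{SR},
\qquad
\Delta U = 0.28 \times 100{,}000 \times 0.80 - \frac{5{,}000}{0.50} = 12{,}400
\]

where \(SD_y\) is the standard deviation of job performance in monetary units, \(\bar{z}_x\) the mean standardized predictor score of those selected, \(C\) the assessment cost per applicant, and \(SR\) the selection ratio.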

AC programs continue to spread to more countries around the world (Thornton & Rupp, 2005). In recent years, ACs have been increasingly applied in international settings (Lievens & Thornton, 2005, pp. 244-245). One of the challenges faced by organizations operating in an international context is to understand cross-cultural variability in AC practices. It is very plausible that certain AC features that are acceptable and feasible in some countries (e.g., United States, United Kingdom, Switzerland) may not be acceptable and feasible in others (e.g., Indonesia, Philippines, South Africa). For this reason, it is important to increase our knowledge of AC practices in different countries, such as South Africa.

The AC program was introduced into a South African insurance company (the Old Mutual group) by Bill Byham in 1973. During the next few years, the Old Mutual group implemented developmental centers in its offshore companies in Zimbabwe, England, Thailand, Malaysia, and Hong Kong. One year later, the Edgars group was a pioneer in developing and running ACs in South Africa. In 1975, another South African organization, Transport Services, learned how companies in the United States such as AT&T and IBM identify their potential. During the following years, Transport Services assessed 670 managers and expanded the AC as a tool for selection and developmental purposes (Meiring, 2008). Other South African organizations (e.g., Stellenbosch Farmers' Winery, the Department of Post and Telecommunication Services, Naspers, the South African Army, and the South African Police) soon followed in the development, execution, and validation of the AC.


While AC practices in South Africa have changed dramatically during the last three decades, to date no empirical study exists that describes AC practices in South Africa or the way in which those practices have changed over time. Yet, there are studies that describe AC practices in other countries, such as the United States (Spychalski, Quinones, Gaugler, & Pohley, 1997) and German-speaking regions (Hoeft & Obermann, 2009; Krause & Gebert, 2003). Two notable exceptions to the trend of analyzing AC practices at a national level are the worldwide study on AC practices conducted by Kudisch, Avis, Thibodeaux, and Fallon (2001) and the AC study by Krause and Thornton (2009). However, Kudisch et al. (2001) collapsed data across countries instead of reporting findings for specific regions, so that systematic differences between the countries are concealed. The most recent study on AC practices (Krause & Thornton, 2009) compares AC features in North American and Western European organizations. Previous studies conducted in the United States, Canada, and German-speaking regions have shown the dynamic and evolving nature of AC practices (Eurich, Krause, Cigularov, & Thornton, 2009; Krause & Thornton, 2009). Compared with ACs designed 20 years ago, current ACs are conducted within a shorter period of time, fewer exercises are used, job analyses are conducted with a great deal of methodological effort, a shift toward developmental assessment programs is typical, more appropriate dimensions are used, systematic revisions of the AC are made frequently, and the AC is frequently matched to the division's own needs.

The current study is the first and most comprehensive description of AC practices in South Africa to date. The country-specific approach is highly important because findings about AC applications from other countries cannot be generalized to South Africa, as the economic, social, political, and educational circumstances vary from one country to the next (Herriot & Anderson, 1997; Krause, 2010; Newell & Tansley, 2001; Ryan, Wiechmann, & Hemingway, 2003); consequently, AC practices are highly heterogeneous not only within one country but also between countries (for differences in personnel selection practices in general, see Ryan, McFarland, Baron, & Page, 1999).

With respect to differences in personnel selection between countries, a model that explains cross-cultural differences in general was proposed by Klehe (2004). The model distinguishes between causes, constituents, control forms, content, and contextual factors of personnel selection decisions. The causes and mechanisms lead to five strategic types of personnel selection decisions: acquiescence, compromise, avoidance, defiance, and manipulation. With respect to the causes of personnel selection, the model differentiates an economic fit and a social fit. Regarding economic fitness, a long-term and a short-term perspective are distinguished: two perspectives that are partially incompatible, are judged differently by scientists and practitioners, and require different control mechanisms. Depending on the perspective someone takes, that is, whether the primary goal is to maximize short-term profit or to invest budget, personnel, and time in a valid and reliable personnel selection system, the resulting personnel selection strategy will vary. Besides economic conditions, Klehe (2004) underlines social fitness, which includes perceived legality and candidates' perceived acceptance of the personnel selection method. Subject to the dominant form of control in this social-legal structure, the resulting kind of personnel selection procedure will also vary. In addition, contextual factors as well as uncertainty and interdependencies need to be considered because these factors have an impact on the diffusion of a personnel selection system. Overall, this model can also be used as a theoretical basis to explain differences in AC practices between countries.

The present study aims to advance the AC literature by addressing the above-mentioned limitations of previous research. First, we portray a broad spectrum of AC practices with respect to all stages of the AC process: the analysis, the design, the execution, and the evaluation (Schlebusch & Roodt, 2008, p. 16). Second, we compare South African AC practices with the practices in other countries based on the aforementioned previous studies. Third, we identify pros and cons in South African AC practices. For this purpose we used three kinds of information: South African guidelines for AC procedures (Schlebusch & Roodt, 2008, appendix A), suggestions for cross-cultural AC applications (Task Force on Assessment Center Guidelines, 2009), and scholarly papers that indicate aspects relevant to increasing the predictive and construct validity evidence of an AC.

    2. Method

Data were collected via an online survey completed by Human Resource (HR) managers of N = 43 South African organizations. The data collection took place from August to September 2009. The questionnaire was developed on the basis of previous surveys (Krause & Gebert, 2003; Krause & Thornton, 2009; Kudisch et al., 2001; Spychalski et al., 1997). A draft of the questionnaire was then evaluated by AC scholars and practitioners from South Africa, Europe, and the United States. The final version of the questionnaire contained N = 62 AC features, presented in multiple-choice and open-ended format. Organizations were sampled by economic sector, predominantly consulting as well as the banking, mining, and public sectors. While SHL's South African clientele list was used as a starting point to compile the master list, several colleagues knowledgeable in AC usage and consulting to a variety of organizations in different industries nominated individuals to be included in the sample, to ensure coverage as far as possible of applicable participants in the South African context. SHL South Africa contacted the organizations via email. Letters of invitation and follow-up reminders were sent by SHL. The survey was presented anonymously to encourage completion; to verify the identity of respondents, however, they were offered the opportunity at the end of the survey to enter their email address in order to receive a summary of the results. In total, 60.5% (26 of 43) of the respondents did so, and all of the email addresses so provided were included on the original master list. The response rate was 38.6%, which is relatively high given the length of the questionnaire.

Respondents were asked to describe the development, execution, and evaluation of their AC. The respondents worked in their companies as HR managers (e.g., 15% head of HR department, 3% chief department head, 18% division manager) or as personnel specialists (64%). Their functions in the AC included developers (16%), moderators (26%), and assessors (63%) (multiple responses were possible). The respondents indicated that the AC they described takes place in the whole company (59%) or in their specific division (41%).

The sample was heterogeneous in terms of economic sector (banking and insurance: 16%, consulting: 16%, manufacturing: 16%, automobiles: 8%, government: 8%, telecommunication: 8%, services: 6%, trade: 3%, heavy industry: 3%, others: 16%). We tested whether sectors diverge in terms of AC use, but no significant sectoral differences in the development, operation, and evaluation of ACs were found. In this sense there is no reason to assume that the specific composition of the sample has distorted the results of our study.

The distribution of the organizations regarding their size (measured by the number of employees in the whole corporation) is: up to 500 employees: 35%; 501-2,000 employees: 19%; 2,001-5,000 employees: 19%; 5,001-10,000 employees: 8%; 10,001-20,000 employees: 0%; more than 20,000 employees: 19%. The assumption that the administration of AC features covaries with organizational size was tested. We found that large organizations did indeed differ significantly from small ones in the way they conduct the individual measures. (Additional information about the measures that covary significantly with the size of an organization is available upon request.) Large organizations generally have a larger budget for personnel purposes, enabling them to invest more in quality ACs than smaller organizations can. We also note that some of the large organizations operate multinationally. In principle, this makes it possible that an AC was developed in one country (e.g., United States) and then transferred to South Africa. However, 67% of the respondents indicated that the country of origin and the country of operation were identical. In only one third of the cases was the AC developed elsewhere and executed in South Africa.

    3. Results

Results for the present study are presented in the following categories: (a) AC design, (b) job analysis methods and job requirements assessed, (c) target groups and positions of the participants after the AC, (d) number and kind of exercises used, (e) additional diagnostic methods used, (f) assessors and characteristics considered in constitution of the assessor pool, (g) observational systems and rotation plan, (h) characteristics, contents, and methods of assessor training, (i) types of information provided to participants, (j) data integration process and use of self- and peer-ratings, (k) characteristics of the feedback process, and (l) features after the AC. The percentages for each AC practice are summarized in Table 1 and will not be repeated in the paragraphs.

3.1. AC design

Professional experts state that the AC should be designed to achieve a stated objective. As shown (Table 1), two thirds of the organizations in South Africa use the AC for both goals: personnel selection as well as personnel development. Only a few organizations state that the main objective of their AC is personnel development. This finding contradicts previous results on AC practices in other countries (Krause & Gebert, 2003; Krause & Thornton, 2009), in which an increasing trend toward developmental centers has been observed during the last few years. In a developmental center, candidates' learning and development over time plays a dominant role. Among those South African organizations that use the AC for personnel development, more than half indicate that the main subgoal is to diagnose personnel development and training needs, followed by HR planning/succession planning and promotion to the next level or identification of potential.

With respect to variants of assessee selection, supervisor nominations are common, but self-nominations and personnel ratings are not. This finding is also not in line with practices for participant selection in organizations in other countries (i.e., Western Europe and North America), in which self-nomination plays a more dominant role than in organizations in South Africa (Krause & Thornton, 2009). In more than half of the organizations in South Africa, however, it is typical that external experts design the AC for the particular organization or that the AC is developed in teamwork.

Regarding the duration of the AC, we found that in 82% of the organizations the ACs last up to 1 day. Compared with previous studies by Spychalski et al. (1997) (2-3 days), Krause and Gebert (2003) (up to 3 days), and Krause and Thornton (2009) (1-2 days), our finding reflects that ACs in South Africa are leaner than those in other countries. Given the need for lean


Table 1. (Contd.)

AC feature: practices in South Africa in % (N = 43)

If one-on-one talks are simulated, who plays the role of the other person?
- Another participant: -
- An observer: 17
- A role player: 75
- A professionally trained actor: 4
- Other: 4

Other diagnostic methods used
- None: 2
- Biographical questionnaire: 14
- Intelligence tests (GMA): 7
- Personality tests: 54
- Skills/ability tests: 49
- Knowledge tests: 5
- Work sample tests: 7
- Graphology: -

Ratio of participants to observers
- 1:1: 32
- 1:2: 29
- 1:3: 29
- 4 or more: 10

Groups represented in the observer pool
- Line managers: 16
- Internal Human Resource experts: 23
- External Human Resource experts: 9
- Labor union: 2
- A participant's direct supervisor: 4
- Company officer for women's affairs: -
- Internal psychologists: 28
- External psychologists: 42

Criteria considered in selecting the assessor pool
- Race: 9
- Ethnicity: 7
- Age: 2
- Gender: 9
- Organizational level: 9
- Functional work area: 28
- Educational level: 28
- Other: 33

Observational systems I
- None: -
- Qualitative aids, for example, handwritten notes of the participants' behavior: 51
- Quantitative aids, such as certain forms/systems of observation: 63

Observational systems II: quantitative observational systems used
- Behavioral observation scales (BARS): 49
- Behavioral checklists: 47
- Realistic behavioral descriptions: 26
- Computer-aided profiles: 19
- Graphic rating scales: 9

Rotation plan used: 46

Duration of observer training
- Less than half a day: 11
- 1 day: 25
- 2 days: 21
- 3 days: 7
- 4 days: 7
- More than 4 days: 4
- Observer training is not conducted: 25

Methods of observer training
- Lectures: 28
- Discussion: 47
- Video demonstration/camera: 16
- Observing other assessors: 33
- Observation of practice candidates: 28
- Other: 2

Contents of observer training
- Knowledge of the exercises: 47
- Knowledge of the target job: 23
- Knowledge of the job requirements (definitions, demarcations): 30
- Knowledge of and sensitization to errors of judgment: 40
- Professional behavior with the participants during the AC: 47
- Method of behavioral observation including use of behavioral systems: 47
- Ability to observe, record, and classify the participants' behavior in job requirements: 49
- Consistency in role playing: 37
- Ability to give accurate oral or written feedback: 37
- Limits of the AC method: 40
- Forms of reciprocal influences in the data integration process: 19
- Types of forming judgments (statistical, non-statistical): 26

Evaluation of the observational and rating skills of each observer after the observer training: 75

Types of information provided to participants before the AC
- How individuals are selected for participation: 26
- Tips for preparing: 21
- Objective of the AC: 65
- Kinds of exercises: 37
- The storage and use of the data: 21
- Staff and roles of observers: 21
- The results of the AC: 30
- How feedback will be given: 60

Job requirements/dimensions assessed in the individual exercises are explicitly communicated to the participants before the exercise starts: 46

Data integration process
- Assessor consensus (OAR): 32
- Statistical aggregation: 7
- Combination of OAR and statistical aggregation: 61
- Voting: -

Observers complete report before integration process begins: 75
Poor results in some exercises can be compensated by good results in other exercises: 86
Poor results regarding certain characteristics can be compensated by good results regarding other characteristics: 54
Use of peer-ratings: 18
Use of self-ratings: 29

Kind of feedback
- Oral: 18


technique (Flanagan, 1954). This method facilitates determining critical behaviors related to the target position. The resulting information makes it possible to distinguish between successful and unsuccessful job candidates. For managers as well as employees, the critical incident technique would be well suited to clustering the job requirements related to a specific position.

Regarding the kind of job requirements being assessed, we used the results of two meta-analyses (Arthur, Day, McNelly, & Edens, 2003; Bowler & Woehr, 2006). These two studies found six construct- and criterion-valid dimensions: communication, consideration/awareness of others, drive, influencing others, organization and planning, and problem solving. In the present study we found that two thirds of the South African organizations assess communication, organizing and planning, problem solving, and influencing others (see Table 1). These four dimensions were among the most popular in Kudisch et al.'s (2001) sample as well as in Krause and Thornton's (2009) sample. These four job requirements accounted for 20% of the variance in performance in the meta-analysis by Arthur, Day, McNelly, and Edens (2003). In the recent meta-analysis by Dilchert and Ones (2009), the relevance of these dimensions for job performance was supported: the best dimensional predictor of job performance was problem solving, followed by influencing others, organizing and planning, and communication skills. Given that, it should not be too surprising that these four dimensions were also predictive of a work-related criterion (salary) in a recent study by Lievens, Dilchert, and Ones (2009).

Therefore, we can conclude that the most popular dimensions being assessed in South Africa are also the ones with the most predictive validity evidence.

With respect to the number of job requirements assessed, 80% of the organizations assess more than five dimensions per AC and more than two thirds up to nine dimensions per exercise. While this trend is reflected in the Guidelines for Assessment and Development Centres in South Africa, which allow for typically no more than 10 dimensions per AC and five to seven per exercise (Assessment Centre Study Group, 2007), organizations in South Africa assess more job requirements per AC and per exercise than organizations in other countries (Krause & Gebert, 2003; Krause & Thornton, 2009; Spychalski et al., 1997). The assessment of more than five dimensions per AC increases the likelihood that the dimensions are not distinguishable and, consequently, that the assessors cannot differentiate among the behavioral categories. Several studies have shown that an AC's construct validity decreases as the number of assessed dimensions increases (Bowler & Woehr, 2006). In conclusion, results from the present study illustrate that current ACs in South Africa could be improved by using fewer job requirements.

3.3. Target groups and positions of the participants after the AC

It is most common (see Table 1) to conduct the AC for internal and external candidates. Forty-three percent of the organizations assess two to four candidates per AC, 30% assess five to seven candidates per AC, and 23% assess eight to 10 candidates per AC. As shown, the AC is conducted for candidates at all organizational levels (see Table 1). Almost all organizations assess up to 100 candidates within a period of 6 months up to 1 year. It is most typical to assess internal and external first- and second-line managers. The AC program is used less frequently for internal and external trainees or entry-level staff. After the AC, the candidates become first-, second-, or third-line managers.

    3.4. Number and kind of exercises used

South African organizations use a wide variety of exercises (see Table 1). However, the absolute number of exercises used in South Africa is lower than in North America and Western Europe (Krause & Thornton, 2009). In line with the trend toward leaner AC programs, nearly half of the organizations use fewer than three exercises per AC. The other half uses four to five exercises or more. Overall, the number of exercises used is in need of improvement: an AC's predictive validity evidence increases as the number of exercises increases (Gaugler, Rosenthal, Thornton, & Benson, 1987).

A positive sign in current South African AC practices is that in nearly all organizations the linkages between the assessed job requirements and the exercises are documented in a competency-by-exercise matrix (see Table 1). However, counter to suggestions (Task Force on Assessment Center Guidelines, 2009), only half of the organizations in South Africa pretest the exercises before implementation. Although this is understandable given the cost involved, organizations in South Africa should invest more time, money, and personnel in pilot tests of exercises to maximize the validity of the AC program.

The most frequently used exercises in South Africa (see Table 1) are in-basket exercises, presentations, and role playing, followed by group discussions. These findings are in line with the most frequently used exercises in the United States and Canada (Krause & Thornton, 2009), where in-baskets, presentations, and role playing are also very popular. These results are similar to the findings of Krause and Gebert (2003), who found that presentations and group discussions were the most frequently used exercises in German-speaking regions. The frequent use of these and not other exercises can be explained in terms of the AC's social acceptance. For example, organizations in many countries prefer exercises that demonstrate the candidate's ability to deal with complex tasks. These kinds of exercises are presumably perceived to be more activity specific than other kinds of exercises. The use of presentations and role playing is consistent with Thornton and Rupp's (2005) argument that situational exercises are still the mainstay of ACs (for details regarding task-based ACs see Jackson, Stillman, & Englert, 2010). It might be that the popularity of presentations and role playing has to do with the increasing people-focused demands of the workplace.

In terms of role playing, we were interested in the question of who plays the other person. As shown (see Table 1), in nearly all cases a role player or an observer plays the role of the other person if one-on-one talks are simulated. Although it would increase the costs involved in the AC process, we suggest that a professionally trained actor should play the role of the other person in one-on-one simulations because it would increase the objectivity of the exercise. Conversely, an AC's construct validity decreases if an assessor is involved in one-on-one talks (Thornton & Mueller-Hanson, 2004).

    3.5. Additional diagnostic methods used

In addition to the behavioral exercises, only a minority of organizations uses at least one other assessment method. Only half of the organizations include a personality test or a skills and ability test within the context of the AC. It is not very common to include other diagnostic methods in the AC, such as biographical questionnaires, work sample tests, intelligence (general mental ability [GMA]) tests, or knowledge tests. These results parallel those of previous studies in North America (Krause & Thornton, 2009) as well as in Western Europe (Krause & Gebert, 2003; Krause & Thornton, 2009). The rare use of testing procedures such as biographical questionnaires, work sample tests, and intelligence tests can be explained by the fact that they are not always well accepted by HR experts. The reluctance to use knowledge and intelligence tests as part of the AC program is particularly strong in South Africa because of racial subgroup differences: findings of large mean differences across racial and ethnic groups make validation both more imperative and more difficult. Furthermore, the use of tests within the context of the AC itself is usually limited because of an interest in focusing on overt behavior. As a whole, intelligence tests and knowledge tests are used by a minority of South African organizations as part of the AC. Nonetheless, there is empirical evidence supporting higher predictive validity when ACs are combined with cognitive ability tests (Dayan, Fox, & Kasten, 2008; Dayan, Kasten, & Fox, 2002; Dilchert & Ones, 2009; Krause, Kersting, Heggestad, & Thornton, 2006; Lievens, Harris, Van Keer, & Bisqueret, 2003; Meriac, Hoffman, Woehr, & Fleisher, 2008). Furthermore, work sample tests, which have a high predictive validity (r = .54, see Schmidt & Hunter, 1998) and are evaluated favorably by candidates (Anderson, Salgado, & Huelsheger, 2010), are also rarely used by most South African organizations. Given this state of the art, we encourage South African organizations to reconsider the integration of at least one additional diagnostic method within the context of the AC program. This practice could be beneficial for the predictive validity evidence of the overall AC program.

3.6. Assessors and characteristics considered in constitution of the assessor pool

With regard to the ratio of participants to assessors, we found that the most typical ratio is 1:2, which is in line with professional recommendations and the practices in other countries (Hoeft & Obermann, 2009; Krause & Gebert, 2003; Krause & Thornton, 2009; Spychalski et al., 1997). Two other aspects of AC programs in South Africa might be interesting, namely which groups are represented in the observer pool and which criteria are considered in the constitution of the observer pool. Consistent with ACs in other countries (Krause & Gebert, 2003; Spychalski et al., 1997), the assessor pool in South Africa consists of various functional groups, creating a broad basis for judging the assessees, which is a positive sign in current AC practices in South Africa. Assessors are, to a large extent, HR professionals. In comparison to other countries, line managers serve significantly less often as assessors than in North America or Western Europe (Krause & Thornton, 2009). The lower integration of line managers as assessors can be interpreted in the context of specific labor legislation: due to the South African Employment Equity Act's (no. 55 of 1998) prohibition of unfair discrimination in employment practices (including assessments), organizations are typically fairly conscious of the need to be able to defend the legality of their actions. To facilitate this process, guidance can be drawn from best-practice publications such as the Assessment Centre Study Group's Guidelines for Assessment and Development Centres in South Africa (2007), which recommend as a minimum qualification for an assessor an honors or master's degree in behavioral science (i.e., Industrial and Organisational Psychology, or HR Management). In this sense, personnel decisions in South Africa are less strongly legitimized by hierarchy than in North America or Western Europe. However, research has documented that the integration of line managers into the assessor pool increases an AC's construct validity (Lievens, 2002). It is also shown that one third of the South African organizations use internal psychologists and nearly half of the organizations use external psychologists as assessors. There is evidence that when psychologists serve as assessors, the predictive validity and the construct validity of an AC rise (Gaugler et al., 1987; Lievens, 2002; Sagie & Magnezy, 1997). In this respect, South African organizations might consider the


between the various job requirements. The finding that these content areas are trained less frequently has to be seen as counterproductive because the quality of the AC, as measured by its predictive and construct validity, is thereby reduced. These findings suggest that organizations need to improve the contents of assessor training. Finally, we found that following the completion of assessor training, South African organizations often evaluate each assessor on his or her observational and rating skills.

    3.9. Types of information provided to participants

Virtually all organizations provide some sort of information to participants before the AC (see Table 1). Typically, participants in South Africa receive information about the objective of the AC and about the feedback process. Other kinds of information, such as how the results will be used, the type of exercises, the storage of data, the staff and observers, how individuals are selected, and how candidates can prepare themselves for the center, are rarely provided. Another question is whether the job requirements are explicitly communicated before the exercise starts. Kleinmann (1997) called this the principle of transparency; following it increases the validity of the center. As shown, half of the organizations communicate the kind of job requirements to the participants before the exercise starts, while the other half ignores the principle of transparency.

Compared with other countries (see Krause & Thornton, 2009), South African candidates receive relatively little information before they participate in the AC. Given the emphasis on informed participation in both the international Guidelines and Ethical Considerations for Assessment Center Operations (Task Force on Assessment Center Guidelines, 2009) and South Africa's Guidelines for Assessment and Development Centres in South Africa (Assessment Centre Study Group, 2007), the information policy toward the participants is in need of improvement. Thornton and Rupp (2005) found that when sufficient and frequent information was provided to participants, the ACs were generally more accepted, compared to instances where insufficient and less frequent information was provided. To improve the acceptance of ACs and their results, organizations may need to provide participants with more information. That might also have positive effects on the commitment of the internal candidates after the AC and on personnel marketing on the labor market. Frequent information about the several topics involved in the AC process should influence the candidates' reactions toward the AC positively (for detailed information regarding candidates' reactions toward 10 personnel selection methods in 17 countries, see Anderson, Salgado, & Huelsheger, 2010).

3.10. Data integration process and the use of self- and peer-ratings

With respect to data integration, approximately two thirds of the organizations use a combination of assessor consensus discussions and statistical aggregation. In addition, approximately one third uses assessor consensus discussion alone, while the least frequently used method is purely statistical aggregation. These findings are in contrast to earlier studies (Krause & Gebert, 2003; Kudisch et al., 2001; Spychalski et al., 1997), in which a much higher proportion of organizations used a consensus discussion. The trend to combine assessor consensus information with statistical aggregation may be a result of at least two factors. First, statistical integration may ensure overall ratings that are just as accurate as consensus ratings (Thornton & Rupp, 2005). Second, public organizations may need to increase the appearance of objectivity, which is associated with statistical integration, in contrast to the apparently subjective consensus discussion.
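As a minimal sketch of what purely statistical aggregation can look like (our illustration, not the procedure of any surveyed organization; the rating scale, dimension names, and weights are hypothetical), dimension ratings from several assessors can be averaged and combined into an overall assessment rating (OAR):

```python
# Minimal sketch of statistical aggregation: assessor ratings per
# dimension are averaged and combined into an overall assessment
# rating (OAR). Dimension names and weights are hypothetical.

from statistics import mean

ratings = {  # dimension -> ratings from several assessors (1-5 scale)
    "communication":           [4, 3, 4],
    "problem solving":         [2, 3, 2],
    "organizing and planning": [5, 4, 4],
    "influencing others":      [3, 3, 4],
}

weights = {d: 1.0 for d in ratings}  # equal weights for illustration

def oar(ratings, weights):
    """Weighted mean of dimension means; compensatory by construction."""
    total_w = sum(weights.values())
    return sum(mean(r) * weights[d] for d, r in ratings.items()) / total_w

print(f"OAR = {oar(ratings, weights):.2f}")
# Because the OAR is a (weighted) average, a poor score on one dimension
# can be offset by strong scores elsewhere, mirroring the compensatory
# practice most surveyed organizations report.
```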

Furthermore, in many South African organizations the observers complete a report before the data integration process starts. It is also worth mentioning that in most organizations candidates can compensate for poor performance in some exercises with good performance in other exercises, or for poor performance on some dimensions with good performance on other dimensions. In terms of the integration of self-ratings (i.e., the candidate's judgment of his or her own performance) and the use of peer-ratings (i.e., the candidate's evaluation of the performance of his or her colleagues), we have to note that these are rarely used in South African organizations. This finding is consistent with the frequency with which self- and peer-ratings are used in other countries. The use of self- and peer-ratings has decreased during the last 15 years, although these ratings can provide new insights about the participants. Self- and peer-ratings can be used as diagnostic information in addition to the ratings made by the assessors.

    3.11. Characteristics of the feedback process

With regard to the feedback process, the most common way of delivering feedback is a combination of oral and written methods (see Table 1). Because AC feedback is likely to be complex, written feedback alone could lead to frustration, confusion, and lack of understanding, and therefore to negative work outcomes, including reduced organizational commitment. The frequencies of the kinds of feedback are similar to those identified in earlier studies (Krause & Gebert, 2003; Kudisch et al., 2001; Spychalski et al., 1997).

Research on the timing of feedback indicates that feedback is most valuable when it is given immediately after a behavior (Thornton & Rupp, 2005). Unfortunately, only 7% of the organizations in South Africa provide feedback to participants immediately after AC completion. The majority of organizations provide feedback within 1 week, or more than 1 week, after the AC. South African organizations need to be encouraged to provide more timely feedback: Thornton, Gaugler, Rosenthal, and Bentson (1992) found that maximum learning occurred and the most behaviors were corrected when feedback was immediate. Feedback is given by an observer, an external expert, or an employee of the personnel department. Finally, the feedback includes information about the overall assessment rating and specific dimensions. In South Africa, it is relatively unusual to provide feedback on ratings in each exercise. Organizations standardize their feedback procedure in terms of its content and its medium to reduce uncertainty during this final AC stage.

    3.12. Features after the AC

As shown (see Table 1), the participants, the department head, and the direct supervisor are those most commonly informed about a participant's AC performance. In terms of confidentiality and storage of results, access should be restricted to those with a need to know and should be in accordance with what was agreed with the respondent during the AC administration. Interestingly, in contrast to data protection regulations such as the European Union Directive on Data Protection and the US Safe Harbor Privacy Principles, the South African Protection of Personal Information Bill is not yet law and as such is not yet legally binding. While the privacy of communications is covered in the South African Electronic Communications and Transactions Act (no. 25 of 2002), there is no case law on data protection, nor any legislation dealing specifically with data privacy (Michalson, 2009). In one third of the cases, a candidate's AC performance is stored in his or her personnel file. Only half of the South African organizations provide the possibility of reassessment. This point would depend on the time period that has passed before reassessment is requested; in selection scenarios, AC data should be utilized within 2 years of administration (Task Force on Assessment Center Guidelines, 2009). Another essential feature after the AC program is the evaluation procedure. Only two thirds of the organizations reported that any method of evaluation exists, although the evaluation stage is part of the legislative requirements in South Africa. Section 8 of the Employment Equity Act (1998) prohibits psychological testing and other similar assessments of an employee unless the test or assessment being used has been scientifically shown to be valid and reliable, can be applied fairly to all employees, and is not biased against any employee or group. Options to demonstrate the validity and reliability of assessment measures include in-house studies, detailed analysis of the job supporting content validity, or validity generalization from previous studies to the position in question. This result is consistent with the reported validation frequency in Kudisch et al.'s (2001) study in the United States, which found that two thirds of organizations carry out some sort of validation. It might be strategically risky for one third of the South African organizations to neglect an evaluation process; at the least, organizations should document the content validity evidence or validity generalization evidence supporting the applicability of the AC for the role in question, because no organization today can afford ineffective, inefficient, or indefensible AC procedures. Among those reporting some form of evaluation, only 40% of the organizations reported that written documents existed describing the evaluation, and only 21% stated that an external expert was involved in the evaluation process. In the two thirds of cases where systematic evaluation was carried out, the most common evaluation criteria were objectivity, reliability, predictive validity, content validity, and construct validity. Statistical testing of concurrent validity evidence is one feature missing in most South African organizations. Based on these findings, we can conclude that the evaluation process is in need of improvement insofar as written documentation should be produced and an external expert should conduct the evaluation of the overall AC program.
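To illustrate the kind of statistical evaluation that is missing, a basic concurrent validation correlates the overall assessment ratings of current employees with their job performance ratings. The sketch below is ours, uses hypothetical data, and is not drawn from the surveyed organizations:

```python
# Illustrative sketch (hypothetical data): basic concurrent validation,
# correlating overall assessment ratings (OARs) of current employees
# with their job performance ratings.

from statistics import mean, stdev

oars        = [3.2, 4.1, 2.8, 3.9, 4.4, 3.0, 3.6, 2.5]
performance = [2.9, 4.3, 3.1, 3.8, 4.0, 2.7, 3.9, 2.6]

def pearson_r(x, y):
    """Sample Pearson correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

print(f"concurrent validity r = {pearson_r(oars, performance):.2f}")
# In practice the coefficient would be corrected for range restriction
# and criterion unreliability, and far more than eight cases would be
# needed for a defensible validation study.
```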

    4. Discussion

This study fills two gaps in research on AC practices. The first comprehensive South African survey of a wide variety of AC features was conducted, and positive and negative trends in current South African AC practices have been identified and compared with previous surveys of AC practices in other countries. In the following section, we discuss study limitations and directions for future AC research. Finally, we offer suggestions for ways in which South African HR experts can improve their ACs.

    4.1. Study limitations

Our study goes well beyond previous AC research by involving a country in which no empirical study on AC practices had been conducted. In using our approach, however, there are a number of limitations worth noting. Whereas past studies on AC use have had samples of over 100 organizations (Kudisch et al., 2001: N = 115; Spychalski et al., 1997: N = 215), our sample is more modest and consistent with two studies of similar sample sizes (Krause & Gebert, 2003: N = 75; Krause & Thornton, 2009: Western Europe N = 45, North America N = 52). As past work has pointed out, many HR departments are overwhelmed with surveys, "thus causing many to be dropped in the bin" (Fletcher, 1994, p. 173). Nevertheless, future research is encouraged to replicate our findings with a larger sample size and broader representation of industries than the current study. Another concern is that most of our measures were based on single survey questions, and we surveyed only one individual per organization. One assumption inherent in this approach is that HR experts provide accurate descriptions of their AC practices (see Fletcher, 1994). Seeking to obtain parallel descriptions of AC use from additional experts within each company would have seriously jeopardized the return rate. Consequently, our method does not allow interrater reliability to be calculated. Future research is encouraged to replicate our findings using an approach in which two or three experts per organization are surveyed. Follow-up studies are also encouraged to analyze the kinds of adaptations required to operate AC practices in South African organizations that operate multinationally.

    4.2. Suggestions to improve South African AC practices

The results show that South African organizations could improve their AC practices. Before we summarize these aspects, we point out the pros in South African AC practices: AC features that should remain the same in the future. The findings have shown that sophisticated methods of job analysis (e.g., competency models) are used, a trend that is positive compared with other countries. Furthermore, South African organizations assess those dimensions with high construct validity and predictive validity (Arthur et al., 2003; Bowler & Woehr, 2006; Dilchert & Ones, 2009); the four dimensions that are assessed by most of the organizations are those that predict candidates' future job performance accurately. However, future AC programs are encouraged to consider assessor constructs in use as an important part of the validity of their programs (see Jones & Born, 2008). Another positive trend is that a broad spectrum of exercises is used. It is also worth mentioning that a combination of OAR and statistical aggregation is used to integrate the data of the AC. Although these features are carried out in an adequate manner, there is still room for improvement.

South African organizations should assess fewer job requirements. In doing so, one increases the predictive validity and construct validity evidence of the AC program. Conversely, if too many dimensions are assessed, the observers cannot distinguish among them, which decreases the construct validity of the AC. To increase the accuracy and effectiveness of the program, one should also increase the duration of the AC. In the design stage, it is highly important to develop the AC entirely around the division's own needs. Standard ACs and adaptations of standard ACs should be used less frequently in order to derive valid predictions from the AC results. Additionally, organizations need to improve their current AC practices by conducting pilot tests of exercises before implementation.

Moreover, HR experts should consider whether it is meaningful to integrate additional diagnostic procedures more frequently than in the past to increase the validity of their AC. Organizations should also consider relevant criteria (e.g., gender, race, ethnicity, educational level, age, functional work area, organizational level) in selecting the assessor pool. This strategy would enhance the probability that the assessor pool is balanced in terms of these criteria. To improve the AC, it is also important to enhance the contents of the observer training. It seems necessary to enlarge coverage of topics such as the relationship between dimensions and job performance, the ability to observe the dimensions independently, the ability to distinguish between the various dimensions, and the ability to focus on those dimensions for which the exercise has been designed. To facilitate the assessors' learning, organizations should think about the appropriate methods used in observer training. It might be helpful to use not only the discussion format but also video demonstrations, the observation of real candidates, or the observation of other assessors. During the final stages of the AC, the information policy toward participants should be improved, which would lead to higher acceptance of the AC program and commitment to the organization. The perceptions and reactions of candidates after the AC should be considered in more detail, as is common in personnel selection in other countries (Anderson & Goltsi, 2006; Huelsheger & Anderson, 2009). In addition, feedback should be provided in a timely fashion, ideally immediately after the completion of the AC. Furthermore, continual statistical evaluation of the AC is needed in all organizations to monitor the quality of AC practices. Organizations should also consider third-party involvement in the AC evaluation and document the evaluation process and its outcomes in writing. Although an evaluation procedure is costly and time intensive, it seems unavoidable in order to improve the quality control of an organization's personnel selection, promotion, and development decisions.

    Acknowledgements

Portions of this paper were presented as a keynote address at the 30th Annual Assessment Centre Study Group Conference, Stellenbosch, Western Cape, South Africa, March 17-19, 2010. We thank two anonymous reviewers and the editor for their constructive feedback on a previous version of this paper.

    References

Anderson, N., & Goltsi, V. (2006). Negative psychological effects of selection methods: Construct formulation and an empirical investigation into an assessment center. International Journal of Selection and Assessment, 14, 236-255.

Anderson, N., Salgado, J. F., & Huelsheger, U. R. (2010). Applicant reactions in selection: Comprehensive meta-analysis into reaction generalization versus situational specificity. International Journal of Selection and Assessment, 18, 291-304.

Arthur, W. Jr., Day, E. A., McNelly, T. L., & Edens, P. S. (2003). A meta-analysis of the criterion-related validity of assessment center dimensions. Personnel Psychology, 56, 125-154.

Assessment Centre Study Group. (2007). Guidelines for assessment and development centres in South Africa (4th ed.). Stellenbosch: Assessment Centre Study Group (ACSG). Available at http://www.acsg.co.za

Bowler, M. C., & Woehr, D. J. (2006). A meta-analytic evaluation of the impact of dimension and exercise factors on assessment center ratings. Journal of Applied Psychology, 91, 1114-1124.

Dayan, K., Fox, S., & Kasten, R. (2008). The preliminary employment interview as a predictor of assessment center outcomes. International Journal of Selection and Assessment, 16, 102-111.

Dayan, K., Kasten, R., & Fox, S. (2002). Entry-level police candidate assessment center: An efficient tool or a hammer to kill a fly? Personnel Psychology, 55, 827-849.

Dilchert, S., & Ones, D. S. (2009). Assessment center dimensions: Individual differences correlates and meta-analytic incremental validity. International Journal of Selection and Assessment, 17, 254-270.

Electronic Communications and Transactions Act, no. 25 (2002). Government Gazette, 446 (23708). Cape Town: Government Printers.

Employment Equity Act, no. 55 (1998). Government Gazette, 400 (19370). Cape Town: Government Printers.

Eurich, T., Krause, D. E., Cigularov, K., & Thornton, G. C. III (2009). Assessment centers: Current practices in the United States. Journal of Business and Psychology, 24, 387-407.

Flanagan, J. C. (1954). The critical incident technique. Psychological Bulletin, 51, 327-358.

Fletcher, C. (1994). Questionnaire surveys of organizational assessment practices: A critique of their methodology and validity, and a query about their future relevance. International Journal of Selection and Assessment, 2, 172-175.

Gaugler, B. B., Rosenthal, D. B., Thornton, G. C. III, & Benson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72, 493-511.

Hennessy, J., Mabey, B., & Warr, P. (1998). Assessment centre observation procedures: An experimental comparison of traditional, checklist and coding methods. International Journal of Selection and Assessment, 6, 222-231.

Hermelin, E., Lievens, F., & Robertson, I. T. (2007). The validity of assessment centres for the prediction of supervisory performance ratings: A meta-analysis. International Journal of Selection and Assessment, 15, 405-411.

Herriot, P., & Anderson, N. (1997). Selecting for change: How will personnel and selection psychology survive? In N. Anderson & P. Herriot (Eds.), International handbook of selection and assessment (pp. 1-32). London: Wiley.

Hoeft, S., & Obermann, C. (2009). Was ist ein Assessment Center? Annäherung an eine unscharfe Verfahrensklasse [What is an assessment center? An approximation to a fuzzy class of methods]. Presented at the 6th Congress of Work and Organizational Psychology, September 9-11, Vienna, Austria.

Huelsheger, U. R., & Anderson, N. (2009). Applicant perspectives in selection: Going beyond preference reactions. International Journal of Selection and Assessment, 17, 335-345.

Jackson, D., Stillman, J. A., & Englert, P. (2010). Task-based assessment centers: Empirical support for a systems model. International Journal of Selection and Assessment, 18, 141-154.

Jones, R., & Born, M. (2008). Assessor constructs in use as the missing component in validation of assessment center dimensions: A critique and directions for research. International Journal of Selection and Assessment, 16, 229-238.

Klehe, U. C. (2004). Choosing how to choose: Institutional pressures affecting the adoption of personnel selection procedures. International Journal of Selection and Assessment, 12, 327-342.

Kleinmann, M. (1997). Assessment Center: Stand der Forschung, Konsequenzen für die Praxis [The assessment center: The state of research and its consequences for practice]. Goettingen: Hogrefe.

Kleinmann, M. (2003). Assessment Center. Goettingen: Hogrefe.

Krause, D. E. (2010). Trends in international personnel selection. Goettingen: Hogrefe.

Krause, D. E., & Gebert, D. (2003). A comparison of assessment center practices in organizations in German-speaking regions and the United States. International Journal of Selection and Assessment, 11, 297-312.

Krause, D. E., Kersting, M., Heggestad, E. D., & Thornton, G. C. III (2006). Incremental validity of assessment center ratings over cognitive ability tests: A study at the executive management level. International Journal of Selection and Assessment, 14(4), 360-371.

Krause, D. E., & Thornton, G. C. III (2009). A cross-cultural look at assessment center practices: A survey in Western Europe and North America. Applied Psychology: An International Review, 58(4), 557-585.

Kudisch, J. D., Avis, J. M., Thibodeaux, H., & Fallon, J. D. (2001). A survey of assessment center practices in organizations worldwide: Maximizing innovation or business as usual? Paper presented at the 16th annual conference of the Society for Industrial and Organizational Psychology, San Diego, CA.

Lance, C. E., Lambert, T. A., Gewin, A. G., Lievens, F., & Conway, J. M. (2004). Revised estimates of dimension and exercise variance components in assessment center post-exercise dimension ratings. Journal of Applied Psychology, 89, 377-385.

Lievens, F. (2002). Trying to understand the different pieces of the construct validity puzzle of assessment centers: An examination of assessor and assessee effects. Journal of Applied Psychology, 87, 675-686.

Lievens, F., Dilchert, S., & Ones, D. S. (2009). The importance of exercise and dimension factors in assessment centers: Simultaneous examinations of construct-related and criterion-related validity. Human Performance, 22, 375-390.

Lievens, F., Harris, M. M., Van Keer, E., & Bisqueret, C. (2003). Predicting cross-cultural training performance: The validity of personality, cognitive ability, and dimensions measured by an assessment center and a behavioral description interview. Journal of Applied Psychology, 88, 476-489.

Lievens, F., & Thornton, G. C. III (2005). Assessment centers: Recent developments in practice and research. In A. Evers, N. Anderson, & O. Voskuijl (Eds.), The Blackwell handbook of personnel selection (pp. 243-264). Malden, MA: Blackwell.

Mandela, N. (1994). Statement of the President of the African National Congress, Nelson Rolihlahla Mandela, at his inauguration as President of the Democratic Republic of South Africa, Union Buildings, Pretoria, South Africa, May 10.

Meiring, D. (2008). In G. Roodt & S. Schlebusch (Eds.), Assessment centers (pp. 21-32). Johannesburg: Knowres Publishing.

Melchers, K. G., Kleinmann, M., & Prinz, M. A. (2010). Do assessors have too much on their plates? The effects of simultaneously rating multiple assessment center candidates on rating quality. International Journal of Selection and Assessment, 18, 329-341.

Meriac, J. P., Hoffman, B. J., Woehr, D. J., & Fleisher, M. S. (2008). Further evidence for the validity of assessment center dimensions: A meta-analysis of the incremental criterion-related validity of dimension ratings. Journal of Applied Psychology, 93, 1042-1052.

Michalson, L. (2009). Protection of Personal Information Bill: The implications for you. Available at http://www.michalsons.com/protection-of-personal-information-bill-the-implications-for-you/

Newell, S., & Tansley, C. (2001). International use of selection methods. In C. L. Cooper & I. T. Robertson (Eds.), International review of industrial and organizational psychology (Vol. 21, pp. 195-213). Chichester: Wiley.

Reilly, R. R., Henry, S., & Smither, J. W. (1990). An examination of the effects of using behavior checklists on the construct validity of assessment center dimensions. Personnel Psychology, 43, 71-84.

Ryan, A. M., McFarland, L., Baron, H., & Page, R. (1999). An international look at selection practices: Nation and culture as explanations for variability in practice. Personnel Psychology, 52, 359-391.

Ryan, A. M., Wiechmann, D., & Hemingway, M. (2003). Designing and implementing global staffing systems: Part II. Best practices. Human Resource Management, 42, 85-94.

Sagie, A., & Magnezy, R. (1997). Assessor type, number of distinguishable categories, and assessment centre construct validity. Journal of Occupational and Organizational Psychology, 70, 103-108.

Schlebusch, S., & Roodt, G. (2008). Assessment centers. Johannesburg: Knowres Publishing.

Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262-274.

Spychalski, A. C., Quinones, M. A., Gaugler, B. B., & Pohley, K. (1997). A survey of assessment center practices in organizations in the United States. Personnel Psychology, 50, 71-90.

Task Force on Assessment Center Guidelines. (2009). Guidelines and ethical considerations for assessment center operations. International Journal of Selection and Assessment, 17, 243-254.

Thornton, G. C. III, Gaugler, B. B., Rosenthal, D., & Bentson, C. (1992). Die prädiktive Validität des Assessment Centers: Eine Metaanalyse [The predictive validity of the assessment center: A meta-analysis]. In H. Schuler & W. Stehle (Eds.), Assessment-Center als Methode der Personalentwicklung (2nd ed., pp. 36-60). Goettingen: Hogrefe.

Thornton, G. C. III, & Mueller-Hanson, R. (2004). Developing organizational simulations: A guide for practitioners and students. Mahwah, NJ: Lawrence Erlbaum Associates.

Thornton, G. C. III, & Rupp, D. E. (2005). Assessment centers in human resource management: Strategies for prediction, diagnosis, and development. Mahwah, NJ: Erlbaum.
