Description:
This study examined several issues related to evaluating job analysis information. The first issue concerned the relationship between job analysis reliability and accuracy and how these variables can be used to estimate job analysis validity. Performance on reliability and accuracy indices was compared to determine the extent to which such indices identify a consistent set of reliable and accurate raters. Similar to Green and Stutzman (1986), this study also compared the rating profile of the selected accurate and reliable raters with that of the entire rater population. The second issue concerned the impact of other individual differences, such as job tenure and experience, on the validity of job analysis ratings. In addition, the effect that individual rater fatigue may have on job analysis reliability was explored. Finally, the study addressed whether the distinction between in-role and extra-role behavior may help explain some of the variance in job analysis ratings; that is, whether the nature of the task has any impact on job analysis reliability or accuracy.

A significant relationship was found between reliability and accuracy. The highest mean reliabilities were found for the reliable raters, compared to the accurate raters and the incumbent population, demonstrating the impact that rater reliability has on the validity of the instrument. A correlation analysis among reliability scores, accuracy scores, and individual difference variables revealed a significant negative correlation between reliability and organizational tenure. No significant relationships were found between education and reliability scores or between education and accuracy scores. An analysis of reliability over the course of a job analysis inventory showed that mean reliabilities trended downward initially and then sloped slightly back upward. The downward trend may suggest that fatigue reduces rater reliability over time, and the sudden change in trend could indicate the point at which incumbents changed survey format (from computer to paper-and-pencil or vice versa). Lastly, the nature of the task appears to have an impact on the reliability of job analysis ratings, as mean reliabilities for in-role tasks were higher than those for extra-role tasks.

In sum, this study sought to explain the potential impact of accuracy, reliability, individual differences, and the nature of the task on the validity of job analysis ratings. It expands the knowledge base concerning the interrelationships among these factors and the extent to which this knowledge could lead to a model for selecting accurate and reliable job analysis subject matter experts (SMEs).
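To make the comparison of reliability and accuracy indices concrete, the sketch below illustrates one common way such rater-level indices are defined: a rater's reliability as the correlation of his or her rating profile with the mean profile of the remaining raters, and accuracy as the correlation of that profile with an expert-derived "true score" profile, after which the two indices are correlated across raters. The simulated data, the variable names (incumbent_ratings, expert_profile), and the use of Pearson correlations are illustrative assumptions for this sketch, not the study's actual procedure or data.

    # Minimal illustrative sketch (assumed definitions, not the study's computation):
    #   reliability[i] = correlation of rater i's profile with the mean profile of all other raters
    #   accuracy[i]    = correlation of rater i's profile with a hypothetical expert "true score" profile
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n_raters, n_tasks = 50, 40

    # Hypothetical 7-point task ratings and a hypothetical expert profile
    incumbent_ratings = rng.integers(1, 8, size=(n_raters, n_tasks)).astype(float)
    expert_profile = incumbent_ratings.mean(axis=0) + rng.normal(0, 0.5, n_tasks)

    reliability = np.empty(n_raters)
    accuracy = np.empty(n_raters)
    for i in range(n_raters):
        others_mean = np.delete(incumbent_ratings, i, axis=0).mean(axis=0)
        reliability[i] = pearsonr(incumbent_ratings[i], others_mean)[0]
        accuracy[i] = pearsonr(incumbent_ratings[i], expert_profile)[0]

    # Relationship between the two indices across raters
    r, p = pearsonr(reliability, accuracy)
    print(f"reliability-accuracy correlation: r = {r:.2f}, p = {p:.3f}")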