To understand why some categorization tasks are more difficult than others, we consider five factors that may affect human performance: covariance complexity, optimal accuracy with and without internal noise, the orientation of the optimal categorization rule, and class separability. We argue that covariance complexity, an information-theoretic measure of complexity, is an excellent predictor of task difficulty. We present an experiment consisting of five conditions that use a simulated medical decision-making task. In the task, human observers view hundreds of hypothetical patient profiles and classify each profile into Disease Category A or B. Each profile is a continuous-valued, three-dimensional stimulus consisting of three vertical bars, where the height of each bar represents the result of a medical test. Covariance complexity is systematically manipulated across the five conditions. Results indicate that variation in performance is largely a function of covariance complexity and partly a function of internal noise; the remaining three factors do not explain the performance results. We present a challenge to categorization theorists to design models that account for human performance as predicted by covariance complexity.
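The abstract does not spell out how covariance complexity is computed. One widely used information-theoretic formulation is Bozdogan's C1 measure, which is zero for a spherical covariance matrix and grows as the dimensions become unequal in variance or correlated; the sketch below assumes that definition (the paper's exact formula may differ), with `covariance_complexity` as a hypothetical helper name.

```python
import numpy as np

def covariance_complexity(sigma: np.ndarray) -> float:
    """Bozdogan's C1 covariance complexity (assumed formulation):
    C1 = (p/2) * ln(tr(Sigma) / p) - (1/2) * ln(det(Sigma)).
    It is 0 iff Sigma is a multiple of the identity matrix, and
    increases with unequal variances or correlated dimensions."""
    p = sigma.shape[0]
    return 0.5 * p * np.log(np.trace(sigma) / p) \
        - 0.5 * np.log(np.linalg.det(sigma))

# A spherical (identity) covariance is maximally simple ...
spherical = np.eye(3)
# ... while correlating two of the three dimensions raises complexity.
correlated = np.array([[1.0, 0.8, 0.0],
                       [0.8, 1.0, 0.0],
                       [0.0, 0.0, 1.0]])

print(covariance_complexity(spherical))   # 0.0
print(covariance_complexity(correlated))  # > 0
```

Under this formulation, manipulating the off-diagonal entries of a category's covariance matrix (as in a three-test patient profile) changes the complexity value while leaving each test's marginal variance fixed.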