%0 Journal Article
%T The Risks of Discretization: What Is Lost in (Even Good) Levels
%A Christopher A. Miller
%J Journal of Cognitive Engineering and Decision Making
%D 2018
%R 10.1177/1555343417726254
%X In this reaction to David Kaber's article in this volume, the author points to an inherent problem in applying any "levels" scheme to the continuous, multidimensional space of human–automation relationships and behaviors. Discretization inherently carves a continuous, analog space into discrete blocks that, the claim is, one can treat homogenously. The author provides a counterexample using a common automated e-mail filtering system as an example of how applying a single "level-of-automation" category to the whole system (or even to information-processing stages of components within it) misrepresents and suppresses details about what the system is actually doing and how it interacts with human users. Discretization can be highly productive if it pares away confusing detail that distracts from underlying explanatory relationships, but, the author argues, not enough is known about human–automation interaction in all its variability to effectively suppress detail. Thus one needs the better models Kaber is calling for before being able to create an effective levels-of-automation scheme, not vice versa.
%K automation
%K cognitive models
%K level of automation
%K human–automation interaction
%U https://journals.sagepub.com/doi/full/10.1177/1555343417726254