Effects of the Accuracy and Visual Representation of Machine Learning Outputs on Human Decision Making
Abstract not provided.
IEEE Transactions on Visualization and Computer Graphics
Recently, an approach for determining the value of a visualization was proposed, one moving beyond simple measurements of task accuracy and speed. The value equation contains components for the time savings a visualization provides, the insights and insightful questions it spurs, the overall essence of the data it conveys, and the confidence about the data and its domain it inspires. This articulation of value is purely descriptive, however, providing no actionable method of assessing a visualization's value. In this work, we create a heuristic-based evaluation methodology to accompany the value equation for assessing interactive visualizations. We refer to the methodology colloquially as ICE-T, based on an anagram of the four value components. Our approach breaks the four components down into guidelines, each of which is made up of a small set of low-level heuristics. Evaluators who have knowledge of visualization design principles then assess the visualization with respect to the heuristics. We conducted an initial trial of the methodology on three interactive visualizations of the same data set, each evaluated by 15 visualization experts. We found that the methodology showed promise, obtaining consistent ratings across the three visualizations and mirroring judgments of the utility of the visualizations by instructors of the course in which they were developed.
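As a rough illustration of how such a heuristic-based assessment might be aggregated, the sketch below assumes that each of the four value components (Insight, Confidence, Essence, Time) is broken into guidelines, each guideline into a few low-level heuristics rated numerically by each evaluator. The guideline and heuristic names, the 1–7 rating scale, and the simple averaging scheme are illustrative assumptions, not the instrument published with ICE-T.

```python
# Illustrative sketch of aggregating ICE-T-style heuristic ratings.
# The guideline names, 1-7 scale, and averaging scheme are assumptions
# made for illustration, not the published methodology.
from statistics import mean

# ratings[evaluator][component][guideline] -> list of heuristic scores (1-7)
ratings = {
    "evaluator_1": {
        "Insight":    {"spurs questions": [6, 5], "reveals patterns": [7]},
        "Confidence": {"conveys uncertainty": [4, 5]},
        "Essence":    {"shows big picture": [6]},
        "Time":       {"answers quickly": [5, 6]},
    },
    "evaluator_2": {
        "Insight":    {"spurs questions": [5, 4], "reveals patterns": [6]},
        "Confidence": {"conveys uncertainty": [5, 6]},
        "Essence":    {"shows big picture": [7]},
        "Time":       {"answers quickly": [6, 6]},
    },
}

def component_scores(ratings):
    """Average heuristic ratings into guideline scores, guideline scores
    into component scores, then average each component across evaluators."""
    per_component = {}
    for evaluator in ratings.values():
        for component, guidelines in evaluator.items():
            guideline_means = [mean(scores) for scores in guidelines.values()]
            per_component.setdefault(component, []).append(mean(guideline_means))
    return {c: mean(vals) for c, vals in per_component.items()}

print(component_scores(ratings))
```

Averaging upward through the hierarchy keeps each component score on the same scale as the individual heuristics, which is one plausible way to compare the four value components across visualizations; the actual methodology may weight or combine ratings differently.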
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
International nuclear safeguards inspectors visit nuclear facilities to assess their compliance with international nonproliferation agreements. Inspectors note whether anything unusual is happening in the facility that might indicate the diversion or misuse of nuclear materials, or anything that has changed since the last inspection. They must complete inspections under restrictions imposed by their hosts, regarding both the technology and equipment they may use and the time allotted. Moreover, because inspections are sometimes completed by different teams months apart, it is crucial that their notes accurately support change detection across a delay. The current study addressed these issues by investigating how note-taking methods (e.g., digital camera, hand-written notes, or their combination) affected memory in a delayed recall test of a complex visual array. Participants studied four arrays of abstract shapes and industrial objects using a different note-taking method for each, then returned 48–72 hours later to complete a memory test, using their notes to identify objects that had changed (e.g., in location, material, or orientation). Accuracy was highest for the two conditions using a camera, followed by hand-written notes alone, and all were better than having no aid. Although the camera-only condition reduced study times, this benefit was not observed at test, suggesting drawbacks to relying on a camera alone to aid recall. Change type interacted with note-taking method; although certain changes were more difficult overall, the note-taking method used helped mitigate these deficits in performance. Finally, elaborative hand-written notes produced better performance than simple ones, suggesting strategies individual note-takers can use to maximize their efficacy in the absence of a digital aid.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
International nuclear safeguards inspectors are tasked with verifying that nuclear materials in facilities around the world are not misused or diverted from peaceful purposes. They must conduct detailed inspections in complex, information-rich environments, but there has been relatively little research into the cognitive aspects of their jobs. We posit that the speed and accuracy of the inspectors can be supported and improved by designing the materials they take into the field such that the information is optimized to meet their cognitive needs. Many in-field inspection activities involve comparing inventory or shipping records to other records or to physical items inside of a nuclear facility. The organization and presentation of the records that the inspectors bring into the field with them could have a substantial impact on the ease or difficulty of these comparison tasks. In this paper, we present a series of mock inspection activities in which we manipulated the formatting of the inspectors’ records. We used behavioral and eye tracking metrics to assess the impact of the different types of formatting on the participants’ performance on the inspection tasks. The results of these experiments show that matching the presentation of the records to the cognitive demands of the task led to substantially faster task completion.