Description
The user-system trust relationship plays a crucial role in the effectiveness of visual analytics (VA) tools and approaches such as guidance. Trust can vary rapidly in systems subject to high uncertainty, stemming from factors such as data uncertainty and limited visualization literacy [1]. Designers tune their design decisions to build and maintain user trust in the system and its tools. However, detecting trust in users' attitudes, and its variability over the course of an analysis session, is challenging [2]. The literature largely analyzes user trust during the evaluation phase of a project; yet real-time detection and analysis of trust is needed to calibrate visualization tools to the user and the analysis context [3,4,5,6]. We are interested in investigating and developing an approach to analyze trust objects and issues, i.e., a model that formalizes the externalization of user trust [7], together with methods of trust expression. We aim to use the results to support the design of VA tools and approaches such as explainability techniques [8,9].
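To make the idea of real-time trust detection more concrete, the following minimal Python sketch shows one possible direction: mapping interaction events to heuristic trust proxies and smoothing them into a running trust estimate. All event names, weights, and the smoothing scheme are illustrative assumptions, not part of the project specification.

```python
# Illustrative sketch only: a running trust estimate derived from
# interaction events via exponential smoothing. The event types and
# weights below are hypothetical assumptions, not project requirements.
from dataclasses import dataclass

# Hypothetical mapping from interaction events to signed trust proxies:
# positive values suggest reliance on the system, negative values doubt.
EVENT_WEIGHTS = {
    "accept_guidance": +0.3,
    "reject_guidance": -0.4,
    "undo_action": -0.2,
    "verify_against_raw_data": -0.1,
    "reuse_recommended_view": +0.2,
}

@dataclass
class TrustEstimator:
    """Smooths event-level trust proxies into a [0, 1] trust estimate."""
    trust: float = 0.5   # neutral prior
    alpha: float = 0.1   # smoothing factor: higher reacts faster

    def update(self, event: str) -> float:
        signal = EVENT_WEIGHTS.get(event, 0.0)
        # Map the signed signal onto [0, 1] around the neutral point 0.5,
        # then blend it into the running estimate.
        target = min(1.0, max(0.0, 0.5 + signal))
        self.trust = (1 - self.alpha) * self.trust + self.alpha * target
        return self.trust

estimator = TrustEstimator()
for event in ["accept_guidance", "undo_action", "reject_guidance"]:
    print(f"{event:>25}: trust = {estimator.update(event):.3f}")
```

Such an estimator is deliberately simplistic; part of the project would be to investigate which interaction signals actually correlate with trust and how they should be weighted and aggregated.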
Tasks
In this project, you will conduct a thorough literature review on trust and develop a model for formalizing and externalizing trust that generalizes across multiple fields of visualization. You will then develop a framework to demonstrate the applicability of your model; a hypothetical sketch of what such a formalization might look like follows below.
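As one possible starting point for such a formalization, trust objects (the things a user places trust in, e.g., a data source or an algorithm) and trust issues (concerns voiced against them) could be captured as explicit records. The Python sketch below is purely illustrative; all class names, kinds, and fields are assumptions, not a prescribed design.

```python
# Hypothetical sketch of a trust-externalization model: trust objects
# (things a user can place trust in) and trust issues (concerns raised
# against them). Names and fields are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class TrustObjectKind(Enum):
    DATA_SOURCE = "data source"
    ALGORITHM = "algorithm"
    VISUAL_ENCODING = "visual encoding"
    GUIDANCE = "guidance"

@dataclass
class TrustIssue:
    description: str   # e.g., a concern the user voiced about the object
    severity: float    # assumed scale: 0 (minor) .. 1 (blocking)

@dataclass
class TrustObject:
    name: str
    kind: TrustObjectKind
    issues: List[TrustIssue] = field(default_factory=list)

    def report(self, description: str, severity: float) -> None:
        """Externalize a concern the user raised about this object."""
        self.issues.append(TrustIssue(description, severity))

source = TrustObject("sales_2019.csv", TrustObjectKind.DATA_SOURCE)
source.report("missing values in Q3 records", severity=0.6)
print(f"{source.name}: {len(source.issues)} open trust issue(s)")
```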
Requirements
- Knowledge of the English language (source code comments and the final report must be in English)
- Knowledge of Python or a comparable programming language is advantageous
- Interest in the intersection of visualization with psychosocial topics
Environment
The project should be implemented as a standalone application, either desktop or web-based (to be discussed).
References
- D. Sacha, H. Senaratne, B. C. Kwon, G. Ellis, and D. A. Keim, "The Role of Uncertainty, Awareness, and Trust in Visual Analytics," in IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, pp. 240-249, Jan. 2016, doi: 10.1109/TVCG.2015.2467591.
- A. Uggirala, A. K. Gramopadhye, B. J. Melloy, and J. E. Toler, "Measurement of trust in complex and dynamic systems using a quantitative approach," in International Journal of Industrial Ergonomics, vol. 34, no. 3, pp. 175-186, 2004, doi: 10.1016/j.ergon.2004.03.005.
- N. Boukhelifa, E. Lutton and A. Bezerianos, "A Case Study of Using Analytic Provenance to Reconstruct User Trust in a Guided Visual Analytics System," 2021 IEEE Workshop on TRust and EXpertise in Visual Analytics (TREX), New Orleans, LA, USA, 2021, pp. 45-51, doi: 10.1109/TREX53765.2021.00013.
- W. Dou, W. Ribarsky, and R. Chang, "Capturing Reasoning Process through User Interaction," in EuroVAST 2010: International Symposium on Visual Analytics Science and Technology, 2010, doi: 10.2312/PE/EuroVAST/EuroVAST10/033-038.
- E. J. de Visser, M. Cohen, A. Freedy, and R. Parasuraman, "A Design Methodology for Trust Cue Calibration in Cognitive Agents," in Virtual, Augmented and Mixed Reality. Designing and Developing Virtual and Augmented Environments (VAMR 2014), R. Shumaker and S. Lackey, Eds., Lecture Notes in Computer Science, vol. 8525, Springer, Cham, 2014, doi: 10.1007/978-3-319-07458-0_24.
- W. Han and H.-J. Schulz, "Beyond Trust Building — Calibrating Trust in Visual Analytics," 2020 IEEE Workshop on TRust and EXpertise in Visual Analytics (TREX), Salt Lake City, UT, USA, 2020, pp. 9-15, doi: 10.1109/TREX51495.2020.00006.
- S. van den Elzen et al., "The Flow of Trust: A Visualization Framework to Externalize, Explore, and Explain Trust in ML Applications," in IEEE Computer Graphics and Applications, vol. 43, no. 2, pp. 78-88, March-April 2023, doi: 10.1109/MCG.2023.3237286.
- T. Miller, "Explanation in artificial intelligence: Insights from the social sciences," in Artificial Intelligence, vol. 267, pp. 1-38, 2019, doi: 10.1016/j.artint.2018.07.007.
- D. Wang, Q. Yang, A. Abdul, and B. Y. Lim, "Designing Theory-Driven User-Centric Explainable AI," in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), pp. 1-15, 2019, doi: 10.1145/3290605.3300831.