Algorithmic Literacy: A Compass to Successfully Navigate the Algorithm-Driven World?

Driven by algorithms, i.e., sets of rules to be followed in calculations, computers have altered how we tackle and organize activities across many domains. Whereas algorithms were initially applied to simple tasks under human control, today they take over managerial functions (Möhlmann et al., 2021) and mimic human intelligence (Benbya et al., 2024). Since hardly any sphere remains untouched by algorithms, the public must be equipped with the tools and knowledge needed to use them comfortably, innovatively, responsibly, effectively, and ethically.
New algorithmic tools are continuously coming onto the market. Their user-friendly interfaces offer a low entry barrier for professionals and the broader public. This simplified access paves the way for widespread use as intended, supporting domains such as finance (Strich et al., 2021), education (Chen, 2022), academic research (Sarker et al., 2024), chemistry (Lou and Wu, 2021), healthcare (Jussupow et al., 2021; Abdel-Karim et al., 2023), and social media (Salge et al., 2022). Alongside these benign uses, harmful and even malicious uses of algorithms are also on the rise. For example, deep learning techniques that fabricate images, audio, and events (i.e., deepfakes) (Vasist and Krishnan, 2022) make it possible to put new words in a politician’s mouth, cast someone in a favorite movie, or have them dance like a pro. Artificially created content, initially recognizable by its imperfections, has become almost indistinguishable from authentic material as the technology has advanced. At the same time, public awareness of algorithmic systems is surprisingly low: only 62% of people report having seen, read, or heard anything about algorithmic systems, and the majority self-report a low understanding of how those systems work (Curtis et al., 2023). When presented with common applications, many were unaware that the technology relied on algorithms (ibid.). To successfully navigate our data- and algorithm-driven society, including an increasingly confusing digital sphere, algorithmic literacy has been identified as an essential skill (Burton et al., 2020). Generally, algorithmic literacy can be seen as the capability of humans to understand and effectively interact with algorithmic systems (Benbya et al., 2024). The IS field and related disciplines such as cognitive psychology, media studies, and marketing have started to conceptualize and develop measures of how familiar and confident people are in handling algorithms in their everyday and professional lives. However, many open questions remain, which we want to address in this panel.

First, the concept of algorithmic literacy is unclear. Exemplary scales include: algorithmic literacy (Oeldorf-Hirsch and Neubaum, 2023; Dogruel et al., 2022), AI literacy (Pinski and Benlian, 2023; Wang et al., 2022), data literacy (Lefebvre and Legner, 2024), statistical literacy (Callingham and Watson, 2005), computational literacy (Tsai et al., 2021), IT literacy (Bassellier et al., 2003; Bassellier and Benbasat, 2004), the technology readiness index (Parasuraman, 2000), numeracy (Lipkus et al., 2001), and an ever-growing list of similar terms. However, a closer examination of these measurements shows that the same constructs carry different names and the same labels denote different constructs. Therefore, the first aim of this panel is to articulate the concept of algorithmic literacy and disentangle it from related concepts. We anticipate acknowledging and agreeing on the existence of a multitude of algorithmic literacies (depending on the application area) and giving directions for developing a taxonomy.

Second, it remains to be seen how to assess algorithmic literacy properly. Research has so far developed subjective and simple objective measures (Dogruel et al., 2022; Wang et al., 2022). However, subjective scales are prone to over- or underestimation (Mabe and West, 1982). This raises the question of whether we need additional objective measures of an individual’s proficiency in algorithmically supported work. Especially in high-stakes decisions, such as medical diagnosis (Ahsen et al., 2019) or criminal justice, one cannot reasonably rely on a workforce whose skills have undergone only subjective assessment. Would external, independent institutions that certify levels of algorithmic literacy be a solution, similar to those for driving licenses, foreign language proficiency (TOEFL, Cambridge), and mathematical skills (GRE, GMAT)?

Third, questions arise about the education and training of algorithmic literacy. How should we prepare the broader public, future managers, and workers to effectively collaborate with algorithmic systems? For example, how can we prevent people from accepting poor algorithmic advice or from dismissing sound algorithmic advice (Dietvorst et al., 2015)? What is the minimum level of competence for people working in human-algorithm teams? If we could measure algorithmic literacy reliably, we would be able to develop such education and training programs. These would include teaching individuals how to critically evaluate the algorithms they encounter in their daily lives, as well as providing them with an understanding of the broader social, ethical, and political issues associated with algorithmic decision-making.

Focus of the Panel

Through this panel, we seek to debate the definition and measurement of algorithmic literacy from an IS perspective. We are confident this will spark strong interest in the IS community and help initiate progress in this area. Questions that this panel aims to explore include:

  • How can we define algorithmic literacy? How can it be disentangled from related concepts? Is there a multitude of algorithmic literacies?
  • How can we measure algorithmic literacy? Are objective or subjective scales more suitable?
  • How do we enable algorithmic literacy? Whom should we educate, and at what age? How can we assess whether algorithmic literacy has been achieved?
  • What is the minimum set of skills required? Should the level of proficiency be contingent on the level of responsibility?
  • Do we need an external certification institution similar to standardized language and math tests?

Panel Structure

The panel is planned as an on-site, 90-minute event. Two moderators and five panelists with expertise in algorithmic literacy and related areas form the core of the program. First, the moderators will summarize the panel’s motivation and purpose. Each panelist will then be briefly introduced and give an initial statement on the discussion points according to their expertise and the most pressing issues. Given the intention to foster an active exchange between the panelists and the audience, the remaining time will be devoted to an interactive format in which questions from the audience are strongly encouraged. In the final stage of the panel, the moderators will summarize its main takeaways.

Panelists

  • Jussupow, Ekaterina, Technische Universität Darmstadt, Germany
  • Legner, Christine, University of Lausanne, Switzerland
  • Meythaler, Antonia, Weizenbaum Institute for the Networked Society, Berlin and Universität Potsdam, Germany
  • Müller, Oliver, Universität Paderborn, Germany
  • Pinski, Marc, Technische Universität Darmstadt, Germany

Moderators

  • Abramova, Olga, Leuphana University Lüneburg, Germany
  • Heimbach, Irina, WHU – Otto Beisheim School of Management, Germany

References

Abdel-Karim, B. M., Pfeuffer, N., Carl, K. V. and Hinz, O. (2023). “How AI-based systems can induce reflections: The case of AI-augmented diagnostic work,” MIS Quarterly 47 (4), 1395-1424.

Ahsen, M. E., Ayvaci, M. U. S. and Raghunathan, S. (2019). “When algorithmic predictions use human-generated data: A bias-aware classification algorithm for breast cancer diagnosis,” Information Systems Research 30 (1), 97-116.

Bassellier, G. and Benbasat, I. (2004). “Business competence of information technology professionals: Conceptual development and influence on IT–business partnerships,” MIS Quarterly 28 (4), 673-694.

Bassellier, G., Benbasat, I. and Reich, B. H. (2003). “The influence of business managers’ IT competence on championing IT,” Information Systems Research 14 (4), 317-336.

Benbya, H., Strich, F. and Tamm, T. (2024). “Navigating generative artificial intelligence promises and perils for knowledge and creative work,” Journal of the Association for Information Systems 25 (1), 23-36.

Burton, J. W., Stein, M. K. and Jensen, T. B. (2020). “A systematic review of algorithm aversion in augmented decision making,” Journal of Behavioral Decision Making 33 (2), 220-239.

Callingham, R. and Watson, J. M. (2005). “Measuring statistical literacy,” Journal of Applied Measurement 6 (1), 19-47.

Chen, L. (2022). “Current and future artificial intelligence (AI) curriculum in business school: A text mining analysis,” Journal of Information Systems Education 33 (4), 416-426.

Curtis, C., Gillespie, N. and Lockey, S. (2023). “AI-deploying organizations are key to addressing ‘perfect storm’ of AI risks,” AI and Ethics 3 (1), 145-153.

Dietvorst, B. J., Simmons, J. P. and Massey, C. (2015). “Algorithm aversion: People erroneously avoid algorithms after seeing them err,” Journal of Experimental Psychology: General 144 (1), 114-126.

Fügener, A., Grahl, J., Gupta, A. and Ketter, W. (2021). “Will humans-in-the-loop become borgs? Merits and pitfalls of working with AI,” MIS Quarterly 45 (3), 1527-1557.

Jussupow, E., Spohrer, K., Heinzl, A. and Gawlitza, J. (2021). “Augmenting medical diagnosis decisions? An investigation into physicians’ decision making process with artificial intelligence,” Information Systems Research 32 (3), 713-735.

Kim, C. S. and Keith, N. K. (1994). “Computer literacy topics: A comparison of views within a business school,” Journal of Information Systems Education 6 (2), 55-59.

Lipkus, I., Samsa, G. and Rimer, B. (2001). “General performance on a numeracy scale among highly educated samples,” Medical Decision Making 21 (1), 37-44.

Lou, B. and Wu, L. (2021). “AI on drugs: Can artificial intelligence accelerate drug development? Evidence from a large-scale examination of bio-pharma firms,” MIS Quarterly 45 (3), 1451-1482.

Mabe, P. A. and West, S. G. (1982). “Validity of self-evaluation of ability: A review and meta-analysis,” Journal of Applied Psychology 67 (3), 280-296.

Möhlmann, M., Zalmanson, L., Henfridsson, O. and Gregory, R. W. (2021). “Algorithmic management of work on online labor platforms: When matching meets control,” MIS Quarterly 45 (4), 1999-2022.

Oeldorf-Hirsch, A. and Neubaum, G. (2023). “What do we know about algorithmic literacy? The status quo and a research agenda for a growing field,” New Media & Society.

Parasuraman, A. (2000). “Technology Readiness Index (TRI): A multiple-item scale to measure readiness to embrace new technologies,” Journal of Service Research 2 (4), 307-320.

Salge, C. A. D. L., Karahanna, E. and Thatcher, J. B. (2022). “Algorithmic processes of social alertness and social transmission: How bots disseminate information on Twitter,” MIS Quarterly 46 (1), 229-259.

Sarker, S., Susarla, A., Gopal, R. and Thatcher, J. B. (2024). “Democratizing knowledge creation through human-AI collaboration in academic peer review,” Journal of the Association for Information Systems 25 (1), 158-171.

Strich, F., Mayer, A. S. and Fiedler, M. (2021). “What do I do in a world of artificial intelligence? Investigating the impact of substitutive decision-making AI systems on employees’ professional role identity,” Journal of the Association for Information Systems 22 (2), 304-324.

Tsai, M.-J., Liang, J.-C. and Hsu, C.-Y. (2021). “The computational thinking scale for computer literacy education,” Journal of Educational Computing Research 59 (4), 579-602.

Vasist, P. and Krishnan, S. (2022). “Deepfakes: An integrative review of the literature and an agenda for future research,” Communications of the Association for Information Systems 51 (14), 590-636.

Wang, B., Rau, P.-L. P. and Yuan, T. (2022). “Measuring user competence in using artificial intelligence: Validity and reliability of artificial intelligence literacy scale,” Behaviour & Information Technology 42 (3), 1-14.