We live in a mediated world that is increasingly governed, judged, and served back to us by computer code. The emergence of this data-driven era challenges the optimism about the participatory Web 2.0 espoused a decade earlier. Academic work in digital media, communication and cultural studies has only just begun to examine the side effects of big data. Although people contribute much of the data that algorithmic systems operate on, those systems remain largely opaque ‘black boxes’, closed off from public understanding, scrutiny and control. The technical systems and platforms that were heralded at the beginning of the century as enabling participation have downsides and consequences that are not yet well understood. Large data sets of user preferences and interactions inform the sorting and curation of digital content and news feeds on social media platforms such as Facebook and Twitter, and search results on Google and Amazon are equally shaped and ranked by such algorithmic filters.

Verbose mode is a feature available in many programs and integrated development environments that makes the software report, step by step and in human-readable form, what it is doing, typically for debugging or optimising code. This IFH challenge asks: how can we increase algorithmic transparency and reveal the inner workings of AI to users? Verbose mode proposes creating new or experimental replicas of existing big data and AI applications with an open bonnet, so that they explicitly display and explain to users how the embedded algorithms arrive at particular decisions or search results. We suggest focusing on smart city and urban data applications such as journey planners, location-based recommender systems, or other map-based / spatial data applications.
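
As a toy illustration of what such an ‘open bonnet’ might look like in code, here is a minimal Python sketch of an ordinary ranking function with a verbose flag that narrates each decision it makes. All names and scores here are invented for illustration, not an existing API:

```python
def rank_results(items, score_fn, verbose=False):
    """Rank items by score; in verbose mode, explain each step taken."""
    scored = []
    for item in items:
        score = score_fn(item)
        if verbose:
            print(f"scored {item!r} -> {score:.2f}")
        scored.append((score, item))
    scored.sort(reverse=True, key=lambda pair: pair[0])
    if verbose:
        print("final ranking:", [item for _, item in scored])
    return [item for _, item in scored]

# Example: rank pages by a (hypothetical) relevance score
pages = ["park", "museum", "cafe"]
relevance = {"park": 0.4, "museum": 0.9, "cafe": 0.7}
rank_results(pages, relevance.get, verbose=True)
```

Run with verbose=True, the function prints every score it computes before returning the ranking; exposing this kind of trace to users is, in miniature, what the challenge asks real applications to do.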

Examples:

  • A journey planner that explains why certain routes have been identified as fastest or shortest (see the first sketch after this list).
  • A tourist guidance system for sightseeing that explains what criteria are being analysed in order to produce recommendations (see the second sketch after this list).
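
To make the first example concrete, below is a minimal, hypothetical sketch of an ‘explaining’ journey planner: a standard Dijkstra shortest-path search that also reports the per-edge travel times behind the route it returns. The road network and travel times are invented for illustration:

```python
import heapq

def shortest_route_explained(graph, start, goal):
    """Dijkstra's algorithm that also returns a human-readable explanation."""
    queue = [(0, start, [start])]
    best = {start: 0}
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == goal:
            # Explain the result: list each leg and its travel time.
            steps = [f"{a} -> {b}: {graph[a][b]} min"
                     for a, b in zip(path, path[1:])]
            return path, time, "Fastest route because: " + "; ".join(steps)
        for nxt, minutes in graph[node].items():
            if time + minutes < best.get(nxt, float("inf")):
                best[nxt] = time + minutes
                heapq.heappush(queue, (time + minutes, nxt, path + [nxt]))
    return None, float("inf"), "No route found."

# Toy road network: travel times in minutes (illustrative only)
roads = {
    "home":    {"station": 5, "bridge": 9},
    "station": {"bridge": 3, "centre": 12},
    "bridge":  {"centre": 6},
    "centre":  {},
}
path, total, why = shortest_route_explained(roads, "home", "centre")
print(path, total)  # ['home', 'station', 'bridge', 'centre'] 14
print(why)
```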
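
Likewise, a hypothetical sightseeing recommender could expose its criteria as a per-factor score breakdown, as in the sketch below; the linear weighting scheme, the weights, and the attractions are assumptions made for illustration:

```python
# Hypothetical criteria weights a tourist recommender might use
WEIGHTS = {"distance": -0.5, "rating": 1.0, "matches_interest": 2.0}

attractions = [
    {"name": "Old Town Hall", "distance": 0.4, "rating": 4.2, "matches_interest": 1},
    {"name": "Science Museum", "distance": 2.1, "rating": 4.7, "matches_interest": 0},
]

def recommend_explained(places):
    """Score each place and report the contribution of every criterion."""
    results = []
    for place in places:
        contributions = {k: WEIGHTS[k] * place[k] for k in WEIGHTS}
        total = sum(contributions.values())
        results.append((total, place["name"], contributions))
    results.sort(key=lambda r: r[0], reverse=True)
    for total, name, parts in results:
        detail = ", ".join(f"{k}: {v:+.2f}" for k, v in parts.items())
        print(f"{name} (score {total:.2f}) because {detail}")
    return [name for _, name, _ in results]

recommend_explained(attractions)
```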

Potential resources:

  • https://medium.freecodecamp.org/an-introduction-to-explainable-ai-and-why-we-need-it-a326417dd000
  • https://en.wikipedia.org/wiki/Explainable_Artificial_Intelligence
  • https://www.darpa.mil/program/explainable-artificial-intelligence