Year
2026
Students
Lucas Munkeberg Brenne, Amalie A. Larsen
Project
Seeing Through Synthetic Media
Tagged
User agency, AI, transparency, visual communication

This project investigates how visual labelling can clarify the origins of synthetic media. As generative AI becomes increasingly integrated into daily life, we’ve built a system that communicates three key factors:

  • Origin: Distinguishing between human-captured, AI-modified, or entirely AI-generated content.
  • Transformation: Indicating the scale of change, from minor edits to full replacements.
  • Agency: Highlighting the level of human oversight versus automated production.
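The three factors above can be pictured as a small data model. This is a hypothetical sketch for illustration only — the names and levels are assumptions, not the project's actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    # Where the content came from (categories assumed from the project description)
    HUMAN_CAPTURED = "human-captured"
    AI_MODIFIED = "ai-modified"
    AI_GENERATED = "ai-generated"

class Transformation(Enum):
    # Scale of change, from minor edits to full replacement
    MINOR_EDIT = 1
    SUBSTANTIAL_EDIT = 2
    FULL_REPLACEMENT = 3

class Agency(Enum):
    # Level of human oversight versus automation
    HUMAN_SUPERVISED = "human-supervised"
    HUMAN_REVIEWED = "human-reviewed"
    FULLY_AUTOMATED = "fully-automated"

@dataclass(frozen=True)
class MediaLabel:
    """One label combines all three factors for a piece of media."""
    origin: Origin
    transformation: Transformation
    agency: Agency

# Example: a photo with a minor AI retouch, checked by a human
label = MediaLabel(Origin.AI_MODIFIED,
                   Transformation.MINOR_EDIT,
                   Agency.HUMAN_SUPERVISED)
```

In a model like this, each factor varies independently, which is what lets the label express more than a binary "AI vs. human" tag.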

Context and Relevance

Current frameworks for AI disclosure are fragmented and inconsistent. This unpredictability creates a transparency gap, making it difficult for users to judge the authenticity of what they see. Our project addresses this by creating a standardised visual language that bridges the gap between production and interpretation.

Proposed System

We have developed a tiered labelling system that adapts its form based on the content's complexity. By using shape and structure to represent data, the labels provide immediate information at a glance for casual users, while offering deeper technical layers for those seeking more detail. This flexible system is designed to function across various platforms, from news media to social feeds.
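The tiered behaviour described above could be sketched as a simple selection rule — a hypothetical illustration with assumed tier names and levels, not the system's actual logic:

```python
def label_tier(transformation_level: int, expanded: bool) -> str:
    """Pick how much label detail to render (illustrative sketch).

    transformation_level: 1 = minor edit, 2 = substantial edit,
                          3 = full replacement (assumed scale).
    expanded: whether the user has asked for the deeper technical layer.
    """
    if expanded:
        return "technical"  # full breakdown for users seeking detail
    if transformation_level == 1:
        return "glance"     # compact mark for lightly edited content
    return "summary"        # richer at-a-glance form for heavier changes
```

The point of the rule is that the same underlying data can surface at different depths depending on context and the viewer's interest.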

Process and Methods

The system is built on qualitative research and interviews with stakeholders from NRK, FINN, Opera, Bakken & Bæck, SIFO, and others. Through workshops and user testing, we explored how different audiences perceive synthetic media. Developed with guidance from Comte Bureau, the project builds upon their proof-of-concept project KI-merket (kimerket.no).

Outcome

The project demonstrates that form-based labelling moves beyond binary "AI vs. Human" tags to provide a nuanced map of digital intent. This system provides a scalable framework that enables users to better evaluate the credibility of the content they consume. By shifting from static warnings to an adaptive visual language, this project proposes a standard for content transparency in an AI-driven information landscape.

Collaborators: Bjørn Ravlo-Leira and Herman Freng Billett (Comte Bureau)