User Information Needs vs. User Effects
Mapping what users want – and what gets in the way

We learned from user research that news reports often overlook the information people actually want to know. The list that follows was developed during the modular journalism project, based on research conducted by Il Sole 24 Ore, Deutsche Welle, the Maharat Foundation, and Clwstwr (Shirish Kulkarni).
It's a working list–not definitive or particularly original–but its application in modular frameworks and agentic AI pipelines might be.
Many of these information needs align closely with frameworks of transparent and inclusive journalism. The Trust Project, for example, highlights several of them in its Trust Indicators, especially Journalist Expertise and Methods.
'Information needs' is also a term used by Outlier Media, with its focus on addressing information gaps in the community. Although we use the term in a different context – focusing on the components of the news discourse – the goal of public interest journalism is similar, as is that of solutions journalism.
The list of user effects might also be described as bad habits in reporting–or, in some cases, as markers of unethical behavior. We chose to define them within the logic of modularity, to flag portions of text that offer users no functional value, even when they appear deliberately crafted to mislead or manipulate. Our focus isn't a philosophical beef over journalistic ethics – we're here to identify and remove effects that undermine user value.
This is, of course, just a partial list–and one that will grow and gain specificity over time.
User Information Needs
User research has shown that news reports sometimes overlook the very information people want addressed. Building on the original taxonomy from the Modular Journalism 1.0 project, we've added a second batch of categories to improve both categorization and entity recognition.
Our user information needs are organized into that taxonomy, and we're currently grading each need on a relevance scale. This evaluation is based on how well AI agents recognize patterns across diverse news artifacts, and it will be incorporated into their training process.
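As a minimal sketch, a graded information need could be represented as a small record like the one below. The field names, the example category and label, and the 1–5 relevance scale are all illustrative assumptions, not the project's actual schema:

```python
from dataclasses import dataclass

@dataclass
class InformationNeed:
    category: str   # top-level taxonomy category (hypothetical)
    label: str      # the specific need a reader wants addressed
    relevance: int  # assumed grading scale: 1 (low) to 5 (high)

# Invented example entry, for illustration only
need = InformationNeed(
    category="Context",
    label="Why is this happening now?",
    relevance=4,
)
```

A structure like this keeps the relevance grade attached to each need, so it can be read directly at prompt-construction time rather than looked up separately.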
User Effects
User Effects flag portions of text that offer users no functional value; detecting them is an essential component of the generation pipeline. Effects are grouped by rhetorical category, and each category is characterized by specific cues.
Each user effect is paired with a counterpoint–an editorial antidote rooted in the user's informational need. These antidotes help guide rewrites that restore the story's utility and transparency, particularly in modular content workflows.
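The grouping described above – rhetorical categories, each with its cues and an antidote rooted in an information need – might be sketched as a simple mapping. Every entry here is an invented example, not a real item from the taxonomy:

```python
# Hypothetical effect taxonomy: category -> cues and editorial antidote.
# All names and strings are illustrative, not the project's actual entries.
user_effects = {
    "exaggeration": {
        "cues": ["superlatives without evidence", "unattributed scale claims"],
        "antidote": "Quantify the claim with a verifiable source.",
    },
    "vagueness": {
        "cues": ["unnamed officials", "'some say' constructions"],
        "antidote": "Name the source, or state why it cannot be named.",
    },
}

def antidote_for(category: str) -> str:
    """Return the editorial antidote for a flagged rhetorical category."""
    return user_effects[category]["antidote"]
```

Pairing each effect with its antidote in one structure means a rewrite step can go straight from a flagged span to the guidance for repairing it.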
The taxonomy should be considered fluid, but its structure – the clarity of the categories and their weighting – is plainly important for prompt design.
Agents connect to both the needs and effects endpoints via the Modular API.
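Since the Modular API itself is not documented here, the sketch below only shows the shape such a client might take. The base URL, path scheme, and version parameter are placeholders, not the real API:

```python
from urllib.request import Request

# Placeholder host - NOT the actual Modular API endpoint
BASE_URL = "https://api.example.org/modular"

def build_request(endpoint: str, version: str = "2.1") -> Request:
    """Build a GET request for a taxonomy endpoint ('needs' or 'effects').

    The path layout and the `version` query parameter are assumptions
    made for illustration.
    """
    if endpoint not in ("needs", "effects"):
        raise ValueError(f"unknown endpoint: {endpoint}")
    return Request(
        f"{BASE_URL}/{endpoint}?version={version}",
        headers={"Accept": "application/json"},
    )

req = build_request("effects")
```

Constructing the request without sending it keeps the example self-contained; a real agent would pass it to `urllib.request.urlopen` (or an HTTP client of choice) and parse the JSON response.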
This taxonomy is versioned 2.1 and will evolve based on research and editorial testing.