🪳🪳🪳

Welcome to 2046: The web is gone, articles are fossils, and journalism lives inside APIs

A speculative dispatch from the future, where journalism is not made for eyes and clicks, agents are the real audience, and the hardest editorial decision is no longer what to say but what counts as verified enough to enter the system

This is how Nano Banana reads the story. So I guess we are in really good hands.
Noah G. Bozzano

Twenty years ago, if you’d told me I’d end up a journalist like my father, I would’ve laughed in your face. At fourteen, in my freshman year in high school, I barely knew what a journalist did or where they went—mostly because my father rarely left the house, and the only news that broke there was our dog, Artemis Romanov von Schnitzel, trading socks for treats. I had theories on why he worked as a journalist, though: to pay the outrageous rent on our Brooklyn apartment and to get my brother and me on a flight to see our Nonna in Italy every summer. If you’d asked him, he'd grin and say it was the only thing he knew how to do.

It did not seem like a great job, to be honest. He was talking about people being laid off left and right, and the money got tighter every year for us too. I remember my father sitting on the couch with a laptop propped on his crossed legs practically every minute of every day. He was there at eleven when I said goodnight, and he was there again at six in the morning—in the same position—when I woke up for school. Whether he was already up or still up depended on the day; neither was good for him, and I could tell he was lying when I asked.

"I just got up," he'd say in the morning.

"I'm going to sleep in a minute," he'd say at night.

I shrugged. I had very little interest in the news back then; it reached me either by accident or because someone nagged me with it. Most of the time, it was my father who took on that task—usually at the dinner table over spaghetti and ragù.

"A plane from Montreal crashed into a fire truck on the runway at LaGuardia last night—did you hear?" he’d ask, curious to see if we knew. Other times, he’d volunteer something controversial: "I read in the Times that the CDC dropped flu, Covid, and hepatitis A and B from its vaccine recommendations for children."

He hoped we’d react with indignation, but to me, it sounded like outstanding news. Even though I was already six feet tall, I was certainly not fond of getting shots during yearly well-visits at the pediatrician. Less was good, wasn't it?

Still, for all his talk, my father did not actually report on crashed planes or healthcare policies—at least not from his couch. Little did I know he was actually working on killing the news in those days. Or more precisely, he worked under the assumption that news was as good as dead and that someone had to do something about it before the walking corpse stopped moving.

In 2046, there's no trace of that corpse — only journalism fossils. The last time I saw the web the way I remembered it as a kid, on a live screen, was in the mid-2030s. Then it was gone. This wasn't because someone decided to pull the plug; there was simply no longer a way to reach it. New phones and laptops dropped support in 2032 for the visual interface model optimized for human eyeballs. Contact lenses, glasses, and the first generations of brain implants never supported the visual web in the first place.

"Good riddance," my father had said, texting me with his brain implant. "Look how white my hair has gotten spending my entire life on the web." He'd sent a photo from somewhere in Provence, drinking Pastis under the sycamores trees in some dusty square where people played pétanque.

What he said had some merit. The web was, after all, a monstrous absurdity. Information was organized with a different template for each source. Seriously, every single source had its own arbitrary taxonomy, its own scale of priorities, its own agenda, and its own look. Although UX trends and frameworks had pushed toward some degree of standardization, and even with a layer of structured data squeezed into the source code, it was still as messy as you'd expect.

The result was that endless archives of human knowledge were almost impossible to navigate. This was true for humans, but it was especially hard for agents. Agents were the primary users of the immense volume of information on the web, and to decode it, they had to simulate human eyes and hands—relying on visual CSS hierarchies, complex DOM structures, and manual click-through navigation. The paradox was that by the end of the 2020s, the bulk of the content on the web was produced by agents and consumed by agents, but it was still packaged in a human-centric format.

The solution was as obvious as it was radical: stop formatting content for humans and start encoding it only for machines. By streamlining how agents talked to each other, we finally cut out the clunky middle-man. For the human users at the end of the chain, it was a win — they finally got high-quality content tailored to their specific needs, delivered through better, faster, and infinitely cheaper agents. Quality matters for all kinds of content, but it truly made a difference for journalism, an industry that had ignored what users actually wanted for decades.

And with the technology swap, our work changed too.

Journalists before 2030 poured an incredible amount of time and energy into creating content by hand, writing articles word by word. They called themselves "storytellers." Today, in place of stories, we have high-fidelity reporting packages. We work as field data validators.

JSON schemas, IPTC ninjs, C2PA manifests — that's how we embed our metadata for the WebMCP. In the visual web, it all relied on basic HTML meta tags and search engine optimization keywords — fickle, unstable signals buried under layers of interface clutter. It's strange to think they were trying to reach humans one screen at a time, when agents can just pull what they need through an API in milliseconds.
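For the curious, here is a stripped-down sketch of what one of those payloads might look like, loosely in the spirit of IPTC ninjs. The field names beyond the obvious ones, the provenance block, and the whole idea of a WebMCP envelope are illustrative guesses on my part, not the real schema.

```typescript
// Illustrative, simplified metadata record in the spirit of IPTC ninjs.
// Fields beyond the core ninjs-style ones (uri, type, versioncreated,
// headline) are assumptions made for this sketch.
interface NewsItemMetadata {
  uri: string;              // stable identifier for the item
  type: "text" | "picture" | "video";
  versioncreated: string;   // ISO 8601 timestamp of this version
  headline: string;
  language: string;
  provenance: {
    c2paManifest: string;   // pointer to a C2PA-style manifest (assumed field)
    signedBy: string;       // signing authority at institutional release
  };
}

const example: NewsItemMetadata = {
  uri: "urn:example:newsroom:2046:item:0001",
  type: "text",
  versioncreated: "2046-03-14T09:30:00Z",
  headline: "City council approves flood-barrier bond",
  language: "en",
  provenance: {
    c2paManifest: "urn:example:c2pa:manifest:0001",
    signedBy: "example-newsroom-release-key",
  },
};
```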

Newsrooms held meetings and made decisions based on experience and gut feeling, even for commodity news. Today, a dashboard shows us exactly which neighborhoods and communities are underreported.

The newsroom I run in 2046 works on reporting packages that never reach the users directly but are exposed as part of an API. Reporters still go out into the world, still knock on doors, still sit through hearings and wait outside hospitals and city offices and courtrooms, but they no longer come back to “write up” what they found. They come back with verified units: observations, quotes, documents, timestamps, coordinates, images, and the chain of evidence that makes each of those things usable by a machine without making them any less human in origin. By the time something reaches our publishing layer, it has already been structured, checked, and anchored to the event it belongs to.
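If you forced me to put that in code, a reporting package might be typed roughly like this. The unit kinds come straight from the list above; everything else about the shape is my own illustration, not a production schema.

```typescript
// Hypothetical shape of a verified unit and the reporting package that
// carries it. The structure is illustrative, not a published standard.
type UnitKind =
  | "observation"
  | "quote"
  | "document"
  | "image"
  | "timestamp"
  | "coordinates";

interface VerifiedUnit {
  id: string;
  kind: UnitKind;
  content: string;            // the unit itself, or a pointer to the asset
  capturedAt: string;         // ISO 8601 timestamp of capture
  capturedBy: string;         // reporter or device credential
  evidenceChain: string[];    // ordered references to supporting evidence
}

interface ReportingPackage {
  eventId: string;            // the event the package is anchored to
  units: VerifiedUnit[];
  reviewedBy: string[];       // human checks before release
  releaseSignature: string;   // institutional signature at publication
}
```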

Alas, there are fewer journalists around (roughly 20% of what publishers needed twenty years ago), but they are all highly specialized members of the prestige/investigative corps, handsomely paid professionals who crack the tough nuts. You don't need a human for commodity coverage. Agents can sit through city council meetings, ingest the agenda packets, transcribe court hearings, compare them to prior ordinances, tag votes, and emit clean briefs. Guardian agents (the ones that live in phones, contact lenses, bathroom mirrors, etc.) can query local election results and verified municipal databases directly, while declarative APIs expose basic factual updates without human handling.

So most of the newsroom floor is quiet now, but not because less is happening. The loudest part of production has moved into validation. Our internal dashboards show us where we have real coverage and where we only think we do. We can see, almost in real time, which districts, agencies, neighborhoods, and communities are underreported, which beats are saturated with commodity updates, and where a missing reporter on the ground would create the biggest hole in the public record. In my father’s day, editors argued from memory and instinct. I still use both, but now I can also see the map of what we know and what we don’t.

As managing editor, my job is no longer to polish copy into elegance. It is to set the epistemic rules of the newsroom. I decide what counts as verified enough to enter the system, which claims require a second human check before release, which events deserve a narrative treatment instead of a machine briefing, and where our reporting resources go next. I sign off on standards, not adjectives. I arbitrate disputes between speed and certainty. I review the exceptions the agents cannot resolve by themselves. And when everything works, what we publish is not just a story. It is a durable, queryable piece of public knowledge. And yes, I get paid a lot of money.

How do we make sure reporting packages are not tampered with or corrupted?

Security is not a department but the condition of publication. Every asset enters our system signed at origin. Photographs are hardware-sealed. A field note, transcript, or verified fact is stamped the moment it is logged. Every edit appends itself to the provenance chain. Reporting packages are signed again at institutional release. By the time a reporting package leaves the newsroom, it carries the full memory of its own making.
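A minimal sketch of what "every edit appends itself to the provenance chain" can mean in practice, assuming a simple hash-linked log. The entry fields and the stubbed-out signing callback are illustrative; a real pipeline would rely on newsroom keys and hardware attestation.

```typescript
import { createHash } from "node:crypto";

// A hash-linked provenance chain: each edit records a digest of the entry
// before it, so any later alteration breaks the chain. The signing step is
// left as a callback; it stands in for a real cryptographic signature.
interface ProvenanceEntry {
  action: string;        // e.g. "captured", "transcribed", "edited", "released"
  actor: string;
  timestamp: string;
  prevDigest: string;    // SHA-256 of the previous entry ("" for the first)
  signature: string;
}

function digest(entry: ProvenanceEntry): string {
  return createHash("sha256").update(JSON.stringify(entry)).digest("hex");
}

function appendEntry(
  chain: ProvenanceEntry[],
  action: string,
  actor: string,
  sign: (payload: string) => string,
): ProvenanceEntry[] {
  const prev = chain[chain.length - 1];
  const entry: ProvenanceEntry = {
    action,
    actor,
    timestamp: new Date().toISOString(),
    prevDigest: prev ? digest(prev) : "",
    signature: "",
  };
  // Sign the entry contents before it joins the chain.
  entry.signature = sign(JSON.stringify({ ...entry, signature: "" }));
  return [...chain, entry];
}
```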

The thing is, humans almost never see our work directly. The guardian agents see it, and guardian agents are a paranoid bunch. They don't care how polished something looks or sounds, and our name alone means nothing to them. They verify the credential, inspect the edit chain, check the signature, test the permission boundary, and compare the package against the wider verification network before they allow it into a person’s information stream. If the seal is broken or the signature unknown, the claim may have been tampered with. Any failed check kills it.
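In guardian terms, the gate is conjunctive: every check has to pass, and a single failure is enough to drop the package. A toy sketch, with each check reduced to a boolean for brevity; the real verification behind each flag is obviously far more involved.

```typescript
// Hypothetical guardian-side gate. Each field stands in for a full
// credential, signature, chain, permission, or network verification step.
interface IncomingPackage {
  credential: string;
  signatureValid: boolean;
  chainIntact: boolean;
  withinPermissions: boolean;
  corroboratedByNetwork: boolean;
}

function admit(pkg: IncomingPackage, trustedCredentials: Set<string>): boolean {
  // One broken link anywhere and the package never reaches the person.
  return (
    trustedCredentials.has(pkg.credential) &&
    pkg.signatureValid &&
    pkg.chainIntact &&
    pkg.withinPermissions &&
    pkg.corroboratedByNetwork
  );
}
```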

There was a brief period in the late 2020s when Google downranked news produced by AI. The move was framed as a defense of human journalism against automation. It wasn’t. It was a provenance problem: the agents of that era — still primitive — could no longer reliably distinguish original reporting from its clones. The solution was not to suppress the clones, but to stop parsing the web altogether. The web was bypassed. APIs became the source.

That is why our publishing stack is built like a hostile-environment system. Access is entirely permission-based. The front end only exposes secure actions, while every back-end request requires strict authentication, role checks, and full audit trails. Semantic verification sits on top of provenance, so that even a perfectly signed lie can still be challenged by the network. The old web asked users to judge credibility after content was published. We do our best to block compromised information before it can spread.

The trick, in the end, was learning to encode journalism so that it could survive the trip. In the visual web, a great deal of value was trapped inside the article container. Once the text was scraped, summarized, or quoted out of context, the journalist’s expertise disappeared with it. Who verified it? What kind of statement was it? Was it a witnessed fact or an interpretation?

How we preserve journalism in reporting packages

So now we encode in layers. The reporting package carries the event and the evidence. Inside it, each meaningful statement gets its own identifier, version, status, source record, review history, and licensing terms. We do not ask an agent to guess whether a sentence is reporting, reaction, context, evaluation, or forecast. Instead we label each with precision. That is what makes decoding possible. A guardian agent can query us directly, retrieve the exact layer it needs, and still preserve the relationship between the fact, the event, the source, the rights, and the newsroom that stands behind all of it.

That is also how content keeps its value in an agent-based system. In the old economy, value leaked whenever journalism was flattened into interchangeable text. In the new one, value survives because provenance, review, and licensing remain attached at the smallest useful unit. Our work is still copied, summarized, translated, and recombined, but it is no longer orphaned when it moves. The source, the update chain, and the licensing terms all stay attached. And because agents can trace a fact back to its origin, original reporting finally has an edge over the copies that chipped away at its value for years.

What does the journalism API look like?

Whenever I mention Sannuta Raghu's name, my father says we should put her on t-shirts like David Bowie. "If she isn't a rockstar, then who is?"

She understood early that traditional journalism was too exposed to survive the agentic storm, and she showed how to protect it at the sentence level. The sentence carries the words, yes, but journalism also carries an invisible wrapper of judgment and context: who said it, who checked it, whether it is an observed fact or sensemaking, what role it plays in the story, which event it belongs to, what evidence supports it, and how it may be reused. Strip that away and you may still have news-sounding word salad on a screen, but not journalism.

Sannuta's News Atom, which is the basis for the liquid journalism format we use today, solved that by treating the sentence as the smallest portable unit of journalistic value. Each atom can carry a knowledge frame, a semantic frame, an event frame, topical links, origin data, review history, and license terms. The atom keeps the sentence from becoming meaningless once it is detached from its article.
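One possible typing of an atom, pulling together the frames above and the role labels I mentioned earlier (reporting, reaction, context, evaluation, forecast). The exact shapes of the frames and the status values are assumptions on my part, not the canonical format.

```typescript
// Illustrative sketch of a News Atom: the sentence plus the frames, links,
// provenance, review history, and licensing it travels with.
type AtomRole = "reporting" | "reaction" | "context" | "evaluation" | "forecast";

interface NewsAtom {
  id: string;
  version: number;
  status: "draft" | "verified" | "superseded" | "retracted";
  sentence: string;
  role: AtomRole;                          // what the sentence does journalistically
  knowledgeFrame: Record<string, string>;  // entities and claims (assumed shape)
  semanticFrame: Record<string, string>;   // who did what to whom (assumed shape)
  eventId: string;                         // the event frame it belongs to
  topics: string[];                        // topical links
  origin: { reporter: string; capturedAt: string };
  reviewHistory: { reviewer: string; at: string; outcome: string }[];
  license: string;
}
```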

In practical terms, it means that our APIs return encoded knowledge, not raw content. A guardian agent can ask for every verified action tied to a municipal vote, every direct quote in a corruption inquiry, every superseded claim in a fast-moving disaster, every contextual atom necessary to explain a court ruling to a child or a pension fund or an emergency planner. And because the atom carries the journalistic role of the sentence along with its provenance and rights, the answer can be transformed without being flattened.
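A hypothetical query against such an API, reusing the NewsAtom shape from the sketch above: ask for every verified reporting atom tied to one municipal vote, skipping superseded claims. The endpoint, parameter names, and response format are invented for illustration.

```typescript
// Fetch all verified "reporting" atoms anchored to a given event.
// The endpoint and query parameters are hypothetical.
async function fetchVerifiedAtoms(eventId: string): Promise<NewsAtom[]> {
  const params = new URLSearchParams({
    event: eventId,
    role: "reporting",
    status: "verified",
  });
  const res = await fetch(`https://api.example-newsroom.org/v1/atoms?${params}`);
  if (!res.ok) throw new Error(`query failed: ${res.status}`);
  return (await res.json()) as NewsAtom[];
}
```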

The tales of the dead and the living

In the United States, the old national newspapers split by function. The New York Times became a knowledge utility for the professional classes, the Wall Street Journal hardened into a data-driven tool for finance and regulation, and the Washington Post settled into a federal briefing for people in and around government. The Los Angeles Times held on as a West Coast briefing on fire, water, migration, entertainment and wealth. ProPublica stayed on to produce investigative reporting packages. The old metro papers — the Chicago Tribune, the Denver Post, the San Jose Mercury News, and the Hartford Courant — survived as brand names, with the reporting reduced to thin regional feeds and local records. The digital outlets that once relied on search and social media were flattened into shopping guides and lifestyle sludge.

In Canada, the change was quieter. The Globe and Mail endured as a national trust layer for money, policy, and institutional Canada. The Toronto Star fought longest for the old metropolitan ideal of journalism, then emerged smaller and built for agents. La Presse adapted early and is still a strong brand. Le Devoir survived by staying dense enough that no cheap summary could replace it. The real damage was local: smaller city papers and weeklies either became civic data services or vanished.

In Europe, the old brands did not vanish so much as harden into specific roles.

In Italy, Corriere della Sera survived by becoming an explainer for the northern professional class, while La Repubblica shifted into an intelligence service for urban liberals. Il Sole 24 Ore escaped almost unscathed by turning itself into a financial tool; by the late 2030s, it was treated less like a newspaper and more like a market instrument.

In France, Le Monde stopped being a paper in the traditional sense and became a national verification engine. Le Figaro retreated into a conservative briefing for the elite, and Le Parisien was shaved down to the bare beats of the city—transport delays, weather, crime and soccer.

In Germany, Der Spiegel made the transition — by 2040 it was an investigative archive with an agentic interface. Bild survived as an outrage feed, while the great dailies — Süddeutsche Zeitung, FAZ and Die Welt — split between executive briefings for humans and verified data for machines.

In Spain, El País crossed over into a global brand for democratic news, exporting political and cultural context worldwide. El Mundo got leaner and more oppositional, focusing on politics, institutional conflict, court fights, and ideological edge. The regional press shrank to producing bulletins on wildfires, soccer, drought and municipal scandal.

In Britain, it was a bloodbath. The Guardian became a moral guide for liberal professionals; the Telegraph, a polished briefing for wealth and tradition; The Times, an executive certainty layer; and the Mail ended up as a refinery for celebrity gossip and outrage. The old regional titles from Manchester to Belfast to London collapsed into ghost brands attached to traffic, court listings, shopping tips and weather warnings.

Are we better off than 20 years ago?

It's hard to know whether he's serious or joking, but my father thinks that yes, we are better off, and it's not even close. It was heartbreaking to see how the work of extraordinary journalists got lost in the noise, rarely reached its readers, and rarely had any impact. And publishers had done such a poor service for their audience, ignoring user needs throughout the 2020s, that the agentic transformation could only improve things.

There's a method to this madness

While fictional and done purely for entertainment, this mock article has a rationale. The projections in this analysis follow a scenario model based on two variables: direct audience strength (how well an outlet reaches people without help) and content substitutability (how easily AI can replicate the work). Ten representative outlets were used to test a "post-search" environment where agents replace traditional web traffic. Revenue and cost assumptions were based on the late-2020s shift toward zero-click consumption and newsroom automation.

The resulting groups are based on where each segment sits on those axes:

  • High-direct, low-substitutability: Global brands and niche leaders who kept their pricing power.
  • Mid-direct, mixed-substitutability: Digital natives who survived by cutting costs and staying close to their readers.
  • Low-direct, high-substitutability: SEO-driven sites and general regional papers that lost both traffic and revenue.
  • Nonprofit and investigative: Organizations that stayed stable due to donor support and unique data.

These aren't specific predictions for individual companies, but an outline of the structural positions left in a system where the biggest challenge is no longer making the news, but getting it past the agents.
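Expressed as code, the model is nothing more than a placement on two axes. The thresholds below are arbitrary, and the nonprofit flag is a shortcut for donor-supported organizations; none of this is a real scoring system.

```typescript
// Toy rendering of the two-axis model: score an outlet on direct audience
// strength and content substitutability (both 0..1, thresholds assumed)
// and map it to one of the structural positions described above.
type Segment =
  | "high-direct, low-substitutability"
  | "mid-direct, mixed-substitutability"
  | "low-direct, high-substitutability"
  | "nonprofit / investigative";

function classify(
  directAudience: number,
  substitutability: number,
  nonprofit = false,
): Segment {
  if (nonprofit) return "nonprofit / investigative";
  if (directAudience >= 0.7 && substitutability <= 0.3)
    return "high-direct, low-substitutability";
  if (directAudience <= 0.3 && substitutability >= 0.7)
    return "low-direct, high-substitutability";
  return "mid-direct, mixed-substitutability";
}
```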

Everything else is for fun — for those who consider dystopia fun, that is.