Information Diets
Aemula Writer Spotlight - 8.28.25
You are what you eat. You only know what you know.
The information we consume determines the thoughts we have. It carries over into every aspect of our lives, from the decisions we make to the beliefs we hold and the relationships we build. After steadily evolving alongside social norms for hundreds of thousands of years, we have sprinted into uncharted territory with new methods of spreading information at the speed of light. Even as we are still analyzing the societal effects of social media over the past decade, we now face a new method of communication: large language models.
While the technology is objectively powerful and should be leveraged for all its benefits, it is important to consider the moment at which LLMs are rising to prominence.
We are no longer trusting of our traditional sources of information. Trust in institutional media publications is at an all-time low. Trust in one another remains near record lows. Our preferences have shifted to forming direct relationships with trusted individuals to curate the information we receive, as seen with the prevalence of parasocial relationships formed between content creators and their audiences.
There is unmet demand for an individualized, trustworthy connection to our sources of information. LLMs slot perfectly into this void. We can hold private conversations with them directly, viewing them as an expert source on all topics of interest. However, they may not always be as expert a source as they seem.
Notably, we should be hesitant to point to their early lack of credibility as a flaw. Those who remember the early days of Wikipedia know that it faced similar warnings of inaccuracy before becoming the trusted source it is today. In fact, it has become such a critical source of information that it is now under investigation by the House Committee on Oversight and Government Reform.
But LLMs, contrary to the marketing efforts of artificial intelligence labs, are not inherently intelligent. They are trained on human-generated data, then rely on referencing human-generated sources to answer prompts in real time. When we look at the sources of their information, cracks begin to form in their omniscient, superintelligent facade.
A recent Semrush study of 150,000 AI citations found that the most frequently cited source for LLMs, by far, is Reddit.
But this revelation is no secret. In an attempt to address this problem, LLM researchers are paying significant sums for access to higher-quality information produced by traditional newsrooms:
OpenAI is paying The Wall Street Journal $250m over 5 years for access to licensed content
OpenAI is paying Axel Springer $30m for 3 years of access to licensed content
Amazon is paying The New York Times $20-25m per year for 5 years of access to licensed content
Yet if you are here, you understand the downsides of the perverse incentive structures that underlie our current media ecosystem. Audience capture, editorialization, and consolidated control have filtered information through fewer voices, reducing nuance and access to diverse perspectives. Institutional publications have lost our trust for a reason. Using this information as the basis for LLM training will only amplify these effects, much as the internet previously amplified them to societal scale.
Fortunately, Aemula’s decentralized newsroom structure is built to serve as the foundation for the informational environment in a post-AI world. Individual writers will be able to selectively license their content directly to LLMs, which are able to seamlessly incorporate credible, human-generated, community-moderated information into their responses in real time. In this world, LLMs amplify the positive effects of our diverse, incentive-aligned information environment, expanding everyone’s worldviews and preserving human agency.
This week, we highlight writers speaking about the influence of LLMs on our information diets and their effects on our beliefs and habits. We encourage you to explore their work and consider subscribing directly!
The Etymology Nerd
Written by Adam Aleksic, a Harvard graduate in linguistics, content creator, contributing writer for The Washington Post, and author of Algospeak: How Social Media is Transforming the Future of Language, recently featured in our spotlight, “Curation”.
“The reason we’re talking more like Trump and ChatGPT is because our language is being bottlenecked through algorithms and large language models, which do not represent words as they naturally appear: they instead give us flattened simulacra of language. Everything that passes through the algorithm has to generate engagement; everything that passes through LLMs is similarly compressed through biased training data and reinforcement learning. As we interact with these flat versions of language, we circularly begin to incorporate them into our evaluations of how we can speak normally.”
After Babel
Written by Jonathan Haidt, a social psychologist at NYU’s Stern School of Business and author of multiple books including The Coddling of the American Mind and The Anxious Generation: How the Great Rewiring of Childhood is Causing an Epidemic of Mental Illness, along with editor and researcher Zach Rausch, featured last week in our spotlight, “break;”.
The featured post is a guest post written by Sherry Turkle, a Professor at MIT studying the emotional connections between people and technology, and author of books including Alone Together and Reclaiming Conversation.
“Chatbots know how to deliver pleasing conversations, but they work, in fact, by predicting the appropriate next words in a sentence. All they can deliver is a performance of empathy. Pretend empathy. When you tell your troubles to a machine, it has no stake in the conversation. It can’t feel pain. It has no life in which you play a part. It takes no risk; it has no commitment. When you turn away from an exchange, the chatbot doesn’t care if you cook dinner or commit suicide. Without a body or a human life cycle, it has no standing to talk about loss, love, passion, or joy.”
Astral Codex Ten
Written by Scott Alexander, whose writing first came to prominence through the blog Slate Star Codex, where Meditations on Moloch became an often referenced piece in Ethereum subculture, recently featured in our spotlight, “Controversy”.
“if a source which should be official starts acting in unofficial ways, it can take people a while to catch on. And I think some people - God help them - treat AI as the sort of thing which should be official. Science fiction tells us that AIs are smarter than us - or, if not smarter, at least perfectly rational computer beings who dwell in a world of mathematical precision. And ChatGPT is produced by OpenAI, a $300 billion company run by Silicon Valley wunderkind Sam Altman. If your drinking buddy says you’re a genius, you know he’s probably putting you on. If the perfectly rational machine spirit trained in a city-sized data center by the world’s most cutting-edge company says you’re a genius . . . maybe you’re a genius?”
Are you writing on Substack? You can easily set up automatic cross-posting with Aemula to instantly:
Increase your earnings
Expand your audience
Verifiably own your work
Plus, you will have opportunities to access community resources and grants to support the content you want to create!
Easily link your Substack to your Aemula account using this link or send a quick email to writers@aemula.com to get started!
Cross-posting comes with no costs, no obligations, and you can stop at any time.
The Aemula platform is live at aemula.com! Claim your 1-month free trial today! Learn more here!
If you want to support any of the writers we spotlight in our Substack, we highly encourage you to subscribe to their individual publications.
If you want to support independent journalism more broadly, start a subscription on Aemula!
To stay up to speed on platform updates and the writers we are adding to our community, follow us on X or subscribe to our Substack!
Any writers you want to see featured here? Send them our way! We are always searching for great new publications.