Glenn Beck is right to sound the alarm: a new wave of research and commentary shows that the cheap, viral junk we gorge on online doesn’t just rot our souls — it corrodes the intellect of the machines we build and the citizens we raise. Beck lays out a stark picture of “brain rot” for both people and large language models, arguing that endless scrolling and feeding AI with low-quality, attention-grabbing content produces measurable declines in reasoning, memory, and even moral tone.
Hard science is catching up with the intuition. Recent machine‑learning work shows that when models are trained on successive generations of synthetic or low-quality content that swamps out real, curated human data, the models can suffer a kind of collapse: reasoning scores fall, long-context abilities degrade, and performance drifts in worrying ways. This is not just a metaphor: rigorous experiments documented at major conferences find that a “replace” workflow, in which each new model is trained only on synthetic, viral data while the original human corpus is discarded, produces degradation that can be persistent.
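For the technically minded, the failure mode can be seen in a toy sketch of that “replace” workflow, with a simple Gaussian fit standing in for a language model. The setup, sample sizes, and seed below are illustrative assumptions of ours, not the cited papers’ experiments:

    # Toy sketch of the "replace" workflow: each generation is "trained"
    # (here, a Gaussian fit) only on samples drawn from the previous
    # generation's model, and the original human data is discarded.
    # This is an illustrative assumption, not the cited papers' setup.
    import numpy as np

    rng = np.random.default_rng(0)
    human_data = rng.normal(0.0, 1.0, size=20)       # scarce real data
    mu, sigma = human_data.mean(), human_data.std()  # generation 0 "model"

    for gen in range(1, 201):
        synthetic = rng.normal(mu, sigma, size=20)   # model output only
        mu, sigma = synthetic.mean(), synthetic.std()
        if gen % 50 == 0:
            print(f"generation {gen:3d}: std = {sigma:.4f}")

    # The fitted spread typically collapses toward zero: each refit loses
    # a little of the distribution's tails, and with no real data
    # re-entering the pool the loss compounds across generations.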
Researchers are also warning that the poisonous material becomes wired into models in ways that are hard to excise later. Work presented at top conferences describes how memorized, low-quality sequences can become mechanistically entangled with a model’s general abilities, making simple “detox” retraining insufficient to restore baseline capacities. In plain English: once you let the swamp water into the machine, it’s much harder to get the swamp out.
Meanwhile, the cultural side of this crisis is undeniable: Americans are spending hours a day scrolling through feed-first content that prizes clicks over truth. The global average daily time on social media is well over two hours, with reports in 2025 putting the figure around 141 minutes a day. That steady diet of shallow, viral material is exactly the junk load now seeping into AI training pools. If we keep substituting hard thought with dopamine hits, both our children and our algorithms pay the price.
This is a political and moral problem as much as a technical one. Big Tech’s business model rewards engagement above all else, enriching executives and shareholders while hollowing out civic virtue and disciplined attention among everyday Americans. Conservatives should demand better: transparency about what data trains our public systems, limits on flooding the web with low-quality, AI‑generated material, and support for institutions that cultivate depth, from schools that teach critical thinking to libraries that preserve real books and communities that prize conversation over consumption.
Policymakers can act now to avoid a worse outcome. The same research that documents collapse also points to solutions: models remain stable when training workflows keep real, high-quality human data in the mix rather than replacing it wholesale. That means sensible rules to protect original content, stronger labeling of AI‑generated material, and incentives for platforms to promote verified, substantive sources over clickbait. These are practical, pro-freedom measures that protect speech by preserving meaning.
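The mitigation that research points to can be sketched in the same toy setting: keep the original human data in every generation’s training mix instead of replacing it. The even real/synthetic split below is our own assumption, not a ratio from the cited work:

    # Same toy setup, "accumulate" style: the real sample never leaves
    # the training pool. The 50/50 real/synthetic mix is an assumption.
    import numpy as np

    rng = np.random.default_rng(0)
    human_data = rng.normal(0.0, 1.0, size=20)
    mu, sigma = human_data.mean(), human_data.std()

    for gen in range(1, 201):
        synthetic = rng.normal(mu, sigma, size=20)
        mixed = np.concatenate([human_data, synthetic])  # real data stays in
        mu, sigma = mixed.mean(), mixed.std()
        if gen % 50 == 0:
            print(f"generation {gen:3d}: std = {sigma:.4f}")

    # Anchored by the real sample, the fitted spread stays near the human
    # data's spread instead of collapsing: the stability the research
    # reports when human data remains in the mix.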
If we fail to reclaim our attention, the consequences will be more than academic. A nation that stops thinking deeply stops caring deeply and becomes easier to manipulate — by algorithms, advertisers, and by the establishment elites who benefit from an obedient, distracted populace. We can choose a different path: turn off the autopilot, read real books, revive neighborhood institutions, and elect leaders who understand that freedom depends on the cultivation of virtue and intellect, not on the endless churn of viral slop.

