I have a slightly odd hobby. I like reading old papers in computer science and then comparing them with what scientists produce today.

Take Shannon, for example.

When you read a paper like Shannon's, you often feel a kind of compression. The ideas are dense; the author is trying to explain as much as possible with as little noise as possible. Even when the mathematics is hard, there is usually a strong sense that the paper is reaching for a real principle.

Now compare that with much of what we see today in AI. In many cases I genuinely wonder who reads this stuff.

We have more papers, more pages, more citations, more taxonomies, more polished figures, more terminology. But there is little intellectual compression, and often not even slight progress. Many papers are broad but shallow. Some read like SEO optimisation: an LLM paper citing Albert Einstein, a long bibliography padded with unrelated work.

The volume of weak, overextended “research” has exploded. These papers aim for narrative scale rather than precision, and they do little to bring the field forward.
