We’re told to “think different,” to coin new phrases, to pioneer concepts nobody’s heard of before and share our thought leadership.
But in the age of AI-driven search, originality isn’t the boon we think it is. It may even be a liability… or, at best, a long game with no guarantees.
Because here’s the uncomfortable truth: LLMs don’t reward firsts. They reward consensus.
If multiple sources don’t already back a new idea, it may as well not exist. You can coin a concept, publish it, even rank #1 for it in Google… and still be invisible to large language models. Until others echo it, rephrase it, and spread it, your originality won’t matter.
In a world where AI summarizes rather than explores, originality needs a crowd before it earns a citation.
I didn’t intentionally set out to test how LLMs handle original ideas, but curiosity struck late one night, and I ended up doing just that.
While writing a post about multilingual SEO, I coined a new framework, something we called the Ahrefs Multilingual SEO Matrix.
It’s a net-new concept designed to add information gain to the article. We treated it as a piece of thought leadership with the potential to shape how people think about the topic in the future. We also created a custom table and image of the matrix.
Here’s what it looks like:
The article ranked first for “multilingual SEO matrix”. The image showed up in Google’s AI Overview. We were cited, linked, and visually featured: exactly the kind of SEO performance you’d expect from original, helpful content (especially when searching for an exact match keyword).
However, the AI-generated text response hallucinated a definition and went off on a tangent because it drew on other sources that talk more generally about the parent topic, multilingual SEO.
Following my curiosity, I then prompted various LLMs, including ChatGPT (4o), GPT Search, and Perplexity, to see how much visibility this original concept could actually get.
The general pattern I saw is that all LLMs:
- Had access to the article and image
- Had the capacity to cite it in their responses
- Included the exact term multiple times in responses
- Hallucinated a definition from generic information
- Never mentioned my name or Ahrefs, aka the creators
- When re-prompted, would occasionally give us zero visibility
Overall, it felt academically dishonest. Like our content was correctly cited in the footnotes (sometimes), but the original term we’d coined was repeated in responses while paraphrasing other, unrelated sources (almost always).
It also felt like the concept was absorbed into the general definition of “multilingual SEO”.
That moment is what sparked the epiphany: LLMs don’t reward originality. They flatten it.
This wasn’t a rigorous experiment, more of a curious follow-up. Especially since I made some mistakes in the original post that likely made it difficult for LLMs to latch onto an explicit definition.
However, it uncovered something fascinating that made me rethink how easy it can be to earn mentions in LLM responses. It’s what I think of as “LLM flattening”.
LLM flattening is what happens when large language models bypass nuance, originality, and innovative insights in favor of simplified, consensus-based summaries. In doing so, they compress distinct voices and new ideas into the safest, most statistically reinforced version of a topic.
This can happen at a micro and a macro level.
Micro LLM flattening
Micro LLM flattening occurs at a topic level, where LLMs reshape and synthesize information in their responses to fit the consensus or most authoritative pattern for that topic.
There are edge cases where this doesn’t happen, and of course, you can prompt LLMs for more nuanced responses.
However, given what we know about how LLMs work, they’ll likely continue to struggle to accurately connect a concept with a distinct source. OpenAI explains this using the example of a teacher who knows a lot about their subject matter but cannot accurately recall where they learned each distinct piece of information.
So, in many cases, new ideas are simply absorbed into the LLM’s general pool of knowledge.
Since LLMs work semantically (based on meaning, not exact word matches), even if you search for an exact concept (as I did for “multilingual SEO matrix”), they’ll struggle to connect that concept to the specific person or brand that originated it.
That’s why original ideas tend to either be smoothed out until they fit the consensus about a topic, or left out entirely.
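To make that behavior concrete, here’s a minimal sketch of semantic matching, using the open-source sentence-transformers library as a stand-in for whatever embedding model a real AI search system uses. The model name and example strings are illustrative assumptions, not data from our test:

```python
# Minimal sketch: semantic (embedding-based) matching against a coined term.
# Assumes: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose model

coined_term = "multilingual SEO matrix"
candidates = [
    "a framework for prioritizing which languages to target in SEO",   # the new idea
    "multilingual SEO means optimizing a site for several languages",  # generic consensus
    "hreflang tags tell search engines which language a page targets", # related topic
]

term_embedding = model.encode(coined_term, convert_to_tensor=True)
candidate_embeddings = model.encode(candidates, convert_to_tensor=True)

# Cosine similarity: higher means semantically "closer" in the model's eyes.
scores = util.cos_sim(term_embedding, candidate_embeddings)[0]
for text, score in zip(candidates, scores):
    print(f"{float(score):.3f}  {text}")
```

Because the generic “multilingual SEO” text scores nearly as close to the coined term as the original definition does, a retrieval step has little reason to prefer the originating source. The new idea gets pulled toward the consensus meaning of its parent topic.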
Macro LLM flattening
Macro LLM flattening can occur over time as new ideas struggle to surface in LLM responses, “flattening” our exposure to innovation and to explorations of new ideas about a topic.
This applies across the board, covering all new ideas people create and share. Because flattening can occur at a topic level, LLMs may surface fewer new ideas over time, trending towards repeating the most dominant information or viewpoints about a topic.
This happens not because new ideas stop accumulating, but because LLMs rewrite and summarize information, often hallucinating their responses.
In that process, they have the potential to shape our exposure to knowledge in ways other technologies (like search engines) cannot.
As the visibility of original ideas and new concepts flattens out, many newer or smaller creators and brands may struggle to be seen in LLM responses.
The pre-LLM status quo was how Google surfaced information.
Generally, if content was in Google’s index, you could see it in search results instantly, any time you searched for it. Especially when searching for a unique phrase only your content used.
Your brand’s listing in search results would display the parts of your content that matched the query verbatim:
That’s thanks to the “lexical” part of Google’s search engine, which still works by matching word strings.
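For contrast with the semantic sketch above, here’s a toy illustration of lexical matching. The documents and URLs are made-up placeholders; the point is that an exact-phrase lookup surfaces the one page containing a unique coinage every time, no consensus required:

```python
# Toy sketch: lexical (exact word-string) matching.
# Documents, URLs, and the query are illustrative placeholders.
documents = {
    "ahrefs.com/blog/multilingual-seo": "the Ahrefs Multilingual SEO Matrix is a framework for...",
    "example.com/guide": "multilingual SEO means optimizing content for several languages...",
    "example.org/hreflang": "hreflang tags signal language targeting to search engines...",
}

query = "multilingual seo matrix"

# An exact substring match always finds the unique phrase's single source.
hits = [url for url, text in documents.items() if query in text.lower()]
print(hits)  # ['ahrefs.com/blog/multilingual-seo']
```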
But now, even if an idea is correct, even if it’s useful, even if it ranks #1 in search: if it hasn’t been repeated enough across sources, LLMs often won’t surface it. It may also not appear in Google’s AI Overviews despite ranking #1 organically.
Even if you search for a unique term only your content uses, as I did for the “multilingual SEO matrix”, sometimes your content will show up in AI responses, and other times it won’t.
LLMs don’t attribute. They don’t trace knowledge back to its origin. They just summarize what’s already been said, again and again.
That’s what flattening does:
- It rounds off originality
- It plateaus discoverability
- It makes innovation invisible
That isn’t a data problem. It’s a pattern problem, one that skews toward consensus for most queries, even those where consensus makes no-sensus.
LLMs don’t match word strings; they match meaning, and meaning is inferred from repetition.
That makes originality harder to find, and easier to forget.
And if fewer original ideas get surfaced, fewer people repeat them. Which means fewer chances for LLMs to discover them and pick them up in the future.
LLMs appear to know all, but aren’t all-knowing. They’re confidently wrong a lot.
One of the biggest criticisms of AI-generated responses is that they’re often completely inaccurate… well, this is why. If they’re incapable of attributing an original concept to its creator, how can they gauge where else their interpretation of their knowledge is wrong?
Large language models will increasingly have access to everything. But that doesn’t mean they understand everything.
They accumulate knowledge, they don’t question it.
They collapse nuance into narrative.
And they treat repetition as truth.
And here’s what’s new: they say it all with confidence. LLMs possess no capacity for reasoning (yet) or judgment. But they feel like they do, and they will outright, confidently tell you they do.
Case in point, ChatGPT being a pal and reinforcing this concept that LLMs simulate judgment convincingly:
How meta is it that, despite having no real way of knowing these things about itself, ChatGPT convincingly responded as if it does, in fact, know?
Unlike search engines, which act as maps, LLMs present answers.
They don’t just retrieve information, they synthesize it into fluent, authoritative-sounding prose. But that fluency is an illusion of judgment. The model isn’t weighing ideas. It isn’t evaluating originality.
It’s just pattern-matching, repeating the shape of what’s already been said.
Without a pattern to anchor a new idea, LLMs don’t know what to do with it, or where to place it in the fabric of humanity’s collective knowledge.
This isn’t a new problem. We’ve always struggled with how information is filtered, surfaced, and distributed. But this is the first time those limitations have been disguised so well.
So, what do we do with all of this? If originality isn’t rewarded until it’s repeated, and credit fades once it becomes part of the consensus, what’s the strategy?
It’s a question worth asking, especially as we rethink what visibility actually looks like in the AI-first search landscape.
Some practical shifts worth considering as we move forward:
- Label your ideas clearly: Give them a name. Make them easy to reference and search. If it sounds like something people can repeat, they will.
- Add your brand: Including your brand as part of the idea’s label helps you earn credit when others mention the idea. The more your brand gets repeated alongside the idea, the higher the chance LLMs will also mention your brand.
- Define your ideas explicitly: Add a “What is [your concept]?” section directly in your content. Spell it out in plain language. Make it legible to both readers and machines (see the markup sketch after this list).
- Self-reference with purpose: Don’t just drop the term in an image caption or alt text; use it in your body copy, in headings, in internal links. Make it obvious you’re the origin.
- Distribute it widely: Don’t rely on one blog post. Repost to LinkedIn. Talk about it on podcasts. Drop it into newsletters. Give the idea more than one place to live so others can talk about it too.
- Invite others in: Ask collaborators, colleagues, or your community to mention the idea in their own work. Visibility takes a network. Speaking of which, feel free to share the ideas of “LLM flattening” and the “Multilingual SEO Matrix” with anyone, anytime 😉
- Play the long game: If originality has a place in AI search, it’s as a seed, not a shortcut. Assume it’ll take time, and treat early traction as a bonus, not the baseline.
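On making definitions machine-legible: one option is schema.org’s DefinedTerm markup alongside your “What is [your concept]?” section. Here’s a minimal sketch that generates the JSON-LD; the values are placeholders, and there’s no guarantee any given LLM pipeline consumes this markup, but it makes the term, its definition, and its home page explicit in one machine-readable block:

```python
# Minimal sketch: schema.org DefinedTerm JSON-LD for a coined concept.
# All field values below are placeholders, not real URLs.
import json

defined_term = {
    "@context": "https://schema.org",
    "@type": "DefinedTerm",
    "name": "LLM flattening",
    "description": (
        "When large language models bypass nuance, originality, and "
        "innovative insights in favor of simplified, consensus-based "
        "summaries of a topic."
    ),
    # Point this at the page where you originally define the term.
    "url": "https://example.com/blog/llm-flattening",
}

# Embed the output in a <script type="application/ld+json"> tag on the page.
print(json.dumps(defined_term, indent=2))
```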
And finally, decide what kind of recognition matters to you.
Not every idea needs to be cited to be influential. Sometimes, the biggest win is watching your thinking shape the conversation, even if your name never appears beside it.
Final thoughts
Originality still matters, just not in the way we were taught.
It’s not a growth hack. It’s not a guaranteed differentiator. It’s not even enough to get you cited these days.
But it’s how consensus begins. It’s the moment before the pattern forms. The spark that (if repeated enough) becomes the signal LLMs eventually learn to trust.
So, create the new idea anyway.
Just don’t expect it to speak for itself. Not in the current search landscape.