Trend: SEO is Dead, Long Live LLMO

With ChatGPT poised to replace Google Search, traditional SEO is dead: how can businesses ensure their information appears in ChatGPT’s single response? The answer is in-context learning with the LLM.

It was a cold winter day. The wind howled and the snow crunched under our feet. We had holed up in a cozy cafe, seeking shelter from the freezing air outside. Steam rose from my coffee and my friend’s cocoa as he touted “his” new discovery in the SEO world: ChatGPT, a revolutionary GPT-powered technology for creating realistic dialogues.

I knew this was going to be one of those conversations. The kind where I have to hold my tongue.

“Imagine, with ChatGPT, I can write perfect and accurate SEO articles in no time!” he exclaimed, his eyes shining with enthusiasm. I smiled and felt a pang of dread rise in my stomach. How could he know that Google might soon follow the path of the dinosaurs, replaced by a brand-new search engine powered by ChatGPT? Articles written for SEO would become moot. ChatGPT would generate responses directly for the user, and people would never see those SEO articles again. His work would become useless and he a fossil. All the time and effort he had put into improving web rankings would be wasted. All the blood, sweat, and tears would flow down the gutters of history.

Yet I remained silent, not wanting to reveal the cold, hard truth to him.

The cafe was filling up and people were coming in and out. The smell of freshly baked goods washed over me and made my stomach rumble. Children were playing hopscotch by the window, and the heat from the coffee made me sleepy.

Through the noise and distraction, my friend kept talking about ChatGPT, oblivious: “Yeah, that’s awesome, isn’t it? I never thought I’d get my hands on something like that. It’s amazing what technology can do! I’m sure this will be the future of SEO!”

I just nodded silently, but inside I couldn’t get rid of the feeling of doom. ChatGPT would really change his life. Probably not in the positive way he thought.

How SEO is dead, long live LLMO

What ChatGPT really is

After spending countless hours interacting with ChatGPT and being amazed by its funny jokes and helpful tips, many people see it as a milestone in conversational or creative AI. Its real significance, however, lies in using large language models (LLMs) for search:

By leveraging LLMs to store and retrieve vast amounts of information, ChatGPT has become the most advanced search engine available today.
Although its answers may seem creative, they are really “just” the result of interpolating and combining pre-existing information.

ChatGPT is search

ChatGPT is a search engine at its core. Just as Google indexes web pages by crawling the Internet and storing the analyzed information in a database, ChatGPT uses its LLM as a database that stores large amounts of common-sense knowledge distilled from training corpora.

When entering a question:

The LLM processes it with its encoder network, transforming the input sequence into a high-dimensional representation.
The decoder network then uses this representation, together with its pre-trained weights and attention mechanism, to identify the specific factual information the query asks for and to look up the LLM’s internal representation of that knowledge (or its nearest neighbors).
Once the relevant information has been retrieved, the decoder network uses its natural-language-generation capabilities to compose a response stating this fact.
This process happens in a fraction of a second, allowing ChatGPT to provide near-instant answers to a large number of questions.
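The steps above can be made concrete with a toy analogy. The sketch below is purely illustrative and hypothetical: it replaces the LLM's implicit knowledge with an explicit dictionary of facts, the encoder with a crude bag-of-words vectorizer, and the lookup with a nearest-neighbor search by cosine similarity. Real LLMs perform nothing this explicit, but the encode → retrieve → respond flow is the same.

```python
# Toy analogy of the pipeline described above: encode the query,
# find the nearest stored "fact" representation, then phrase an answer.
# FACTS, encode(), and answer() are illustrative inventions, not a real API.
from collections import Counter
import math
import re

FACTS = {
    "paris is the capital of france": "Paris is the capital of France.",
    "water boils at 100 degrees celsius": "Water boils at 100 °C.",
    "the earth orbits the sun": "The Earth orbits the Sun.",
}

def encode(text):
    # Crude stand-in for the encoder network: a bag-of-words vector.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Similarity between two sparse bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def answer(query):
    # Stand-in for the decoder's lookup: retrieve the closest stored fact.
    q = encode(query)
    best = max(FACTS, key=lambda f: cosine(q, encode(f)))
    return FACTS[best]

print(answer("What is the capital of France?"))
# → Paris is the capital of France.
```

The point of the analogy: the "answer" is only ever a retrieval from what was stored, which foreshadows the factuality caveats discussed below.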

ChatGPT is a modern Google search

ChatGPT can be a tough competitor to traditional search engines like Google. While traditional search engines are extractive and discriminative, ChatGPT’s search is generative and optimizes for top-1 performance, providing more personalized and user-friendly results. There are two main reasons why ChatGPT is well positioned to knock Google off its throne:

ChatGPT always returns a single result to the user. Unlike traditional search engines, which optimize precision and recall over their top-K results, ChatGPT directly optimizes top-1 performance.
ChatGPT phrases its responses in a natural, dialogue-like tone, making them easy to understand and interact with. This sets it apart from traditional search engines, which often return dry, paginated results that are hard to digest.
The future of search will be driven by top-1 performance: only the first result matters to users. Traditional search engines that serve endless pages of marginally relevant results overwhelm younger generations, who quickly get bored or frustrated by the sheer volume of information.
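The contrast between the two optimization targets can be sketched in a few lines. Everything here is made up for illustration: the document names, the relevance scores, and both search functions are hypothetical stand-ins, not real search APIs.

```python
# Contrast the two targets described above: a ranked top-K page of links
# (traditional search) vs. a single committed top-1 answer (generative search).
# The documents and scores below are invented for illustration.
scored_docs = [
    ("llmo-guide.html", 0.91),
    ("seo-tips.html", 0.87),
    ("chatgpt-faq.html", 0.82),
    ("old-blog-post.html", 0.41),
]

def traditional_search(docs, k=3):
    # Google-style: hand the user K ranked links to sift through.
    return [url for url, score in sorted(docs, key=lambda d: -d[1])[:k]]

def generative_search(docs):
    # ChatGPT-style: commit to the single best result and answer with it.
    url, _ = max(docs, key=lambda d: d[1])
    return f"According to {url}, ..."

print(traditional_search(scored_docs))
# → ['llmo-guide.html', 'seo-tips.html', 'chatgpt-faq.html']
print(generative_search(scored_docs))
# → According to llmo-guide.html, ...
```

Note the design consequence: in the top-1 regime, a document that ranks second is never seen at all, which is exactly why SEO as we know it loses its leverage.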

Also, in many scenarios you really only want one answer: think of virtual assistants or smart speakers. For these, ChatGPT’s focus on top-1 performance is especially valuable.

ChatGPT is generative but not creative

You can think of the LLM behind ChatGPT as a Bloom filter: a probabilistic data structure used to store an information space efficiently. Bloom filters allow fast, approximate queries but do not guarantee that the information they return is correct. For ChatGPT, this means that responses generated by the LLM:

are not creative;
are not guaranteed to be factual.
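To make the Bloom-filter analogy concrete, here is a minimal implementation. It is a standard textbook sketch, not anything from ChatGPT's actual architecture: membership queries are fast and space-efficient, a negative answer is always reliable, but a positive answer may be a false positive — much like an LLM confidently emitting a plausible-but-wrong fact.

```python
# Minimal Bloom filter illustrating the analogy above: compact,
# fast, approximate — and capable of answering "yes" incorrectly.
import hashlib

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive `hashes` bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def __contains__(self, item):
        # Never returns False for an added item, but may return True
        # for an item that was never added (a false positive).
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("Paris is the capital of France")
print("Paris is the capital of France" in bf)  # → True (stored fact)
```

The parallel to the LLM: the structure stores a compressed trace of what it has seen, so lookups are cheap, but compression is exactly what opens the door to confident wrong answers.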
To better understand this, let’s look at some illustrative examples. To keep things simple, we’ll use a set of points to represent the large language model’s training data; in practice, each point would represent a natural-language sentence. With this, we can see how the LLM behaves during training and at query time:

During training, the LLM builds a continuous manifold from the training data, which makes it possible to explore any point on that manifold. For example, if a cube represents the learned manifold, its vertices would be defined by the training points. The goal of training is to find a manifold that covers as much of the training data as possible.

At query time, the LLM’s answer is always drawn from this learned manifold, which is spanned by the training data. However vast and complex the learned manifold may be, remember that the LLM only provides answers that are interpolations of the training data. Its ability to traverse the manifold and produce answers does not constitute creativity; true creativity lies outside the bounds of the learned manifold.

The same illustration shows why the LLM cannot guarantee factuality. The truth of the training data, represented by the vertices of the cube, does not automatically extend to every other point on the manifold; claiming otherwise would defy logical reasoning.
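The interpolation argument can be worked through numerically. The setup below is entirely fictional: two "training facts" about a made-up city's population, with the learned manifold reduced to the line segment between them. Every point on that segment is reachable by the model, yet only the endpoints were ever verified.

```python
# Sketch of the interpolation argument above. Toy, invented data:
# each point encodes (year, population_in_millions) for a fictional city.
p1 = (2000, 1.0)   # training fact: population was 1.0M in 2000
p2 = (2020, 3.0)   # training fact: population was 3.0M in 2020

def interpolate(a, b, t):
    # Any t in [0, 1] yields a point on the learned manifold
    # (here, simply the line segment between the two training points).
    return tuple(a[i] + t * (b[i] - a[i]) for i in range(len(a)))

mid = interpolate(p1, p2, 0.5)
print(mid)  # → (2010.0, 2.0)
# Plausible and on the manifold, yet nothing guarantees the real
# 2010 population was 2.0M: truth at the vertices does not
# propagate to the interior.
```

This is the cube-vertex point in miniature: the model can emit (2010.0, 2.0) as fluently as either endpoint, and fluency is all the manifold guarantees.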

SEO is Dead, Long Live LLMO

As SEO wanes, LLMO rises.

What is in-context learning?

Why does in-context learning sound like magic?

Why does in-context learning even work?

A deeper understanding of in-context learning

Known limitations of in-context learning
