A brief history of search engines, Google Search and SEO.

A Brief History of SEO & Search

The history of SEO and search engines is a long and complex one. Today, it is strange to imagine a time when Google was not the dominant search engine (although that time didn’t last very long). In this blog, I will try to briefly outline the history of search engines and how people have attempted to manipulate their website rankings since the beginning of time! Or at least since the beginning of the world wide web.

Anyhow, let’s get into the brief history of SEO & Search:

The 90s

The first ever internet search engine, Archie, was created in 1990. Archie actually predated the world wide web: it indexed the file listings of public FTP servers, letting users search by filename rather than page content. Even so, it laid the foundations for searching the web on a principle still used today: matching results against the keywords in a user’s query.

The decade saw quite a few new search engines come to light. During this time, the method for discovering new pages and websites was not as established as it is today. Some search engines relied on users submitting their websites and pages to be indexed, while others ran their own bots to crawl the web. These bots, however, were slow and unreliable: it was common for a crawler to work through only a single server at a time.

The Google.com domain was first registered in 1997, and by 1998 Google had officially launched, with Sergey Brin and Lawrence Page publishing a paper at Stanford: “The Anatomy of a Large-Scale Hypertextual Web Search Engine.” This is where they laid out their vision for Google as a search engine and first described “PageRank” – the algorithm used to measure the importance of a web page based on the links pointing to it. Before PageRank, search engines had no documented way of ranking results beyond how many times the exact search query appeared in the page content.
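The core idea of PageRank – a page is important if important pages link to it – can be captured in a few lines. Here’s a minimal sketch of the simplified iterative version described in the 1998 paper; the example graph, damping factor and iteration count are illustrative assumptions, not values from the paper:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Simplified PageRank. `links` maps each page to the pages it links to."""
    pages = list(links)
    n = len(pages)
    # Start with an even score across all pages.
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        # Each page keeps a small baseline score (the "teleport" term)...
        new_rank = {page: (1.0 - damping) / n for page in pages}
        # ...and shares the rest of its current score equally among its outlinks.
        for page, outlinks in links.items():
            if outlinks:
                share = rank[page] / len(outlinks)
                for target in outlinks:
                    if target in new_rank:
                        new_rank[target] += damping * share
        rank = new_rank
    return rank

# Hypothetical four-page web: pages "b", "c" and "d" all link to "a".
graph = {"a": ["b"], "b": ["a"], "c": ["a"], "d": ["a"]}
scores = pagerank(graph)
```

In this toy graph, page “a” ends up with the highest score because three pages link to it – exactly the behaviour that made links, rather than keyword counts alone, the basis of Google’s rankings.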

The end of the 90s saw three major search engines stake their claims: Google, Yahoo and AOL. It’s impossible to pinpoint an exact year when Google became the clear dominant search engine, but it is generally placed around 2002-2004. By 2000, Yahoo’s search results were actually powered by Google. By the time Yahoo decided to build its own search engine and serve its own results in 2004, the phrase “google it” had already become commonplace in the English language.

The rest of this blog will be dedicated to how Google has changed and evolved ever since.

Interesting fact: Brin and Page tried to sell Google to the rival search engine Excite in 1999 – first for $1 million and then again for $750,000 – but Excite CEO George Bell rejected both offers.


The early 2000s are when Google became the most popular search engine. By this time, Google had realised people were out to manipulate the search results because they knew there was money to be made on the web. Google published the now famous Webmaster Guidelines in an attempt to separate good SEO practices from bad ones (see white hat vs black hat SEO).

Put simply, webmasters did not follow these guidelines, because Google’s ranking systems at the time didn’t actually reward people for following them. The search engine was still all about how many times the page mentioned the search query and how many backlinks the website and page had. Ranking a website in Google during this time was purely about the quantity of keywords and backlinks – something Google would spend the next 20 years trying to fix.


2003 saw the “Florida” search update. This was the first major search update by Google and its first attempt at punishing black hat SEO. The Florida update targeted over-optimised websites, demoting pages that engaged in heavy keyword stuffing.


In 2004, Google decided to try their hand at voice search! Today, we can simply yell out to a machine, ask it a question and it will give us the result – it was not so easy in 2004 though. If you wanted to perform a Google voice search back in 2004, you had to follow these instructions:

  • Pick up the phone and call the automated voice search system (650-318-0165)
  • After the prompt, say your search query.
  • Click a link on the Google page and a new window would open with the voice search results.
  • To perform a new search, just say another query and the window would update with the new results.

It’s quite a long-winded process and didn’t catch on at all, but it’s still interesting to see how far back Google tried to incorporate voice search.


2005 saw a very big change in SEO, and it wasn’t just from Google. In 2005, Google teamed up with Yahoo and Microsoft to introduce a new HTML attribute value known as “nofollow.” Added to a hyperlink as rel="nofollow", it tells search engines “hey, don’t follow this link and don’t use it for your rankings!” From a user’s perspective, a normal link and a nofollow link work exactly the same. From a search engine’s perspective, a nofollow link is treated as if the link didn’t exist: the search engine won’t use the link to discover the page and won’t use it when ranking the page. Today, it is not so clear cut: nofollow is now more of a hint than a directive as to whether or not to follow the link or use it for rankings.

Another big thing in 2005 was the launch of Google Analytics. This is still used today to measure a website’s traffic, engagement, conversions etc.

And finally, 2005 is when Google first started to serve personalised search results. This meant rankings would incorporate a user’s search and browsing history: websites the user frequented would (sometimes) outrank websites the user had never visited before.


Bing was first unveiled and ready for use in 2009. Microsoft was very confident in its new search engine: it initially marketed Bing as the Google-killer, directly rivalling Google in its adverts. As we know, this never happened. Not only that, but Bing failed to bring anything new to the search engine market. If anything, it went in the opposite direction: Bing is seen as a more traditional search engine due to the higher importance it places on keyword frequency, keywords in URLs/domain names, and capitalised/bold keywords – all things that are considered borderline black hat SEO in Google.

In addition to Bing, 2009 also saw Google unveil something new: Caffeine. The Caffeine update is one of the biggest changes to Google to date – and it had nothing to do with rankings! Caffeine was all about how Google crawled and indexed the web: an attempt to crawl more frequently, grow the index at a faster rate to reflect the growing web, and provide fresher search results.

Caffeine wasn’t a tweak to Google’s indexing system, it was a complete rebuild. Google allowed professional SEOs and webmasters to preview the system in 2009 and waited almost a full year before releasing it in 2010: the stakes were clearly high in Google’s eyes. Caffeine has since stood the test of time and remains the way Google indexes the web in 2021.


In 2010, Google demonstrated and launched a new way of searching known as Google Instant. Google Instant looked and operated the same way as normal Google, except the search results page would show up as soon as you started typing – no longer requiring you to hit the enter key.

Google Instant represented a key shift in the focus of Google: as a company and search engine, Google recognised the importance of user experience, and in-turn realised how important speed is for online users.

Google Instant is no longer usable but you can watch a video of the demonstration here: https://www.youtube.com/watch?v=WEkwdB6afvo


2011 saw one of the most volatile updates for Google Search, and represented the start of a new era for Google, an era dedicated to finding new methods of demoting websites that continued to rank off the back of blackhat SEO tactics.

The initial strike came in 2011 with the Panda update. Panda targeted content farms: huge websites with lots of low-quality content pages. These pages are aimed at bringing in as much traffic as possible, with plenty of ads served on them. The aim is to make money from those ads: many display adverts on the web are served on a pay-per-view basis, so more traffic = more ad views = more money. Panda would demote or even deindex these content farms.

Google Search updates were not run as they are today: they had to be frequently updated and refreshed manually. For example, the Panda update was refreshed at least 28 times between 2011 and 2015 before it was finally incorporated into the core algorithm, thereby becoming a real-time ranking factor.


The next strike against black hat SEO came in 2012 with the infamous Penguin update. Although PageRank had come a long way since the early days, backlinks were still easy to spam and rank with: this all changed with Penguin. The update targeted websites that benefitted from manipulative link building, spammy links, bad anchor text and so on – all quite common SEO practices before 2012, because they worked and had worked for a very long time.

Just like Panda, Penguin was refreshed various times before being incorporated into the core algorithm.

In addition to Penguin, 2012 also saw the release of Knowledge Panels. These are areas of the search results page that provide additional information about a search query. They can usually be found on the right-hand side of the SERP, and are usually generated from content on a trusted authority website such as Wikipedia.


In 2013, Google launched the Hummingbird update. Unlike Panda and Penguin, which were add-ons to the existing Google core algorithm, Hummingbird was considered more of a complete overhaul of the core algorithm.

Hummingbird was not about targeting spammy websites or dishing out more penalties like the previous two major updates. It was all about improving Google’s understanding of a searcher’s intent and the knowledge graph (how things relate to each other based on semantics). Here’s an example:

If you search Google for [best chinese], the SERP will be populated with results about Chinese food, Chinese restaurants, takeaways and so on. But how does Google know you’re searching for food when you never mentioned food? This is a great example of Google using machine learning to understand search intent. Anyone searching [best chinese] is almost certainly looking for Chinese food, and every user who clicks on a Chinese food result reinforces that signal. Eventually, through machine learning, the algorithm has learnt that people searching for [best chinese] want results about Chinese food, which is why the SERP is now filled with them.

Hummingbird is the first major example of Google moving away from raw keyword matching: by training its algorithms to better understand search intent, Google could work out what a user wants to see without being told explicitly.

Interesting fact: the machine learning system that helps Google’s algorithms interpret search intent is known as RankBrain, which Google publicly confirmed in 2015.


In 2015, Google showed its flexibility by responding to market insights. Since 2010, the share of Google searches made from mobile phones had been skyrocketing, with the trend clearly showing that mobile would eventually become the majority device for Google searches. However, the web as a whole was not ready to accommodate these mobile users: pre-2015 it was common to find websites creating separate pages or even dedicated subdomains for mobile users. This created a bad user experience, as mobile users were served different content to desktop users – in most cases less content. It also posed a dilemma for search engines like Google, which rely on crawling that content in order to rank the page: how can Google rank a page on its content when a mobile searcher can’t access that content? In 2015, Google rolled out its mobile-friendly update (nicknamed “Mobilegeddon”), with mobile-first indexing to follow.

Essentially, Google told webmasters that Googlebot would eventually start crawling the web as mobile-only: so if your page hid content from mobile users, this content would also be hidden from Google and therefore not indexable. The only reason Mobilegeddon and mobile-first indexing are seen as such big updates is that so much of the web failed to accommodate mobile users.

Today, responsive web design is almost a requirement for building a website. The trend seen pre-2015 came to fruition: by Q3 2016, the majority of Google searches (51%) were being made from mobile devices. As a result of these mobile-focused updates, Google can rightly be seen as a massive catalyst for the popularity of responsive web design, and the reason mobile users can view the exact same content as desktop users.


In 2019, Google announced a new update known as the BERT update. This update aimed to improve the way Google’s algorithms understand new searches. Short for Bidirectional Encoder Representations from Transformers, BERT is a natural language processing model that can handle tasks such as entity recognition, part-of-speech tagging and question answering. The biggest improvement for Google was its ability to better understand new words and queries. To understand how big a problem this was: back in 2017, Google shared that around 15% of all global searches were brand new – Google had never seen those queries before and, as a result, couldn’t provide the best possible SERP because it simply didn’t understand the query very well. With BERT, Google gained a better understanding of these new queries and could serve better search results.


So there we go: a brief history of SEO and Search from the 90s all the way up to 2019. There’s plenty more to write about, but it’s pretty hard to condense 30 years of history into a single weekly blog. There’s also plenty of upcoming stuff I could cover, such as Google’s MUM, but this blog needs to end sometime, so I think I’ll wait for that to release before I include it here. So that’s it for the blog!