How Scraping Changed SEO



SEO has always been a unique endeavor. Not only are participants pitted against each other as they battle for the top spot on Search Engine Results Pages (SERPs), but in almost all cases, data-driven guesswork is the best tool at their disposal.

Well, these days it may not be as much guesswork as it used to be. Since the advent of web scraping, SEOs have been able to collect tons of data, which is constantly being turned into what is called an "organic channel strategy", usually with hopes of reaching number one in the SERPs attached.

About the Author

Rasa Sosnovskytė is SEO Manager at Oxylabs.io.

Unsurprisingly, having tons of data comes in handy when your primary line of work is reverse engineering an almost magical black box. Today, we could hardly imagine starting SEO from scratch. It is now an interwoven web of data, practical experience and hypotheses. But above all, data.

But how did we get there?

According to the Search Engine Journal, SEO as a practice began in the late 1990s. The fun version of the genesis story is that the manager of Jefferson Airplane (a rock band) was unhappy that the band's website was ranking on the 4th page of results rather than the 1st. The boring version is that the words "search engine optimization" were first used by John Audette, the owner of Multimedia Marketing Group, as part of a marketing strategy.

Whichever version we choose to believe, SEO back then bore little resemblance to its current form. Search engines had not yet achieved dominance, and the Internet landscape was still dotted with human-edited directories such as Yahoo Directory or DMOZ.

People, as it was called, "surfed the web," hopping from one website to another; finding what you were looking for could take more than a few clicks. As a result, a big part of SEO back then was about getting websites listed properly in those directories.

Eventually, search engines replaced human-edited directories. While some of them were still being updated as recently as 2017, I had not heard of anyone actually using them for a long time before that. I doubt you had either.

There was a problem, however. Human cataloging is quite predictable. Search engines are not, especially when their inner workings are a closely guarded secret. SEO would become less of a cataloging exercise and more of an engineering one.

SEOs got lucky, however. In the beginning, search engines were not that complex. Some may remember the days when stuffing a keyword into a page ten million times was enough to land it on the first page of the SERPs. Such simplistic ranking signals were quickly superseded by more sophisticated algorithms, most notably Google's PageRank around the turn of the millennium.

Google's arrival on the scene was a step toward real complexity. Websites were now rated on the number of inbound and outbound links instead of just keywords, an approach Google acknowledges was borrowed from academic citation analysis. Of course, it was not very difficult to abuse either. And that is exactly what happened.
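To get a feel for the link-based idea, here is a rough sketch of PageRank in its simple power-iteration form. The toy link graph, damping factor and iteration count are illustrative assumptions, not a reconstruction of Google's actual system.

```python
# Minimal PageRank sketch (illustrative only, not Google's implementation).
# The toy link graph and damping factor below are assumptions for the example.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    ranks = {page: 1 / n for page in pages}

    for _ in range(iterations):
        new_ranks = {page: (1 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                share = ranks[page] / n
                for p in pages:
                    new_ranks[p] += damping * share
            else:
                share = ranks[page] / len(outlinks)
                for target in outlinks:
                    new_ranks[target] += damping * share
        ranks = new_ranks
    return ranks

# Hypothetical three-page web: A links to B and C, B links to C, C links to A.
toy_graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(pagerank(toy_graph))
```

The point of the model is visible even at this scale: a page's rank depends on who links to it, which is precisely why link farms and paid links became the abuse vector of choice.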

Since then, ranking algorithms have only grown in complexity, with Google disclosing less and less about them. On rare occasions, John Mueller, Google's Search Advocate, steps out from behind the black box to answer certain questions, most of the time in vague ways.

Due to constant changes, updates and adjustments, current rankings are influenced by a huge number of factors. People on the outside may even think SEO specialists speak a different language. No wonder tongue-in-cheek job titles like "Level 9001 SEO Wizard" were a trend for a while.

But if ranking algorithms became complex so quickly, how did SEOs keep up? Mostly through reverse engineering: SEOs shared their knowledge with each other, tested hypotheses and wrote up their findings on blogs.

A revolution occurred around the 2010s, when web scraping became more ubiquitous. Household names such as Ahrefs and Mangools were founded on the promise that technology would change SEO.

The all-seeing eye

When you need to figure out how something works without any inside knowledge, the best way is to try to break it. If that is not possible, the next best way is to collect lots of data, analyze it, and draw conclusions.

That is what a lot of SEOs have done. Following in Google's footsteps, developers created scraping and web-crawling tools that collect and index vast swathes of data.

Simply put, web scrapers are applications that run through a set of URLs (some of which they can discover on their own), download the data, and present it in a readable format. Data is collected from a variety of sources, from website homepages to the SERPs themselves; tools focused on the latter are sometimes referred to as SERP scrapers.
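A bare-bones sketch of that idea might look like the snippet below: fetch a list of URLs, pull out the page title and outbound links, and return them in a uniform structure. The URL is a placeholder, and a production scraper would also need rate limiting, retries, robots.txt handling and JavaScript rendering.

```python
# Minimal web scraper sketch: download each URL, parse the HTML, and collect
# the page title plus outbound links. Placeholder URL; not production-ready.
import requests
from bs4 import BeautifulSoup

def scrape(urls):
    results = []
    for url in urls:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        title = soup.title.string.strip() if soup.title and soup.title.string else ""
        results.append({
            "url": url,
            "title": title,
            "links": [a["href"] for a in soup.find_all("a", href=True)],
        })
    return results

if __name__ == "__main__":
    for page in scrape(["https://example.com"]):
        print(page["url"], "->", page["title"], f"({len(page['links'])} links)")
```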

SERP scrapers are used more frequently than most SEO specialists realize. While small marketing agencies may get their data elsewhere, many of the tools they rely on daily for insights depend on SERP scrapers. According to our data, use of our SERP Scraper API has been growing steadily, with requests up 36% year-over-year.

The idea is pretty brilliant, really. SERPs are the best objective metric available. While Google may shake up the results from time to time, most of them stay static unless something on the website changes, and that is exactly what you want to keep an eye on.

Unsurprisingly, SERP scrapers provide access to an all-seeing eye. Small shifts are noticed and sent to a cold, dark place for analysis. In turn, SEO tools provide recommendations based on the collected data, and specialists write lengthy essays about them.
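As a rough illustration of how those shifts get noticed, a tool might diff two SERP snapshots for the same keyword and flag every URL whose position moved. The snapshot format here (ordered lists of result URLs) is an assumption made for the example.

```python
# Sketch of flagging ranking shifts between two SERP snapshots for one keyword.
# The snapshot format (ordered lists of result URLs) is assumed for illustration.

def rank_changes(yesterday, today):
    """Return {url: (old_position, new_position)} for every URL whose rank moved."""
    old_pos = {url: i + 1 for i, url in enumerate(yesterday)}
    new_pos = {url: i + 1 for i, url in enumerate(today)}
    changes = {}
    for url in set(old_pos) | set(new_pos):
        before, after = old_pos.get(url), new_pos.get(url)
        if before != after:
            changes[url] = (before, after)  # None means entered or dropped out
    return changes

# Hypothetical snapshots:
yesterday = ["site-a.com", "site-b.com", "site-c.com"]
today = ["site-b.com", "site-a.com", "site-d.com"]
print(rank_changes(yesterday, today))
```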

And so, the concerted effort to reverse engineer a black box continues into eternity (or, at least, for the foreseeable future). Still, scraping has been a boon for SEOs. So much so that most of the profession now revolves around it. You won't find many SEOs who don't have dedicated tools, and without scraping, those tools could not exist.

Conclusion

SEOs and search engines are in a constant, but rather friendly, tug-of-war. The former are always trying to figure out the latest changes in ranking algorithms; the latter keep making those algorithms more complex over time, partly to provide better results and partly to curb abuse.

Unfortunately for most SEOs, the tugs from the search engine side are usually pretty strong. The only thing that lets them keep up are the Skynet-esque robots (benevolent ones, in this case) that make it easier to predict the tugs and recover from them.

