Webmasters and content providers began optimizing websites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all webmasters needed only to submit the address of a page, or URL, to the various engines, which would send a web crawler to crawl that page, extract links to other pages from it, and return information found on the page to be indexed. The process involves a search engine spider downloading a page and storing it on the search engine's own server. A second program, known as an indexer, extracts information about the page, such as the words it contains, where they are located, and any weight for specific words, as well as all links the page contains. All of this information is then placed into a scheduler for crawling at a later date; a simplified sketch of this crawl, index, and schedule loop appears at the end of this section.

Website owners recognized the value of a high ranking and visibility in search engine results, creating an opportunity for both white hat and black hat SEO practitioners. According to industry analyst Danny Sullivan, the phrase "search engine optimization" probably came into use in 1997. Sullivan credits Bruce Clay as one of the first people to popularize the term.

Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines like ALIWEB. Meta tags provide a guide to each page's content. Using metadata to index pages proved less than reliable, however, because the webmaster's choice of keywords in the meta tag could be an inaccurate representation of the site's actual content. Flawed data in meta tags, such as keywords that were inaccurate, incomplete, or falsely attributed, created the potential for pages to be mischaracterized in irrelevant searches. Web content providers also manipulated other attributes within the HTML source of a page in an attempt to rank well in search engines. By 1997, search engine designers recognized that webmasters were making efforts to rank well in their search engines, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as Altavista and Infoseek, adjusted their algorithms to prevent webmasters from manipulating rankings.

By relying heavily on factors such as keyword density, which were exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation; the first sketch below shows how easily such a signal can be inflated. To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. This meant moving away from heavy reliance on term density toward a more holistic process of scoring semantic signals. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate. Since the success and popularity of a search engine is determined by its ability to produce the most relevant results for any given search, poor-quality or irrelevant search results could lead users to seek out other search sources. Companies that employ overly aggressive techniques can even get their client websites banned from the search results.
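To make the keyword-density problem concrete, here is a minimal sketch of such a score. The function name keyword_density and the sample strings are illustrative assumptions, not taken from any actual engine; the point is only that a stuffed page scores far higher than an honest one, despite being no more relevant to a reader.

```python
from collections import Counter

def keyword_density(text: str, keyword: str) -> float:
    """Fraction of words in `text` matching `keyword` -- the kind of
    easily gamed signal early engines leaned on."""
    words = text.lower().split()
    if not words:
        return 0.0
    return Counter(words)[keyword.lower()] / len(words)

honest = "our shop sells handmade leather shoes and boots"
stuffed = "shoes shoes shoes cheap shoes buy shoes best shoes shoes"
print(keyword_density(honest, "shoes"))   # 0.125
print(keyword_density(stuffed, "shoes"))  # 0.7
```

Because the score depends only on text the page author writes, it can be driven arbitrarily high, which is precisely why engines moved toward signals outside the webmaster's direct control.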
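And here is a minimal sketch of the crawl, index, and schedule loop described at the start of this section, using only the Python standard library. The names crawl and PageParser, the breadth-first queue, and the ten-page cap are illustrative assumptions; production crawlers add robots.txt handling, politeness delays, distributed scheduling, and much more.

```python
from collections import Counter, deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class PageParser(HTMLParser):
    """Collects outgoing links and visible words from one HTML page."""

    def __init__(self):
        super().__init__()
        self.links = []
        self.words = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.words.extend(data.lower().split())


def crawl(seed_url, max_pages=10):
    """Breadth-first crawl: download pages, index their words, and
    schedule newly discovered links for a later visit."""
    schedule = deque([seed_url])   # the scheduler: URLs awaiting a crawl
    seen = {seed_url}
    index = {}                     # url -> word frequencies (the "weight")
    while schedule and len(index) < max_pages:
        url = schedule.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue               # skip unreachable pages
        parser = PageParser()
        parser.feed(html)
        index[url] = Counter(parser.words)     # indexer step
        for link in parser.links:              # scheduler step
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                schedule.append(absolute)
    return index
```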