Sourced from Axandra
Search engines use automated software programs that crawl the web. These programs, called “crawlers” or “spiders”, go from link to link and store the text and the keywords from the pages in a database. “Googlebot” is the name of Google’s spider software.
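To make the link-to-link idea concrete, here is a minimal crawler sketch in Python. It is only an illustration of the principle, not how Googlebot actually works; the start URL, the page limit and the SQLite database are placeholder choices.

import re
import sqlite3
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkParser(HTMLParser):
    """Collects the href targets of all <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10):
    """Go from link to link and store page text in a small database."""
    db = sqlite3.connect("pages.db")
    db.execute("CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, text TEXT)")
    queue, seen = [start_url], set()

    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except Exception:
            continue  # skip pages that cannot be fetched

        # Store a crude plain-text version of the page for later indexing.
        text = re.sub(r"<[^>]+>", " ", html)
        db.execute("INSERT OR REPLACE INTO pages VALUES (?, ?)", (url, text))
        db.commit()

        # Queue every link found on the page and continue crawling.
        parser = LinkParser()
        parser.feed(html)
        queue.extend(urljoin(url, link) for link in parser.links)

    db.close()

crawl("http://www.example.com/")  # placeholder start URL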
Many webmasters have noticed that there are now two different Google spiders that index their web pages.
The normal Google spider identifies itself in server log files like this:
184.108.40.206 – “GET /robots.txt HTTP/1.0” 404 1227 “-” “Googlebot/2.1 (+http://www.google.com/bot.html)”
And the additional, new Google spider looks like this:
220.127.116.11 – “GET / HTTP/1.1” 200 38358 “-” “Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)”
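If you want to check your own log files, a small script can separate the two spiders by their user agent strings. The following Python sketch assumes the simplified Apache log layout shown in the examples above; the regular expression and the sample line are illustrative only.

import re

# Matches the simplified Apache log layout shown above:
# IP - "REQUEST" STATUS SIZE "REFERER" "USER-AGENT"
LOG_PATTERN = re.compile(
    r'^(?P<ip>\S+) .*?"(?P<request>[^"]*)" (?P<status>\d{3}) \S+ '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

def classify(line):
    """Report which Googlebot variant (if any) produced a log line."""
    match = LOG_PATTERN.search(line)
    if not match:
        return "unparsed line"
    agent = match.group("agent")
    if agent.startswith("Mozilla/5.0") and "Googlebot" in agent:
        return "new Googlebot (Mozilla/5.0 user agent)"
    if agent.startswith("Googlebot"):
        return "classic Googlebot"
    return "other visitor"

sample = ('220.127.116.11 - "GET / HTTP/1.1" 200 38358 "-" '
          '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"')
print(classify(sample))  # prints: new Googlebot (Mozilla/5.0 user agent)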
What is the difference between these two Google spiders?
The new Google spider uses a slightly different user agent string: “Mozilla/5.0 (compatible; Googlebot/2.1; …)”. In addition, as the log line above shows, it requests pages with HTTP/1.1 instead of HTTP/1.0. The new spider might therefore be able to understand more content formats, including compressed HTML.
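Compressed HTML only reaches a crawler if the web server decides to send it, typically based on the Accept-Encoding header of the request. The sketch below shows how a server might make that decision in Python; it is a simplified illustration, and the function name and the HTTP/1.1 check are assumptions, not anything Google has documented.

import gzip

def build_response(html, request_headers, protocol="HTTP/1.1"):
    """Gzip-compress the HTML when the requesting client says it accepts it."""
    body = html.encode("utf-8")
    response_headers = {"Content-Type": "text/html; charset=utf-8"}

    # A crawler that understands compressed HTML advertises it here.
    accept_encoding = request_headers.get("Accept-Encoding", "")
    if protocol == "HTTP/1.1" and "gzip" in accept_encoding:
        body = gzip.compress(body)
        response_headers["Content-Encoding"] = "gzip"

    response_headers["Content-Length"] = str(len(body))
    return body, response_headers

# A spider that sends "Accept-Encoding: gzip" receives the smaller, compressed body.
body, headers = build_response("<html>...</html>", {"Accept-Encoding": "gzip,deflate"})
print(headers.get("Content-Encoding"), headers["Content-Length"])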
Why does Google do this?
One theory is that Google’s extensive crawling might be a panic reaction: the index needs to be rebuilt from the ground up within a short period of time, possibly because the old index contains too many spam pages.
What does this mean to your web site?
It’s likely that the new spider precedes a major Google update. To obtain long-term results on search engines, it’s better to use ethical search engine optimisation methods. General information about Google’s web page spider can be found at http://www.google.com/bot.html.