Search Engine Optimization and Marketing for E-commerce

Google's new Hummingbird algorithm...shall we dance?

by Andrew Kagan 27. September 2013 05:04

Forget about Penguin, Panda, Caffeine and all the other tweaks to the 200+ ranking factors Google used in its search algorithms...they are all dead. Google announced that it had quietly rolled out Hummingbird, a complete redesign of its core ranking algorithm, confirming what SEOs had already seen...shifts in search results across many categories.

Google has not revealed much about Hummingbird, except to say it is more adept at teasing out the meaning of entire sentences in search queries, instead of just splitting them into keywords and weighting each word to produce results. Much in the way Apple's Siri attempts to interpret spoken commands, Hummingbird will try to contextualize keywords within a complete sentence to better estimate intent. The result should be that searches surface specific pages within a website instead of just returning the website's home page. The interim result is the infuriating shuffling of page rankings referred to as the "Google Dance."

While SEOs will be busy for the next couple of months measuring changes in SERPs and trying to interpret the weighting of various ranking factors, the core message is the same: focus on developing unique and relevant content, which will always be well-ranked no matter the algorithm.


Cleaning up bad links with Google’s Disavow Tool

by akagan 30. October 2012 11:05

Google’s October 16 release of its new link-disavow tool is a welcome addition to the webmaster’s arsenal, allowing you to discount inbound links (“backlinks”) from poorly ranked or negatively ranked websites. The disavow-links tool joins a similar tool released by Bing earlier in the year.

Inbound links (links from other websites to pages on your website) are normally a good thing, helping Google and other search engines gauge how much interest there is in your webpages. Widespread abuse of this metric, however, primarily through the use of link-farms and paid-link schemes, has led to significant and frequent changes in Google’s ranking algorithms, creating churn in search engine results, especially for webpages with thousands of inbound links. The problem has been compounded by “grey hat” SEOs manipulating search results by deliberately pointing negatively weighted links at their competitors, also known as “link spam.”

Google already alerts webmasters (through its Webmaster Tools website) when there are questionable or “unnatural” links pointing at their website, which will likely hurt the affected pages’ search rank. Until now, however, there wasn’t anything you could do about those links, assuming you weren’t responsible for them.

Google’s new disavow-link tool addresses negative links by allowing users of Webmaster Tools to upload a text file listing links for Google to ignore. As most SEOs will realize, if Google is alerting you to “unnatural” links pointing at your website, you are likely already being penalized for them. The best defense is to monitor inbound links aggressively in Webmaster Tools and to disavow bad links promptly using the new tool. Note that Google does not guarantee it will honor disavow requests in every situation.

Google’s Spam Czar Matt Cutts posted a video explaining the process in detail. The text file has a simple format of one URL per line:

badsite.com/questionableBackLink.htm

Fortunately, you can also list entire websites using the shorthand format:

domain:badsite.com

Each time you add to this list, you must re-upload the complete file, not just the new entries. Cutts cautions that extra care should be used with the disavow tool, to avoid inadvertently discounting valuable links and hurting your own rankings.
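To put the pieces together, a complete disavow file is just a plain-text list with one entry per line; lines beginning with "#" are comments that Google ignores. A minimal example (the domains and URLs here are placeholders) might look like this:

# Contacted site owner on 10/1/2012, no response
badsite.com/questionableBackLink.htm
badsite.com/anotherPaidLink.htm
# This entire domain is a link farm
domain:spammylinkfarm.com

Upload the file through the disavow-links page in Webmaster Tools; each new upload replaces the previous list.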

Unsure about how or when to use this tool? Then don’t…Google has provided it as an advanced method of correcting problems with backlinks, and it is designed for SEO professionals.


Implementing Google Trusted Stores

by akagan 10. June 2012 12:45

Google announced this week that its new “Trusted Stores” program, which validates online merchants and monitors their websites for shipping reliability and customer service, is up and running. It seems likely that for participating merchants, having the “Google Trusted Store” badge appear on their website will increase conversions. Google is providing “free purchase protection” for customers who purchase from trusted stores.

Opting into the Trusted Stores program is relatively simple, and even easier if you already publish a product feed through Google’s Merchant program (formerly known as “Froogle” and later as “Google Base”). From the merchant sign-up page, merchants are prompted to enter some basic business information, as well as links to their website and to their customer-service and privacy-policy pages. Once configured, you can link the Trusted Stores account to a Google Merchant Center product feed, or upload a product feed in XML format. Linking to an existing merchant account reduces this process to a couple of clicks, and is only one of many reasons to set up a merchant account with Google.

After configuring the account, you need to add the provided JavaScript code to your website pages, and a second tracking script to your order confirmation page, similar to Google AdWords’ conversion code. Google requires this order information so that it can respond to customer service inquiries as part of the “purchase protection” it provides consumers.

Hurry Up and Wait

Once everything is configured, Google places your website in “monitoring mode” until it has collected enough data to validate your store. During this time the tracking codes are active but the Trusted Store logo will not appear…and that can take a while, depending on how many orders you process each month. Says Google:

Once you've finished integrating, we will test and validate the data passed via the JavaScript and feeds for a period of time (minimum 28 days and 1,000 orders) and inform you of any integration issues.

During this test period, your site will remain in “monitoring mode,” meaning that the Google Trusted Store badge will not be visible on your site; however, the program may display a module to customers who purchase from your site with an option to opt-in for a survey about their order experience with your store.

It appears that in this initial rollout of Trusted Stores, Google wants to be very careful about validating stores and not damaging its own reputation by backing fly-by-night or black-hat ventures that log thousands of quick sales and then shut down. For many small online vendors, reaching these minimum order levels could take months, so Trusted Stores may not be beneficial for every online merchant.

Once clear of the monitoring period, the logo will appear with a mouseover that pops up a merchant “quick view,” rating the store’s Shipping and Customer Service on a letter scale from “A” to “F” and showing the number of on-time orders, the average days to deliver, and the number of customer-service issues resolved and in what timeframe.

On checkout, customers are given the option to opt in to Google’s “Purchase Protection,” which shares the order information with Google and allows the customer to communicate with Google about the specific order and have Google intercede with the merchant. Merchants can monitor these issues and mark them as resolved through their Trusted Stores account, although they must work directly with the consumer to resolve each issue.

Google polls each customer opting in for Purchase Protection in the same way Amazon asks customers to rate sellers, with a short survey on delivery time, shipping costs and customer service issues. Customer survey responses are used for the ratings system.

Google provides some simple merchant guidelines for responding to customer service complaints, but it’s the merchant’s responsibility to manage the entire customer service process. Having policies in place to handle customer inquiries is an important step to reducing negative ratings and costs.

Search Partner Pro recommends joining Google’s Trusted Stores program, as it may prove a valuable tool for increasing conversions, and its implementation costs are minimal. Stores already publishing product feeds to Google may also see an added benefit in Google’s shopping rankings. We have already set up several clients in the program, and hope to have real-world data soon.


Fighting back against black-hat SEO techniques

by akagan 24. May 2012 17:03

Matt Cutts of Google acknowledges that Google can’t stop every black-hat SEO technique or prevent your competitors from spamming Google’s search results, and in the course of your optimization work you will often find bogus results ranking above your website for certain keywords.

Google relies on crowdsourcing to keep things honest, and spam is no exception: it depends on SEOs and website admins to identify search-result spam and report it through Webmaster Tools, and Cutts suggests using the Webmaster Forum to raise awareness of new black-hat techniques and hacks.

Title Spam is still happening

A good example of crowdsourced reporting occurred last year with the WordPress Title Hack. Compromised WordPress sites had their .htaccess files edited so that the title tag seen by Google’s bot was changed, while the page itself appeared untouched. This led to title tags in search results being hijacked:

[Screenshot: hijacked title tags appearing in Google search results]

As seen in this screencap, taken a year later, many site owners are still unaware that they’ve been hacked. What’s interesting is that Google is apparently not actively searching for title-tag spam, or if it is, it hasn’t been able to catch up with this particular hack.
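For background, hacks like this usually rely on user-agent cloaking: a rewrite rule in the hacked .htaccess checks who is requesting the page and serves a doctored version (with the spammy title) only to search engine crawlers, so the site looks normal in a browser. A simplified illustration of the idea in Apache's mod_rewrite syntax (not the actual exploit; the file names here are placeholders):

RewriteEngine On
# Only requests identifying themselves as Googlebot get the doctored page
RewriteCond %{HTTP_USER_AGENT} Googlebot [NC]
RewriteRule ^index\.php$ spam-title.php [L]

Because ordinary visitors never trigger the condition, the hack can sit undetected for a long time unless the owner checks Google's cached copy of the page or the search result listing itself.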

“We’re happy to get spam reports,” notes Cutts, who mentioned that a human anti-spam team at the Googleplex processes the reports that come in.


Yahoo officially rejects use of META Keywords tag

by Andrew Kagan 10. October 2009 11:00

Following up on Google's announcement a few weeks ago that it gives zero weight to the META Keywords tag in its search rankings, Yahoo (the #2 search engine) announced at SMX East this past week in NYC that it, too, gives no weight to this deprecated tag.
 
Of the three major search engine databases, only Inktomi (which was purchased by Yahoo in 2002) had continued to use this META tag in its search rankings, although its weight was vastly diminished over time.
 
In an open Q&A session during the conference, Cris Pierry, senior director of search at Yahoo, announced that support for the META Keywords tag in Yahoo's search engine in fact had ended several months ago.
 
Google has always maintained that it never supported the keywords tag, but made the official announcement recently to dispel lingering rumors about it. AltaVista officially dropped support of the tag in 2002.
 
Besides Yahoo, the Inktomi engine had provided rankings to MSN, AOL and others over the years. Microsoft shifted to its own Bing search engine earlier this year, and the #3 search provider has already announced that it grants no weight to the keywords tag.
 
Still, many SEOs cling to the belief that somewhere out there the keywords tag may still have relevance, but expending any effort on this tag is a waste of time at this point, and a waste of client money that could be better spent on multivariate testing of pages to improve search engine rank.

Another argument for keeping the keywords tag is that it provides additional keyword-variant matching that might not be incorporated into the content of the page...but this argument fails if the META data is disregarded entirely. A better approach is to identify the keyword variants with the greatest value and incorporate them into the page content...this is completely white-hat and, when written properly, will boost the relevance of the page.


Server Location matters for TLD and Local placement (Google)

by Andrew Kagan 8. June 2009 08:14

Google's Matt Cutts posted a video reply recently on whether a server's physical location affects search rank...surprise...it does!

Matt pointed out that in the early days of Google (ca. 2000) the only location signal used was the TLD (top-level domain) of a website, so if your URL ended in ".FR" it was assumed your website was aimed at France and would be considered more relevant for French searches than a website ending in ".UK".

The recent explosion of TLDs makes it harder to pinpoint relevance from the URL alone, so Google also uses the IP address (and parent netblock) of the server to identify its location...a server located in France will receive more weight for French queries than a server located elsewhere. How much this contributes to overall rank is debatable, but it is likely more important for local search results.

So if you have a web presence in multiple countries, it may make sense to locate servers in each of your markets...at the very least it can improve latency for local visitors (although again, that depends on the ISP). There are also providers that offer hosting on multiple netblocks in specific territories to achieve the same effect.


Amazon's 20-million-URL Sitemap

by Andrew Kagan 15. May 2009 09:57

The 18th annual International World Wide Web Conference, WWW 2009, was held this past April 20-24 in Madrid, Spain, and it has become the premier venue for publishing research and development on the evolution of our favorite medium.

A fascinating (if you're a web geek) paper called "Sitemaps: Above and Beyond the Crawl of Duty" was presented by Uri Schonfeld of UCLA and Narayanan Shivakumar of Google. The main thrust of the paper is that the traditional web crawlers employed by search engines are being overwhelmed by the number of new websites and pages appearing daily; by one count, there are more than 3 trillion (!) pages that need to be indexed, deduplicated, and tracked for inbound and outbound links.

The Sitemaps protocol is becoming more and more important to search engines as they try to prioritize and filter this mound of information. Part of the problem is the rise of large-scale CMS platforms, which generate pages dynamically whether or not there is any real content in them. The authors use the example of Amazon.com, where any given product can have dozens of subsidiary pages, such as reader reviews, excerpts, images, and specifications. Even when there is nothing to show, the link to a dynamically generated page still returns a page with no data in it, creating literally tens of millions of unique URLs at amazon.com that "dumb" crawlers must follow and index.

The Sitemap protocol defines an XML file format that not only lists all the URLs that should be indexed, but also indicates how important each page is, how often it changes, and when it was last updated. Search engines can use this file to index the important content quickly and skip what isn't there, improving both the accuracy and the time taken to index a website. Every site should have a sitemap, yet as of October 2008 it was estimated that only 35 million sitemaps had been published, covering a small fraction of the web's URLs.
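For reference, a minimal sitemap file following the sitemaps.org protocol looks something like this (the URL, date, and values below are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/products/widget.html</loc>
    <lastmod>2009-05-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>

A single sitemap file can hold up to 50,000 URLs; larger sites split their URLs across many sitemap files tied together by a sitemap index.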

Amazon makes a concerted effort to publish accurate sitemap data, as it dramatically reduces the time required to index new content. Even so, Amazon's robots.txt file lists more than 10,000 sitemap files, each holding between 20,000 and 50,000 URLs, for a total of more than 20 million URLs on amazon.com alone! The authors note that there is still a lot of content duplication and many null-content pages in there, but the number is staggeringly large. After monitoring URLs on another website, they also found that sitemap-driven crawling picked up new content significantly faster than the simple "discovery" crawling used when there is no sitemap file.
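The mechanics behind that 10,000-file list are simple: the sitemaps.org protocol lets a site declare multiple sitemaps in robots.txt, one Sitemap: directive per file (the URLs below are placeholders):

Sitemap: http://www.example.com/sitemaps/sitemap-products-0001.xml.gz
Sitemap: http://www.example.com/sitemaps/sitemap-products-0002.xml.gz
Sitemap: http://www.example.com/sitemaps/sitemap-reviews-0001.xml.gz

Crawlers that fetch robots.txt discover the sitemaps automatically, without the webmaster having to register each file by hand; the files can also be gzip-compressed, as in the example.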

We said before that every website should have a properly constructed sitemap, as it will improve the quality and accuracy of search engines as a whole. Beyond creating the sitemap, registering it with major search engines will provide valuable feedback for the webmaster on crawl and index rates, and provide insights into what the search engine "sees" when it looks at your website. Please create a sitemap for your website today, or just ask us if you need help!


Auto-Submitting Sitemaps to Google...Necessary?

by Andrew Kagan 1. May 2009 10:13

Google's Webmaster Tools gives webmasters a way to submit XML sitemaps to improve the accuracy of Google's index. Registering and maintaining an accurate sitemap (Google, Yahoo, and Microsoft all accept sitemap data) is important for proper indexing of your website's pages, and Google provides two methods for notifying it when the sitemap is updated: manually through the Google website, and "semi-automatically" by sending an HTTP request that signals Google to reload the sitemap.

Ping me when you're ready

The second method can be automated through server-side scripting, so that when content on a website or blog is updated, the sitemap file is updated as well, and the update request is sent to Google at the same time. In theory, this should provide rapid updating of Google's index to include the latest content on your website.
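As a rough sketch of what that automation can look like (here in Python, though any server-side language works), the script only needs to issue one HTTP GET against Google's sitemap-ping URL after the sitemap has been regenerated; the sitemap address below is a placeholder for your own:

import urllib.parse
import urllib.request

SITEMAP_URL = "http://www.example.com/sitemap.xml"  # placeholder: your sitemap's public URL

def ping_google(sitemap_url):
    """Ask Google to re-fetch the sitemap after it has been regenerated."""
    ping = "http://www.google.com/ping?sitemap=" + urllib.parse.quote(sitemap_url, safe="")
    with urllib.request.urlopen(ping) as response:
        # A 200 response only means the ping was received, not that the
        # sitemap has been crawled or that its URLs will be indexed.
        return response.status == 200

if __name__ == "__main__":
    if ping_google(SITEMAP_URL):
        print("Sitemap ping accepted by Google")

Hook this into whatever publishes new content (a CMS save hook or a nightly job), and throttle it so the ping fires no more than once per hour, for the reasons discussed below.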

Depending on a number of factors, Google will automatically reload your sitemap file without you specifically requesting it to do so. One factor is the content of the sitemap itself. Besides listing the URLs on your website, the sitemap file can also record when each URL was last updated and how frequently it changes. For example, if your homepage content changes every day, you can assign a frequency of "daily" to that URL, telling the search engine it should check that page every day.
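In the sitemap XML, those hints are just the optional <lastmod> and <changefreq> elements on each URL entry (example.com is a placeholder):

<url>
  <loc>http://www.example.com/</loc>
  <lastmod>2009-05-01</lastmod>
  <changefreq>daily</changefreq>
</url>

Keep in mind that <changefreq> is treated as a hint, not a command; crawlers may visit more or less often than you indicate.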

It should be noted that incorrect use (or "abuse") of a sitemap, such as indicating pages are new when the content hasn't changed, can cause problems if the search engine recrawls the page too many times without seeing any new data. Empirical data have shown that pages may be dropped from the search engine index under this scenario, and new pages added to this "unreliable" sitemap may be ignored or crawled more slowly.

It's a popularity contest

Another factor in sitemap reloading is link popularity. If a lot of websites are linking to particular pages on your website, search engine spiders will crawl those pages more often, and if the site is large, the sitemap will help prioritize which pages are crawled first.

To submit, or not to submit...

We have seen that once a sitemap is submitted and indexed by search engines, they will regularly come back and reload it looking for new URLs, whether you re-submit it or not. As your website's PageRank (on Google) and general link popularity grow, the sitemap is reloaded more and more frequently without any action on your part...so do you need to re-submit it at all, manually or automatically?

The answer is "it depends". Google itself warns webmasters not to resubmit sitemaps more than once per hour, probably because that's as fast as it's going to process the changes and redirect Googlebot to the URLs in the sitemap. If you are auto-submitting sitemaps more than once an hour, the "punishment" could range from the SE ignoring the subsequent re-submits, to something more dire...but no one really knows the consequences. It would probably be safer to resubmit sitemaps on a regular schedule, but we do not have any hard data about this at this time.

When you should re-submit a sitemap

So when should you re-submit a sitemap? The obvious answer is whenever your content changes, but no more than once an hour. Google does not yet provide an API for querying when it last loaded your sitemap, although you can see this data in Webmaster Tools. If you have some very timely news that the search engine really needs to know about, then resubmit the sitemap...it may not increase the crawl rate, but it may affect which URLs are crawled first.

The bottom line is that sitemaps are becoming increasingly important in helping search engines prioritize the content they crawl, so use them, don't abuse them, and help make the internet a better place!


Mission Control, we have liftoff!

by Andrew Kagan 29. April 2009 04:57

Launching the Searchpartner.pro website was an interesting experiment in measuring Google's crawl rate. The domain had been parked at a registrar for nearly a year, so Googlebot and other crawlers would have known about it but would not have found any content there. This may have been a negative factor in the subsequent crawl rate.

Before launching the website, all the appropriate actions were taken to ensure a rapid crawl and index rate:

  • Creation of all relevant pages, with informational pages of high quality and narrow focus
  • Implementation of appropriate META data
  • Validation of all links and HTML markup
  • Implementation of crawler support files such as robots.txt and an XML sitemap

Finally, a sitemap was registered with Google and the site was brought online...and then the waiting began.

  • It took more than two days (approx. 57 hours) after registering the sitemap for Google to actually parse it. Google found no errors.
  • It took three more days after parsing the sitemap for Googlebot to actually crawl the site.
  • More than 24 hours after crawling the site, Google had added only three pages to its index.

It seems the days of "launch today, indexed tomorrow" are behind us. Even for a website built according to Google's best practices, Google appears somewhat overwhelmed at this point, and crawl rates for new sites are being delayed.

Two unknowns:
  • Does leaving a domain parked for a long time negatively impact the initial crawl rate?
  • Does the TLD -- "COM", "NET", "PRO" -- affect the crawl rate? Does Google give precedence to well-regarded TLDs over new/marginal TLDs?

I will be testing these questions with additional sites in the near future.

