Search Engine Optimization and Marketing for E-commerce

Google's new Hummingbird algorithm...shall we dance?

by Andrew Kagan 27. September 2013 05:04

Forget about Penguin, Panda, Caffeine and all the other tweaks to the 200+ ranking factors Google uses in its search results algorithms...they are all dead. Google announced that it had quietly rolled out Hummingbird, a complete redesign of its core ranking algorithm, affirming what SEOs had already seen...shifts in search results across many categories.

Google has not revealed much about Hummingbird, except to say it is more adept at teasing out the meaning of entire sentences in search queries, instead of just splitting them into keywords and weighting each word to produce results. Much in the way Apple's Siri attempts to interpret spoken commands, Hummingbird will attempt to contextualize keywords within a complete sentence, to better estimate intent. The result should be to surface specific pages within a website instead of simply returning the website's home page in the results. The interim result is the infuriating shuffling of page rankings referred to as the "Google Dance."

While SEOs will be busy for the next couple of months measuring changes in SERPs and trying to interpret the weighting of various ranking factors, the core message is the same: focus on developing unique and relevant content, which will always rank well no matter the algorithm.


Google's next index update will have big impact on SEO

by Andrew Kagan 15. May 2013 08:14

Google's Matt Cutts revealed Monday (May 13) that the search engine is close to releasing the first major revision to its webspam filter, internally referred to as "Penguin" (to differentiate it from the previous "Panda" updates that primarily targeted content farms). "We're relatively close to deploying the next generation of Penguin," he said in a Webmaster Video, referring to it as "Penguin 2.0."

The new version, which targets "black hat web spam," is said to be "more comprehensive" and to "go a little bit deeper and have a little more impact" than the previous release. "Advertorials" are also a target of new "enforcement" of quality guidelines, an area that was a major target of Panda.

Cutts mentioned that Google would also try to better filter "spammy" keywords (terms traditionally targeted by spammers, such as "payday loans" and salacious terms targeted by pornography spam), and to prevent link juice from "flowing upstream" back to the websites seeking the traffic.

All of the updates above were referred to as being rolled out "in the next few months" (he also mentioned "Summer"), but some SEOs are already reporting changes in search rank as of today.

Cutts also mentioned development was underway of a "completely different system" that would analyze links with an eye to negating the benefit of linkspam.

Dialing Back Panda's Impact on Content Aggregators

One of the significant impacts of the Panda updates, which were rolled out about 18 months ago, was to severely cripple the ranking of content aggregators...websites that pulled content from other websites and tried to organize it topically. Many of these websites lost 90% or more of their traffic, in effect shutting them down. Cutts said Google was trying to separate legitimate aggregators from content farms, but acknowledged that classifying websites sitting on the "grey line" dividing the two camps is difficult.


Cleaning up bad links with Google’s Disavow Tool

by akagan 30. October 2012 11:05

Google’s announcement and release of its new link-disavow tool on October 16 is a welcome addition to the webmaster’s arsenal, allowing you to clean out inbound links (“backlinks”) from poorly ranked or negatively ranked websites. The disavow-links tool joins a similar tool released by Bing earlier in the year.

Inbound links (links from other websites to pages on your website) are normally a good thing, helping Google and other search engines gauge how much interest there is in your webpages. Widespread abuse of this metric, however, primarily through the use of link-farms and paid-link schemes, has led to significant and frequent changes in Google’s ranking algorithms, creating churn in search engine results, especially for webpages with thousands of inbound links. The problem has been compounded by “grey hat” SEOs manipulating search results by deliberately pointing negatively weighted links at their competitors, also known as “link spam.”

Google already alerts webmasters (through its Webmaster Tools website) when there are questionable, negative, or “unnatural” links pointing at their website, which will probably hurt the affected pages’ search rank. Until now, however, there wasn’t anything you could do to remove the links, assuming you weren’t responsible for them.

Google’s new disavow-link tool is an effort to address negative links, by allowing users of Webmaster Tools to upload a text file with a list of links to ignore. As most SEOs will realize, if Google is alerting you to “unnatural” links on your website, then you’re already likely being penalized for those links. The best defense is to aggressively monitor inbound links using Webmaster Tools, and actively keep track of and disavow links using the new tool. It should be noted that Google does not guarantee it will take disavow requests into consideration in all situations.

Google’s Spam Czar Matt Cutts posted a video explaining the process in detail. The text file has a simple format of one URL per line:

http://badsite.com/questionableBackLink.htm

Fortunately, you can also list entire websites using the shorthand format:

domain:badsite.com
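
Putting the two together, a complete disavow file might look like this (the domains and paths below are purely illustrative, and lines beginning with “#” are treated as comments):

# Links we asked the site owner to remove, with no response
http://badsite.com/questionableBackLink.htm
http://badsite.com/anotherBadLink.htm

# Disavow every link from this domain
domain:badsite.com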

Each time you add to this list, you must upload the complete list again. Cutts cautions that extra care should be used with the disavow tool, to avoid inadvertently disavowing valuable links and hurting your own pages’ rankings.

Unsure about how or when to use this tool? Then don’t…Google has provided it as an advanced method of correcting problems with backlinks, and it is designed for SEO professionals.


Fighting back against black-hat SEO techniques

by akagan 24. May 2012 17:03

Matt Cutts of Google acknowledges that Google can’t stop every black hat SEO technique your competitors use to spam Google’s search results, and in the course of your optimization you will often find bogus results above your website for certain keywords.

Google relies on crowdsourcing to keep things honest, and spam is no exception: it depends on SEOs and website admins to identify search-result spam and report it using Webmaster Tools, and Cutts suggests using the Webmaster Forum to raise awareness of new black hat techniques and hacks.

Title Spam is still happening

A good example of crowdsourced reporting occurred last year with the WordPress title hack. Unwitting WordPress users had their .htaccess files edited, changing the title tag served to Google’s bot without affecting the page itself. This led to title tags in search results being hijacked:
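As a rough illustration of the underlying technique (not the exact exploit used in this hack), a compromised .htaccess file can use Apache’s mod_rewrite to cloak content, serving a different page only when Googlebot requests it:

# Illustrative only: requests from Googlebot are rewritten to an attacker's page,
# while normal visitors (and the site owner) still see the original content.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Googlebot [NC]
RewriteRule ^$ /hijacked-title-page.html [L]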

[Screencap: search results showing hijacked title tags]

As seen from this screencap taken a year later, many site owners are unaware that they’ve been hacked. What’s interesting is that Google is not actively searching for title tag spam, or if they are, they haven’t been able to catch up to this particular hack.

“We’re happy to get spam reports,” notes Cutts, and he mentioned that there is a “human” team of anti-spammers at the Googleplex that processes spam reports.


Google Analytics to add Real-Time Updating

by akagan 17. November 2011 14:09

Google announced that over the next few weeks it will be rolling out real-time updating to Analytics users, along with more sophisticated analysis tools. The new reporting and analytics tools will be rolled out to all users as “v5” of the popular free web analytics tool.

The new features had been in beta for the past several months and include a revised reporting interface as well as several new tools. Perhaps most important is the new “Real-Time” reporting, which claims to update analytics data within seconds of the page being viewed.

Traditionally, Analytics users had to wait up to several hours to see traffic on their websites reflected in the Analytics data…akin to working from delayed stock market quotes when trying to make investment decisions. This delay had been the greatest complaint among Analytics users, leading many to move “up” to more powerful, fee-based analytics tools with lower reporting latency. Real-time reporting should bring more users back to Google, which relies on Analytics data for much of its paid-advertising metrics.

Better conversion tracking

The new reporting tools include “Multi-Channel Funnels”, which Google claims will improve conversion tracking by displaying user site-activity up to 30 days prior to purchasing or conversion activity. Website owners will be able to better visualize which marketing channels are contributing to a conversion, whereas prior to this update they could only credit the last marketing interaction with the conversion.

Analytics also has new visualization tools to better capture visitors’ interactions with your website. The new “Flow Visualization” tool overlays multiple user paths simultaneously, providing better analysis of paths through the website that end at or start from the same point.

Mobile Mania with better device tracking

Google beefed up Mobile Reporting with more detailed information on mobile visitors, including which devices and mobile platforms they’re using. This information will be very valuable to website owners developing mobile versions of their websites, as well as planning and targeting advertising at specific mobile devices.

Users will be able to revert to the previous version of the Analytics interface for several months, giving them time to migrate custom reports to the new interface.


Google Expands Content-Farm Filtering Internationally

by Andrew Kagan 11. April 2011 12:54

Google announced today that it had expanded the content filters it put in place this past February in the U.S. to all its English-language search engines internationally. The algorithmic change (code named “Panda”) was an effort to improve search engine results by filtering out “low quality” pages and websites, typically content farms and link farms that tried to boost relevance by generating millions of links to website pages. While the move will likely improve the ranking of white-hat SEO websites, it does create new challenges for optimizing webpages.

Google has estimated that the original “tweak” to its ranking algorithm affected about 12% of all search queries, and its implementation led to a new “Google Dance” as keyword rankings oscillated wildly before settling down. Google expects the additional rollout will broadly affect rankings for many websites.

Trying to leverage link popularity

Since the Caffeine update early last year, Google has steadily expanded its attempts to incorporate “link popularity” into its rankings, in an effort to make its results more timely, even reflecting up-to-the-minute changes. An important technique for this was to monitor the number of “inbound” links to a webpage (other websites linking to a given webpage). This would provide a “relevance boost” if many people appeared to be interested in a news event, story, or blog post of import.

As Google came to give more weight to referral links, the balance slowly shifted away from the content on the webpage and toward its popularity. Thus, during the Caffeine rollout, many webpages with high relevance and solid content suddenly dropped in search rankings against websites with many inbound links.

The problem got worse when people began abusing this algorithmic sensitivity by “seeding” millions of links all over the web to a particular webpage, forcing it to the top of the search results for a particular keyword. These auto-generated pages generally had little information on them other than the keywords and the link they were trying to promote.

It has proven particularly difficult for Google to weed out these forced links from natural ones, as was witnessed in the JC Penney linkbait scandal over the Christmas holidays last year. The problems are magnified as we get into the “long tail” of search results, where very specific search phrases return fewer results and there is more opportunity to manipulate those results on a page-by-page basis.

The new changes are designed to take link popularity into account, but devalue links coming from low-quality pages. Unfortunately for the rest of the universe, we don’t know exactly how Google defines “pages of low quality”, but we certainly know what Google’s goals are in estimating page quality…relevant content, and more than just a paragraph of bogus copy and a link.

User Data Further Contributes to Rankings

Google also revealed that it is now incorporating user actions to block sites into its ranking calculations. Google had initially used blocked-site data (reported back to Google by the Google Toolbar extension) to corroborate its own data, but the correlation was so high (>84%) that the search engine will now use this user data as a secondary factor (also known as a “signal”) in search ranking.

As reported by Vanessa Fox, a contributing editor at SearchEngineLand, Google’s Amit Singhal cautioned that “high-quality” websites should not be affected by the algorithm changes, but encouraged SEOs to use Google’s Webmaster Forums to alert Google to any ranking problems created by the rollout of the new algorithm.

A renewed focus on content

Google’s judgment of link quality is likely to affect the relevance of any pages with limited content. For SEOs and their clients, it underscores more heavily than ever the need to develop quality, relevant content for a website’s particular market focus, and to do a better job of isolating long-tail keywords and targeting content specifically for them.

As Google juggles page relevance with popularity, rankings will continue to shift. Knowing your most valuable keywords and targeting them in your webpages is the only strategy that makes sense moving forward.


JC Penney Linkbait Scam Exposed, Penalized by Google

by Andrew Kagan 14. February 2011 09:08

The New York Times reported on Sunday that J.C. Penney had been exposed for implementing link bait on an unprecedented scale, skewing search results and leading Google to levy severe penalties against the company's rankings for important keywords.

The article, titled "The Dirty Little Secrets of Search," detailed how Penney had enjoyed first-position results for highly competitive keywords like "dresses", "bedding", and "area rugs", and more valuable terms like "skinny jeans", "home decor" and "comforter sets". What was revealed was a widespread campaign of seeding thousands of links to J.C. Penney on largely irrelevant, unrelated, even obscure websites. This process, commonly known as "link-farming", is a well-known black-hat technique for gaming Google's search results, even though Google publicly announced several years ago that it was preventing this technique from skewing its search results.

Apparently, not so...while Penney feigned ignorance about the use of link-farming, it summarily fired its search company, and Google proceeded to take punitive action by de-ranking the company for various keywords.

While Google insists that external links matter less to a web page's SERPs than content, they are an inextricable signal that Google can't ignore, especially as a gauge of momentary popularity. While Google claims to monitor SERPs for evidence of link-farming, it is a larger problem to identify social-media link abuses, as these are critical to Google's "real-time search" rankings, which take into account social media links from Facebook, Twitter, et al.

What's chilling for most white-hat SEOs is that black-hat techniques are alive and well, and put white hats at a competitive disadvantage. A black hat interviewed for the article implied that "S.E.O. is a game, and if you’re not paying black hats, you are losing to rivals with fewer compunctions." Even Matt Cutts, Google's top search-spam cop, noted that it's impossible for Google to police every link scam, although they do red-flag suspicious things like rapid growth of inbound links. It shows, however, that any proactive action on Google's part requires manual intervention by an employee; there is no automated process in place yet to deal with this type of exploit.

 


SEO Improvements in ASP.NET 4

by Andrew Kagan 15. November 2010 07:47

Microsoft’s latest iteration of its server-side development framework, .NET Framework 4, has new features designed specifically to address SEO shortcomings in previous versions of the Framework. In a recent whitepaper on Microsoft’s ASP.NET website, a number of these features were detailed:

Permanent (“301”) Redirects

Permanent redirects send a visitor from one page to another, but their function in SEO is critical because they tell search engine crawlers requesting the old page to transfer all of its accumulated rank to the new page and simultaneously drop the old page from the search engine’s results.

Returning a “301” response in the header tells the requestor that this redirection is permanent (a “302” response tells the requestor the move is only temporary, which does not pass rank and so does not help SEO).

Prior to Framework 4, permanent “301” redirects were accomplished in one of two ways: either setting the status code and headers on the response object before it’s sent back to the requestor (using “Response.StatusCode” and “Response.AddHeader”), or using IIS 7’s optional URLRewrite module (prior to IIS 7, rewriting was usually performed by a 3rd-party ISAPI module). Now permanent redirects can be handled in a single line of code:

Response.RedirectPermanent("/newpath/foroldcontent.aspx");

If your website uses dynamic pages built from a database, this new syntax makes it incredibly simple to manage site upgrades and URL changes. If you’re trying to permanently redirect static content to new pages, URLRewrite maps are still your best friend (now directly supported by .NET 4’s routing engine).
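As a minimal sketch of the dynamic case (the page class and lookup method below are hypothetical, not from Microsoft’s whitepaper), a code-behind can issue the permanent redirect when an old URL is requested:

using System;
using System.Web.UI;

// Hypothetical code-behind for a legacy product page that has moved.
public partial class OldProduct : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Look up the replacement URL for the requested legacy id (your own data access).
        string newUrl = LookupNewUrl(Request.QueryString["id"]);
        if (!string.IsNullOrEmpty(newUrl))
        {
            // Emits a "301 Moved Permanently" status with a Location header and ends the response.
            Response.RedirectPermanent(newUrl);
        }
    }

    // Placeholder for a database lookup; returns null when no mapping exists.
    private static string LookupNewUrl(string legacyId)
    {
        return legacyId == "42" ? "/products/blue-widget.aspx" : null;
    }
}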

Setting Keywords and Description META tags

While the keywords META tag has pretty much fallen out of use on major search engines, the META description tag is still one of the most important elements of SEO, since its content is both used by search engines and displayed in search results. It plays a critical role in click-through rates and needs to correspond closely with the page content. It also needs to be unique to each page it appears on, or it will negatively impact search rank.

.NET Framework 4 provides two new ways to add META tags at runtime (before the page has been sent to the requestor). You can now add them directly to the “@Page” directive at the top of every ASPX page, along with the page TITLE:

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" Keywords="These, are, my, keywords"  
Description="This is a description" %> 

When the page is rendered and sent back to the browser, these elements will be correctly rendered:

<head id="Head1" runat="server"> 
  <title>Untitled Page</title> 
  <meta name="keywords" content="These, are, my, keywords" /> 
  <meta name="description" content="This is the description of my page" /> 
</head> 

Prior to Framework 4, programmers commonly inserted this content manually using placeholders, or overloaded the “header” declaration in the page object. Now the page object itself has been extended with specific properties, making for a one-line solution:

Page.MetaKeywords = "My, keywords";
Page.MetaDescription = "My Description";

For dynamically generated content, this is a tighter and less error-prone method, and allows for a complete separation of programming and page design.
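A minimal sketch of that approach, with a stubbed-in product record standing in for your own data access (the names here are illustrative, not from the whitepaper):

using System;
using System.Web.UI;

// Hypothetical code-behind for a database-driven product page.
public partial class ProductDetail : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Pull the current record from your own data layer (stubbed here).
        string productName = "Blue Widget";
        string productSummary = "A sturdy blue widget for all your widget needs.";

        Page.Title = productName;                  // unique, descriptive title per page
        Page.MetaDescription = productSummary;     // unique description, shown in search results
        Page.MetaKeywords = "widget, blue widget"; // optional; largely ignored by major engines
    }
}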

Improved Browser Capability Providers

Another frustration for .NET developers has been browser compatibility. If you’re designing pages to display properly on mobile devices as well as desktop computers, the “.browser” file tells ASP.NET how to render pages for a specific browser. This process has been streamlined in .NET 4 using the new, cacheable and extensible HttpCapabilitiesProvider.
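A minimal sketch of a custom provider, assuming the usual .NET 4 pattern of deriving from HttpCapabilitiesDefaultProvider and registering it at application startup (the “supportsTouch” capability name is made up for illustration):

using System.Web;
using System.Web.Configuration;

// Adds a custom capability flag on top of the default browser detection.
public class CustomBrowserProvider : HttpCapabilitiesDefaultProvider
{
    public override HttpBrowserCapabilities GetBrowserCapabilities(HttpRequest request)
    {
        HttpBrowserCapabilities caps = base.GetBrowserCapabilities(request);
        // Hypothetical flag your pages could check before choosing a mobile layout.
        caps.Capabilities["supportsTouch"] =
            (request.UserAgent != null && request.UserAgent.Contains("Mobile")) ? "true" : "false";
        return caps;
    }
}

// Registered once, typically in Global.asax's Application_Start:
//   HttpCapabilitiesBase.BrowserCapabilitiesProvider = new CustomBrowserProvider();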

Replacing URLRewrite with Routing

Most web developers have a love-hate relationship with Microsoft’s URLRewrite Module, which allows rewriting and redirection rules to be applied before pages are compiled. While IIS 7 provided a simple rewrite-rule generator, it was accessed through the IIS Admin interface, putting it out of reach for developers on shared hosting environments.

Routing support existed in .NET 3.5 SP1, but it has been simplified and improved in .NET 4 with the following features:

  • The PageRouteHandler class, which is a simple HTTP handler that you use when you define routes. The class passes data to the page that the request is routed to.
  • The new properties HttpRequest.RequestContext and Page.RouteData (which is a proxy for the HttpRequest.RequestContext.RouteData object). These properties make it easier to access information that is passed from the route.
  • The following new expression builders, which are defined in System.Web.Compilation.RouteUrlExpressionBuilder and System.Web.Compilation.RouteValueExpressionBuilder:
    • RouteUrl, which provides a simple way to create a URL that corresponds to a route URL within an ASP.NET server control.
    • RouteValue, which provides a simple way to extract information from the RouteContext object.
  • The RouteParameter class, which makes it easier to pass data contained in a RouteContext object to a query for a data source control (similar to FormParameter).

With the new “MapPageRoute” method (which creates the PageRouteHandler for you), registering a route comes back down to a one-line statement:

RouteTable.Routes.MapPageRoute("SearchRoute", "search/{searchterm}", "~/search.aspx");

This example maps an SEO-friendly search URL (e.g. “mySite.com/search/widget”) to a physical page and captures the search term as a route parameter. On your search page, you can read the requested term in a single line as well, either in your code-behind:

string searchterm = Page.RouteData.Values["searchterm"] as string;
Label1.Text = searchterm;

or in the page directly:

<asp:Label ID="Label1" runat="server" Text="<%$RouteValue:SearchTerm%>" />

With the new routing syntax, friendly URLs have never been easier!


Google's 200 Search Engine Ranking Factors

by Andrew Kagan 9. October 2010 06:57

Google has mentioned in the past that it has 200 separate ranking factors for evaluating the relevance of a web page to a keyword (the "Google Algorithm"). But for SEOs, trying to reverse-engineer those ranking factors is like trying to peek at the man behind the curtain with tweezers...through multivariate testing and other empirical methods, we can make changes to websites and monitor the search results, but only broadly guess at which changes carry the most weight or have any effect at all.

Darren Revell of Recruitwise Technology threw down a challenge on the Search Engine Land discussion group on LinkedIn, challenging its members to identify all 200 ranking factors. A heated discussion followed, which identified most, if not all, commonly accepted factors. After a week of discussion, Darren summarized the results:

1 Search terms in the HTML title tag? 

2 Search terms in the HTML body copy? 

3 Search terms in bold typeface? 

4 Search terms in header tags? 

5 Search term in anchor text in links to a page? 

6 PageRank of a page (the actual PageRank, not the toolbar PageRank)? 

7 The PageRank of the entire domain? 

8 Quality of link partners? 

9 Type of back links that bring anchor text juice for search terms? 

10 The speed of the web site 

11 Search terms in the URL - main URL and page URLs ? 

12 Search term density through body copy (About 3 - 5%?) ? 

13 Fresh content ? 

14 Good internal linking structure ? 

15 Age of the domain ? 

16 Links from good directories ? 

17 Image names ? 

18 Image ALTs ? 

19 Reputable hosting company 

20 Diversity of link partners 

21 Geo located results 

22 Rate of new inbound links to your site ? 

23 Relevance of inbound links - subject-specific relationship with target page

Negative factors too:

24 Pages 404s, 414s etc ? 

25 Duplicate title/keywords 

26 Participation in link schemes 

27 Search Terms the First Words of the Title Tag ? 

28 Search Terms in the Root Domain Name (searchterm.com) ? 

29 Search Terms in the Page Name URL (e.g. acme.co.uk/folder/searchterm.html) ? 

30 Search Terms in the Page Folder URL (e.g. acme.co.uk/searchterm/page.html) ? 

31 Search Terms in the First Words in the H1 Tag ? 

32 Search Terms in other Headline (H) Tags ? 

33 Search Terms in Internal Link Anchor Text on the Page ? 

34 Search Terms in External Link Anchor Text on the Page ? 

35 Search Terms in the First 50-100 Words in HTML on the Page 

36 Search Terms in the Page’s Query Parameters (e.g. acme.co.uk/page.html?searchterm) ? 

37 Search Terms in the Meta Description Tag 

38 Social graph fans 

39 Social graph fans earned impressions 

40 Social graph fans earned impressions with links 

41 Secondary fan connection citations earned impressions 

42 Otherme citation (social media linking) ? 

43 Rich snippet geo-reference ?

44 Rich snippet UGC rating ? 

45 Placement of backlinks in page ? 

46 Quantity of backlinks ? 

47 Quantity of linking root domains ? 

48 Quality of linking root domains ? 

49 Link distance from higher authority sites ? 

50 Outgoing followed links from back linked pages

51 Country specific domain ? 

52 Domain classification of linking domains ? 

53 Domain sculpting 

54 Redirect permanent (not 302) ? 

55 Page accessible ? 

56 Sitemap_Index/Sitemap limit 10K ?

57 Sitemap folder geotargeting 

58 Index/Follow ?

And the more controversial:

59 Bounce rate (personalization) ? 

60 Visits (personalization) ? 

61 Visits (scraped from Alexa) ? 

62 Semantic relevance (synonym for matching term) ? 

63 Reputation/advocacy (positive chatter) 

64 URL length ? 

65 Frequency of Updates ? 

66 Domain Name Extension / Top Level Domain (TLD) ?

Some negative factors:

67 Link to a bad neighborhood ? 

68 Redirect thru refresh metatags ? 

69 Poison words ? 

70 Keyword stuffing threshold ? 

71 Keyword dilution ? 

72 Dynamic Pages ? 

73 Use of Frames ? 

74 Gateway, doorway page 

75 Keyword saturation (Saturation levels do play a crucial role in the ranking of your pages) ? 

76 Traffic buying (adverse effect)

77 Link buying (adverse effect)

78 Over optimization - There is a penalty for over-compliance with well-established, accepted web optimization practices. ? 

79 Excessive cross linking (adverse effect) ?

80 Linking between all the domains hosted on same IP (This one may be debatable) ? 

81 Hidden content (adverse effect) ?

82 Cloaking (adverse effect) ?

83 Excessive use of graphics (adverse effect) ?

84 JavaScript (adverse effect) ?

85 Comment spamming (adverse effect) ?

86 Title attribute of link ? 

87 Sitemaps: XML, Text, HTML (an XML sitemap aids the crawler but doesn’t help rankings) ?

88 W3C compliant html coding 

89 Duplicate content on site (adverse effect) ?

90 Duplicate tags on site (adverse effect) ?

91 Page file size/load time ? 

92 Number of links on page (too many have an adverse effect) ?

93 Video header and descriptions ? 

94 Video sitemap 

95 Quality content 

96 <noscript> tags (even though I don't know anyone who doesn't have JavaScript enabled) ? 

97 IP address range (many are blacklisted for spamming) ? 

98 Whether the site has been previously de-indexed due to malpractice 

99 Relevance of title tag to page content ? 

100 Relevance of META Description to page content

101 Code-to-text ratio 

102 Canonical URL 

103 Directory depth 

104 Querystring param count 

105 An active AdSense campaign. We have noticed our pages rank higher when we are also running an active AdSense campaign.

106 Server calls, Images, JavaScript, Database calls (affects speed of website) 

107 keyword spamming 

108 multiple domains to same website 

109 link structure - do you link to '/', 'index.htm' 

110 SERPs 

111 Quality & Number of Blogs 

112 Authority ranking of subject matter 

113 Link attributes - like rel=nofollow 

114 Use of WebmasterTools 

115 Popularity (no one seems to have mentioned this one before?) 

116 Brand recognition 

117 Linear Distribution of Search Terms on the html 

118 Varied IP addresses for inbound links, rather than links all from the same server, which carry lower link juice

119 Standard Deviation of Search Terms in the Population of pages containing Search Terms 

120 URL shortener 

121 Snippet 

122 Microformats 

123 Mobile accessibility 

124 Synonyms , language, query terms 

125 Page category 

126 SERP click-through rate. Say your website ranked #1 for the "bike shoes" keyword phrase but 60% of the traffic went to the website in the #2 spot; I guarantee you Google will notice and make adjustments.

127 Relevance (to the searched phrase)

128 Comprehensiveness: word count and pages of content on the site's top keyword, along with relevantly named images, videos, and news for that keyword

129 Freshness: latest page updates with an accurate sitemap so Googlebot re-checks your site frequently.

 

Domain / server factors 

130 Domain age; 

131 Length of domain registration; 

132 Domain registration information hidden/anonymous; 

133 Site top level domain (geographical focus, e.g. com versus co.uk); 

134 Site top level domain (e.g. .com versus .info); 

135 Sub domain or root domain? 

136 Domain past records (how often it changed IP); 

137 Domain past owners (how often the owner was changed) 

138 Keywords in the domain; 

139 Domain IP; 

140 Domain IP neighbors; 

141 Domain external mentions (non-linked) 

142 Geo-targeting settings in Google Webmaster Tools 

143 Domain registration with Google Webmaster Tools; 

144 Domain presence in Google News; 

145 Domain presence in Google Blog Search; 

146 Use of the domain in Google Analytics; 

147 Server geographical location; 

148 Server reliability / uptime 

 

Website elements: 

149 Page internal popularity (how many internal links it has); 

150 Page external popularity (how many external links it has relevant to other pages of this site); 

 

Visits: 

151 Number of visits; 

152 Visitors’ demographics; 

153 Bounce rate; 

154 Visitors’ browsing habits (which other sites they tend to visit) 

155 Visiting trends and patterns (like sudden spikes in incoming traffic)

156 How often the listing is clicked within the SERPs (relevant to other listings) 

157 Use of Google Checkout on your site

158 Domain name is one of the important factors that give good page ranking in that particular sector. 

159 Compression for size by eliminating white space, using shorthand notation, and combining multiple CSS files where appropriate. GZIP can be used. 

160 You can use CSS sprites to help to consolidate decorative images. 

161 No redirection to other URLs on the same server through Flash banner images

[Factors marked with a "?" are subject to dispute as to their importance]

The sheer number of identified factors is overwhelming at first, but it also shows how interrelated many ranking factors are. Improving your title tags, for example, will address many individual ranking factors at once, as would attracting inbound links from websites with good reputations and content relevant to your own. Publishing content for syndication, when done properly, will create many positive factors as well. And of course there are negative factors to avoid, which would reduce relevance.

All in all we came up with approximately 80% of Google's ranking factors...but like the recipe for Coca Cola, the rest are some of the search industry's most tightly guarded secrets. And one can assume many unidentified factors are weightings between groups of individual factors. Will we ever know Google's secrets? All we can continue to do is test and measure, and slowly build up the empirical evidence that will point to the rest.

Thanks everyone for the inspiration and collaboration...knowledge is search-rank power!


SEO Advice at Google I/O reveals webmaster weaknesses

by Andrew Kagan 8. June 2010 10:48

 

Google’s Matt Cutts posted an hour-long video from Google I/O 2010, where he and three SEO experts performed live reviews of websites submitted by webmasters. What was striking was how poorly many websites had been optimized, given that the basic rules are public and easy to meet.

The first website, “Phoenician Stone”, a manufacturer of stone mantels, tiles, etc., had no text on the homepage at all, with a poorly descriptive two-word title tag (“Phonecian Stone”). The only significant amount of text was in the meta keywords tag, about which Matt made sure to mention “Google doesn’t index that text”. He went on to emphasize, “We [Google] don’t trust the meta keywords tag”.

SEO Tips To Take To Heart

Tip #1—Put text on your page

Tip #2—Think about what users will type when searching for your services, and put those words on the page.

Cutts recommended using any free keyword research tool to find actual search phrases people use on search engines.

The second example was “rodsbot.com”…as Matt noted, the domain name is not particularly descriptive or intuitive (the site displays weird Google Earth images). Like the first website, there was virtually no text on the homepage, but since this was a “community” site where individuals posted images, an easy way to generate lots of search-relevant text would be to include users’ reviews and comments. “Why do the work when you can get someone else to do the work for you, right?” mused Cutts, rhetorically. Another point Cutts made was that the owner of the website had 6 other websites, and clearly wasn’t devoting enough attention to each site for any of them to rank well.

What’s in a (domain) name?

The next site profiled was a news site about Google’s Android operating system called “androidandme.com”. The homepage was top-loaded with ads and a large logo area, to the point that most of the actual content was pushed below the bottom of the screen. While search engines may return the website because the content is there, the drop-out rate on the page will probably be higher than it should be, because the content is too hard to find. On the positive side, the website was running on the latest version of WordPress, and was configured to use descriptive names in the URLs.

But how do you differentiate your website from others covering the same industry or products? Cutts pointed out that branding “outside the box” would help differentiate your website from the rest of the pack, using as an example the mobile phone website “Boy Genius Report”…the name has nothing to do with mobile phones per se, but it does have a lot of resonance with gadget-hungry geeks and nerds, and it certainly “stands out from the crowd”.

Mal-de-Ware

One of the sites submitted for review actually had been hacked with malware scripts, and the owner evidently was unaware of it. Vanessa Fox pointed out that if you register your website with Google’s Webmaster Central (and who doesn’t?), you will be notified of any malware detected on your website. Panelist Greg Grothaus added that Google has a new tool called “Skipfish” that lets you test your development website for security problems before you’ve released your code to your live site.

The Mayday Update

Cutts admitted that the radical shift in search rankings around the beginning of May was a deliberate algorithmic change that is here to stay. The ranking shift caught many SEOs off guard and caused much misery as sites with long-standing rankings saw huge shifts in their SERPs. Google claims this update will return “high quality sites” above sites it evaluates as having lower quality.

TLDs Don’t Matter (Maybe)

During the final open Q&A that closed the session, it was asked if the TLD (what your website ends with—.COM, .INFO, etc.) affected search rankings. Cutts was emphatic that there was no ranking preference based on TLD, although he added parenthetically that other websites tend to aggregate links to .GOV, .EDU, etc. Certain TLDs may have a bad reputation that might impact on click-through rates, so I would still recommend staying away from .BIZ, .INFO and other spam-centric TLDs.

The H1 Tag…Still No Consensus

Many SEOs insist that you have to use the <h1> tag on your pages for the headline content…Cutts mentioned that Google will index the page regardless, and that what’s more important is that the page validates as HTML. He did not say that leaving out the H1 tag will penalize your search rank, so use it if you want, but don’t obsess over it!

 

