Tuesday, March 11, 2014
Is a javascript widget that links to advertiser news feed a paid link?
from Google SEO News and Discussion WebmasterWorld http://ift.tt/1nEEpYI
12 Ways to Increase Traffic From Google Without Building Links
Posted by Cyrus-Shepard
Link building is hard, but it's not the only way to make traffic gains in Google's search results.
When I first started SEO, building links wasn't my strong suit. Writing outreach emails terrified me, and I had little experience creating killer content. Instead, I focused on the easy wins.
While off-page factors like links typically weigh more heavily than on-page efforts in Google's search results, SEOs today have a number of levers to pull in order to gain increased search traffic without ever building a link.
For experienced SEOs, many of these are established practices, but even the most optimized sites can improve in at least one or more of these areas.
1. In-depth articles
According to the MozCast Feature Graph, 6% of Google search results contain In-depth articles. While this doesn't seem like a huge number, the articles that qualify can see a significant increase in traffic. Anecdotally, we've heard reports of traffic increasing up to 10% after inclusion.
By adding a few signals to your HTML, your high quality content could qualify to appear. The markup suggested by Google includes:
- Schema.org Article markup (NewsArticle works too)
- Google+ Authorship
- Pagination and canonicalization best practices
- Logo markup
- First click free (for paywall content)
While Google seems to favor authoritative news sites for In-depth Article inclusion, most sites that might qualify don't have the proper semantic markup implemented.
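As a sketch of what those signals can look like, here is a hypothetical schema.org Article object built in Python and serialized as JSON-LD; every name and URL below is a placeholder, and JSON-LD is just one way to express the markup (microdata works too):

```python
import json

# Hypothetical Article markup; field values are placeholders, not taken
# from any real page.
article = {
    "@context": "http://schema.org",
    "@type": "NewsArticle",  # plain "Article" also qualifies
    "headline": "Example In-Depth Story",
    "datePublished": "2014-03-11",
    "author": {"@type": "Person", "name": "Jane Author"},
    "publisher": {
        "@type": "Organization",
        "name": "Example News",
        "logo": {"@type": "ImageObject", "url": "http://example.com/logo.png"},
    },
}

# Embed as a JSON-LD script block in the page <head>.
snippet = '<script type="application/ld+json">%s</script>' % json.dumps(article)
print(snippet)
```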
2. Improving user satisfaction
Can you improve your Google rankings by improving the onsite experience of your visitors?
In many ways the answer is "yes," and the experience of several SEOs hints that the effect may be larger than we realize.
We know that Google's Panda algorithm punishes "low-quality" websites. We also know that Google likely measures satisfaction as users click on search results.
"⦠Google could see how satisfied users were. ⦠The best sign of their happiness was the "long click" â this occurred when someone went to a search result, ideally the top one, and did not return."
- Steven Levy, from his excellent book In the Plex
The idea is called pogosticking, or return-to-SERP, and if you can reduce it by keeping satisfied visitors on your site (or at least not returning to Google to look for the answer somewhere else) many SEOs believe Google will reward you with higher positions in search results.
Tim Grice of Branded3 reports a saying they have at their SEO agency:
"If you have enough links to be in the top 5, you have enough links to be position 1â³
While we have no direct evidence of pogosticking in Google's search results, we've seen enough patents, interviews and analysis to believe it's possibly one of the most underutilized techniques in SEO today.
3. Rich snippets from structured data
Google constantly expands the types of rich snippets it shows in search results, including events, songs, videos and breadcrumbs.
The first time I heard about structured data was from a presentation by Matthew Brown at MozCon in 2011. Matthew now works at Moz, and I'm happy to glean from his expertise. His Schema 101 presentation below is well worth studying.
If you're just getting started, check out this amazingly helpful Guide to Generating Rich Snippets from the folks at SEOgadget.
Two of our favorite types of markup for increasing clicks are videos and authorship, so we'll discuss each below.
4. Video optimization
Pixel for pixel, video snippets capture more search real estate than any other type of rich snippet, even more than authorship photos. Studies show our eyes go straight to them.
Eye-Tracking Google SERPs - 5 Tales of Pizza
Unlike author photos, video snippets are often easier to display and don't require connecting a Google+ account.
Video snippets generally require creating a video XML sitemap and adding schema.org video markup.
To simplify things, many third party services will take care of the technical details for you. Here at Moz we use Wistia, which creates a sitemap and adds schema.org markup automatically.
Pro tip: Both schema.org and XML sitemaps allow you to define the video thumbnail that appears in search results. As the thumbnail highly influences clicks, choose wisely.
Recommended reading: Getting Video Results in Google
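As a rough illustration of the sitemap piece, here is a Python sketch that builds a single-entry video XML sitemap using Google's video extension namespace; the URLs and titles are made up for the example:

```python
import xml.etree.ElementTree as ET

SM = "http://www.sitemaps.org/schemas/sitemap/0.9"
VID = "http://www.google.com/schemas/sitemap-video/1.1"
ET.register_namespace("", SM)
ET.register_namespace("video", VID)

urlset = ET.Element("{%s}urlset" % SM)
url = ET.SubElement(urlset, "{%s}url" % SM)
ET.SubElement(url, "{%s}loc" % SM).text = "http://example.com/videos/intro"

video = ET.SubElement(url, "{%s}video" % VID)
for tag, text in [
    ("thumbnail_loc", "http://example.com/thumbs/intro.jpg"),  # the SERP thumbnail
    ("title", "Intro to Widgets"),
    ("description", "A two-minute overview."),
    ("content_loc", "http://example.com/media/intro.mp4"),
]:
    ET.SubElement(video, "{%s}%s" % (VID, tag)).text = text

xml_out = ET.tostring(urlset, encoding="unicode")
print(xml_out)
```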
5. Google authorship
Scoring the coveted author photo in Google search results doesn't guarantee more clicks, but getting the right photo can help your click-through rate in many results.
What makes a good author photo? While there are no rules, I've personally tested and studied hundreds of photos and found certain factors help:
- Use a real face, not a company logo, cartoon or icon
- High contrast colors. Because the photo is small, you want it to stand out with good separation between the background and foreground.
- Audience targeted. For example, young Disney fans are probably less likely to click on an old guy in a suit who looks like a financial adviser.
Google recently got more selective about the author photos it chooses to show, but if you implement authorship correctly you may find yourself in the 20% (according to MozCast) of all search results that include author photos.
6. Improving site speed
Improving site speed not only improves visitor satisfaction (see point #2) but it may also have a direct influence on your search rankings. In fact, site speed is one of the few ranking factors Google has confirmed.
One of the interesting things we learned this year, with help from the folks at Zoompf, is that actual page load speed may be far less important than Time to First Byte (TTFB). TTFB is the amount of time it takes a server to first respond to a request.
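To get a feel for your own TTFB, a quick-and-dirty measurement can be scripted. This sketch treats the arrival of the response status line and headers as the "first byte", which is a close approximation; a single run is noisy, so average several in practice:

```python
import time
import http.client

def time_to_first_byte(host, path="/", timeout=10):
    """Rough TTFB: seconds from sending a GET request until the response
    status line and headers arrive (a close stand-in for the first byte)."""
    conn = http.client.HTTPConnection(host, timeout=timeout)
    try:
        start = time.time()
        conn.request("GET", path)
        conn.getresponse()  # returns once the status line/headers are in
        return time.time() - start
    finally:
        conn.close()

# Example (requires network access):
# print(time_to_first_byte("www.example.com"))
```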
As important as page speed is for desktop search, Google considers it even more important for mobile devices. Think about the last time you waited for a page to load on your cell phone with a weak signal.
"Optimizing a page's loading time on smartphones is particularly important given the characteristics of mobile data networks smartphones are connected to."
- Google Developers
Suggested tool: PageSpeed Insights
7. Smartphone SEO
Aside from speed, if your website isn't configured properly for smartphones, it probably ranks lower in Google's mobile search results. Google confirms that smartphone errors may result in lower mobile rankings.
What is a smartphone error? It could include:
- Redirecting visitors to the wrong mobile URL
- Embedding a video that doesn't play on a particular phone (Flash video on an iPhone, for example)
- Pop-ups that aren't easily closed on mobile
- Buttons or fonts that are too small on a mobile device
Google recommends making your site responsive, but many of the top brands in the world, including Apple.com, don't have responsive sites. Regardless, a good mobile experience is imperative.
8. Expanding your international audience
Does your website have traffic potential outside your existing country and/or language?
Our international experts like Aleyda Solis know this well, but folks inside the United States have been slow to target specific languages and countries with SEO.
Oftentimes, the opportunities for appearing in international search results are greater than those within your own borders, and the competition is sometimes lighter. To see if it's worth making the investment, check out this International SEO Checklist by Aleyda (who is also a mobile SEO expert; it's so unfair!)
9. Social annotations with Google+
When you share content on Facebook and Twitter, your network basically sees it only when they are looking at Facebook and Twitter.
On the other hand, when you share content on Google+, your network can see it every time they search Google.
Google's own research shows that users fixate on social annotations, even when presented with videos and other types of rich snippets.
The easiest way to take advantage of this is to expand your Google+ network and share good content regularly and often. Rand Fishkin elegantly explains how to use Google+ to appear in the top of Google results every time.
Additionally, content shared through Google+ often ranks in regular search results, visible to everyone on the web, regardless of their social connections.
10. Snippet optimization
This goes back to basic meta tag and title tag optimization, but it's a good practice to keep in mind.
In the past two years, Google changed the maximum length of title tags so that it's no longer dependent on the number of characters, but on the number of pixels used, generally around 500 pixels in length. This keeps changing as Google tests new layouts.
Because 500 pixels is difficult to gauge when writing most titles, the best advice is still to keep your titles between 60-80 characters, or to use an online snippet optimization tool to find your ideal title tag length.
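A snippet tool essentially does something like the sketch below: approximate the rendered width from per-character pixel widths and compare against the cutoff. The widths here are crude assumptions for illustration, not Google's actual font metrics:

```python
WIDE = set("mwMW")            # assumed ~11px characters
NARROW = set("iljtf.,;:!' ")  # assumed ~4px characters

def approx_pixel_width(title):
    """Crude pixel-width estimate for a title tag."""
    width = 0
    for ch in title:
        if ch in WIDE:
            width += 11
        elif ch in NARROW:
            width += 4
        else:
            width += 7  # assumed average character width
    return width

def fits_serp(title, limit=500):
    return approx_pixel_width(title) <= limit

title = "12 Ways to Increase Traffic From Google Without Building Links"
print(approx_pixel_width(title), fits_serp(title))
```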
Google also updated its advice on meta descriptions, further clarifying that duplicate meta descriptions are not a good idea. Matt Cutts tells us that if you can't make your descriptions unique for each page, it's better to have none at all.
"You can either have a unique meta tag description, or you can choose to have no meta tag description."
Google's Matt Cutts
Given that duplicate meta descriptions are one of the few HTML recommendation flags in Webmaster Tools, does this indicate Google treats repetitive meta descriptions as a negative ranking factor? Hmmm...
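Auditing for this across a site is straightforward; here is a sketch against a hypothetical crawl result (the URLs and descriptions are invented):

```python
from collections import Counter

# Hypothetical crawl output: page URL -> meta description.
pages = {
    "/": "Widgets for every occasion.",
    "/about": "Widgets for every occasion.",  # duplicate of the homepage
    "/contact": "Get in touch with our team.",
    "/blog": "",                              # empty: acceptable per Cutts
}

counts = Counter(d for d in pages.values() if d)
duplicates = {url: d for url, d in pages.items() if d and counts[d] > 1}
print(duplicates)  # the pages that need unique descriptions, or none at all
```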
11. Updating fresh content
Websites that stop earning new links often lose ground in Google search results. At the same time, sites that never add new content or let their pages go stale can also fall out of favor.
Freshening your content doesn't guarantee a rankings boost, but for certain types of queries it definitely helps. Google scores freshness in different ways, which may include:
- Inception date
- The amount (%) your content changes
- How often you update your content
- How many new pages you create over time
- Changes to important content (homepage text) vs. unimportant content (footer links)
Recommended reading: 10 Illustrations on How Fresh Content Can Influence Rankings
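One of the freshness signals above, the amount your content changes, can be approximated with a simple diff ratio between two versions of a page; a sketch:

```python
import difflib

def percent_changed(old, new):
    """Approximate the share of content that changed between two versions
    of a page, using a sequence-similarity ratio."""
    ratio = difflib.SequenceMatcher(None, old, new).ratio()
    return round((1 - ratio) * 100, 1)

old = "Our widget guide covers sizes, colors and prices."
new = "Our widget guide covers sizes, colors, prices and shipping."
print(percent_changed(old, new))
```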
12. Ongoing on-page SEO
The factors listed here only scratch the surface of earning more real estate in search results. Issues such as indexing, crawling, canonicalization, duplicate content, site architecture, keyword research, internal linking, image optimization and 1,000 other things can move ranking mountains.
The job of the Technical SEO becomes more complex each year, but we also have more opportunities now than ever.
It's easy to think nothing is new in SEO, or that SEO is easy, or that Google will simply figure out our sites. Nothing is further from reality.
The truth is, we have work to do.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
from Moz Blog http://ift.tt/1nfomno
WWW vs. non-WWW problem -- I'm confused and scared.
from Google SEO News and Discussion WebmasterWorld http://ift.tt/1neT4gG
SearchCap: The Day In Search, March 10, 2014
Please visit Search Engine Land for the full article.
from Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing http://ift.tt/1fl2Bxr
Monday, March 10, 2014
Running A/B multi testing in Google Analytics - issues?
from Google SEO News and Discussion WebmasterWorld http://ift.tt/1h6A1xw
How to Set Up Meaningful (Non-Arbitrary) Custom Attribution in Google Analytics
Posted by Tom.Capper
Attribution modeling in Google Analytics (GA) is potentially very powerful in the results it can give us, yet few people use it, and those that do often get misleading results. The built-in models are all fairly useless, and creating your own custom model can easily dissolve into random guesswork. If you're lucky enough to have access to GA Premium, you can use Data-Driven Attribution, and that's great; but if you haven't got the budget to take that route, this post should show you how to get started with the data you already have.
If you've read up on attribution modelling in the past, you probably already know what's wrong with the default models. If you haven't, I recommend you read this post by Avinash, which outlines the basics of how they all work.
In short, they're all based on arbitrary, oversimplified assumptions about how people use the internet.
The time decay model
The time decay model is probably the most sensible out of the box, and assumes that after I visit your site, the effect of this first visit on the chance of me visiting again halves every X days. The below graph shows this relationship with the default seven-day half-life. It plots "days since visit" against "chance this visit will cause additional visit." If it takes seven days for the repeat visit to come around, the first visit's credit halves to 25%. If it takes 14 days for the repeat visit to come around, the first visit's credit halves again, to 12.5%. Note that the graph is stepped: I'm assuming it uses GA's "days since last visit" dimension, which rounds to a whole number of days. This would mean that, for example, if both visits were on the day of conversion, neither would be discounted and both would get equal credit.
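Under that reading of the model, the first of two visits starts at an equal 50% share and that share halves every seven whole days. A sketch (this is my interpretation of the stepping, not Google's published implementation):

```python
def first_visit_credit(days_between, half_life=7):
    """Credit share for the first of two visits under the stepped time
    decay reading above: 50% when both visits fall on the conversion day,
    halving every `half_life` days, with days truncated to whole numbers
    to mirror GA's 'days since last visit' dimension."""
    whole_days = int(days_between)
    return 0.5 * (0.5 ** (whole_days / half_life))

print(first_visit_credit(0), first_visit_credit(7), first_visit_credit(14))
# 0.5 0.25 0.125
```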
There might be some site and userbase out there for which this is an accurate model, but as a starting assumption it's incredibly bold. As an entire model, it's incredibly simplistic: surely we don't really believe that there are no factors relevant in assigning credit to previous visits besides how long ago they occurred? We might consider it relevant if the previous visit bounced, for example. This is why custom models are the only sensible approach to attribution modelling in Google Analytics: the simple one-size-fits-all models are never going to be appropriate for your business or client, precisely because they're simple, one-size-fits-all models.
Note that in describing the time decay model, I'm talking about the chance of one visit generating another; an important and often overlooked aspect of attribution modelling is that it's about probabilities. When assigning partial credit for a conversion to a previous visit, we are not saying that the conversion happened partly because of the previous visit, and partly because of the converting visit. We simply don't know whether that was the case. It could be that after their first visit, the user decided that whatever happened they were going to come back at some point and make a purchase. If we knew this, we'd want to assign that first visit 100% credit. Or it might be that after their first visit, the user totally forgot that our website existed, and then by pure coincidence found it in their natural search results a few days later and decided to make a purchase. In this case, if we knew this, we'd want to assign the previous visit 0% credit. But actually, we don't know what happened. So we make a claim based on probabilities. For example, if we have a conversion that takes place with one previous visit, what we're saying if we assign 40% credit to that previous visit is that we think that there is a 40% chance that the conversion would not have happened without the first visit.
If we did think that there was a 40% chance of a conversion being caused by an initial visit, we'd want to assign 40% credit to "Position in Path" exactly matching "First interaction" (meaning visits that were the user's first visit). If you want to use "Position in Path" as your sole predictor of the chance that a visit generated the conversion, you can. Provided you don't pull the percentages off the top of your head, it's better than nothing. If you want to be more accurate, there's a veritable smorgasbord of additional custom credit rules to choose from, with any default model as your starting point. All we have to do now is figure out what numbers to put in, and realistically, this is where it gets hard. At all costs, do not be tempted to guess; that renders the entire exercise pointless.
Tested assumptions
One tempting approach is simply to create a model based to a greater or lesser extent on assumptions and guesswork, then test the conclusions of that model against your existing marketing strategy and incrementally improve your strategy in this manner. This approach is probably better than nothing for improving your marketing strategy, and testing improvements to your strategy is always worthwhile, but as a way of creating a realistic attribution model this starting point is going to set you on a long, expensive journey.
The ideal solution is to do this process in reverse: run controlled experiments to build your model in the first place. If you can split your users into representative segments, then test, for example,
- the effect of a previous visit on the chance of a second visit
- the effect of a previous non-bounce visit on the chance of a second visit
- the effect of a previous organic search visit on the chance of a second visit
and so on, you can start filling in your custom credit rules this way. If your tests are done well, you can get really excellent results. But this is expensive, difficult, and time consuming.
The next-best alternative is asking users. If users don't remember having encountered your brand before, the previous visit they had probably didn't contribute to their conversion. The most sensible way to do this would be an (optional but incentivised) post-conversion questionnaire, where a representative sample of users are asked questions like:
- How did you find this site today?
- Have you visited this site before?
- If yes:
- How many times?
- How did you find it?
- Did this previous visit impact your decision to visit today?
- How long ago was your most recent visit?
The results from questions like these can start filling in those custom credit rules in a non-arbitrary way. But this is still somewhat expensive, difficult and time-consuming. What if you just want to get going right away?
Deconstructing the Data-Driven Attribution model
In this blog post, Google offers this explanation of the Data-Driven Attribution model in GA Premium:
"The Data-Driven Attribution model is enabled through comparing conversion path structures and the associated likelihood of conversion given a certain order of events. The difference in path structure, and the associated difference in conversion probability, are the foundation for the algorithm which computes the channel weights. The more impact the presence of a certain marketing channel has on the conversion probability, the higher the weight of this channel in the attribution model. The underlying probability model has been shown to predict conversion significantly better than a last-click methodology. Data-Driven Attribution seeks to best represent the actual behaviour of customers in the real world, but is an estimate that should be validated as much as possible using controlled experimentation." (my emphasis)
Similarly, this paper recommends a combination of a conditional probability approach and a bagged logistic regression model. Don't worry if this doesn't mean much to you; I'm going to recommend here using a variant of the much simpler conditional probability method.
I'd like to look first at the kind of model that seems to be suggested by Google's explanation above of their Data-Driven Attribution feature. For example, say we wanted to look at the most basic credit rule: how much credit should be assigned to a single previous visit? The basic logic outlined in the explanation from Google above would suggest an approach something like this:
- Find conversion rate of new visitors (let's say this is 4%)
- Find conversion rate of returning visitors with one previous visit (let's say this is 7%)
- Credit for previous visit = ((7-4)/7) = 43%
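The three steps above reduce to a one-liner; the 4%/7% rates are the hypothetical figures from the example:

```python
def previous_visit_credit(cr_new, cr_returning):
    """Naive conditional-probability credit for a single previous visit:
    the share of the returning-visitor conversion rate not explained by
    the baseline new-visitor rate."""
    return (cr_returning - cr_new) / cr_returning

credit = previous_visit_credit(0.04, 0.07)  # 4% new, 7% returning
print(round(credit * 100))                  # 43
```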
To me, this model is somewhat flawed (though I'm fairly sure that this flaw lies in my application of Google's explanation of their Data-Driven Attribution rather than in the model itself). For example, say we had a large group of repeat visitors who were only coming to the site because of a previous visit, but who were converting poorly. We'd want to assign credit for these (few) conversions to the previous visits, but the model outlined above might assign them low or negative credit; this is because even though conversions among this group are caused by previous visits, their conversion rate is lower than that of new visitors. This is just one example of why this model can end up being misleading.
My best solution
Figuring out from our data whether a repeat visitor came because of a previous visit or independently of a previous visit is hard. I'll be honest: I don't know how Google does it. My best solution is an approximation, but a non-arbitrary one. The idea is to use the percentage of traffic that is either branded or direct as an indicator of brand familiarity. Going back again to how much credit should be assigned to a single previous visit, my solution looks like this:
- Calculate the percentage of your new-visitor traffic that is direct, branded organic, or branded PPC (let's say it's 50%)
- Note: Obviously most of your organic is (not provided), so I recommend multiplying your total organic traffic by the % of your known keyword traffic that is branded. As (not provided) approaches 100%, you'll have to use PPC data to approximate your branded organic traffic levels.
- Calculate the percentage of your 2nd-time-visitor traffic that is direct, branded organic, or branded PPC (let's say it's 55%)
- Based on the knowledge that only 50% (in this case) of people without previous visits use branded/direct, approximate that without their first visit we'd only have seen (100%-55%)*(100/50)=90% of these 2nd-time visitors.
- Given this, 10% of 2nd-time visitors came because of a previous visit, so we should assign 10% credit for 2nd-time visits to the first visit.
We can use similar logic applied to users with 3+ visits to calculate the credit deserved by "middle interactions".
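For the two-visit case, the steps above can be written out directly; the 50%/55% branded shares are the hypothetical figures from the example:

```python
def second_visit_credit(branded_share_new, branded_share_second):
    """Credit assigned to the first visit in a two-visit path, using
    branded/direct share as a proxy for brand familiarity. Estimates the
    fraction of 2nd-time visitors who would have arrived anyway, and
    credits the remainder to the first visit."""
    would_come_anyway = (1 - branded_share_second) / (1 - branded_share_new)
    return 1 - would_come_anyway

print(round(second_visit_credit(0.50, 0.55) * 100))  # 10
```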
This method is far from perfect; that's why I recommended two others above it. But if you want to get started with your existing data in a non-arbitrary way, I think this is a non-ridiculous way to get started. If you've made it this far and you have any ideas of your own, please post them in the comments below.
from Moz Blog http://ift.tt/1i3lDsC
Using Google Sitemaps To Find Panda-Hit pages?
from Google SEO News and Discussion WebmasterWorld http://ift.tt/1aPg2Ph