Friday, June 5, 2015
AdWords MCCs Get Cross-Account Campaign Management & Reporting
Please visit Search Engine Land for the full article.
from Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing http://ift.tt/1ARWNYN
Paid & Organic Approaches To Dig Deeper With An SEO Keyword That’s Working
Please visit Search Engine Land for the full article.
from Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing http://ift.tt/1KQIYxb
Is Setting Google Crawl Rate To "Fast" Bad For SEO?
from Google SEO News and Discussion WebmasterWorld http://ift.tt/1T0LSSL
Should I Use Relative or Absolute URLs? - Whiteboard Friday
Posted by RuthBurrReedy
It was once commonplace for developers to code relative URLs into a site. There are a number of reasons why that might not be the best idea for SEO, and in today's Whiteboard Friday, Ruth Burr Reedy is here to tell you all about why.
Let's discuss some non-philosophical absolutes and relatives
Howdy, Moz fans. My name is Ruth Burr Reedy. You may recognize me from such projects as when I used to be the Head of SEO at Moz. I'm now the Senior SEO Manager at BigWing Interactive in Oklahoma City. Today we're going to talk about relative versus absolute URLs and why they are important.
At any given time, your website can have several different configurations that might be causing duplicate content issues. You could have just a standard http://www.example.com. That's a pretty standard format for a website.
But the main sources that we see of domain level duplicate content are when the non-www.example.com does not redirect to the www or vice-versa, and when the HTTPS versions of your URLs are not forced to resolve to HTTP versions or, again, vice-versa. What this can mean is if all of these scenarios are true, if all four of these URLs resolve without being forced to resolve to a canonical version, you can, in essence, have four versions of your website out on the Internet. This may or may not be a problem.
It's not ideal for a couple of reasons. Number one, there's duplicate content. Some people think that duplicate content is going to give you a penalty, but duplicate content is not going to get your website penalized in the same way that you might see a spammy link penalty from Penguin. There's no actual penalty involved. You won't be punished for having duplicate content.
The problem with duplicate content is that you're basically relying on Google to figure out what the real version of your website is. Google is seeing the URL from all four versions of your website. They're going to try to figure out which URL is the real URL and just rank that one. The problem with that is you're basically leaving that decision up to Google when it's something that you could take control of for yourself.
There are a couple of other reasons that we'll go into a little bit later for why duplicate content can be a problem. But in short, duplicate content is no good.
However, just having these URLs not resolve to each other may or may not be a huge problem. When it really becomes a serious issue is when that problem is combined with injudicious use of relative URLs in internal links. So let's talk a little bit about the difference between a relative URL and an absolute URL when it comes to internal linking.
With an absolute URL, you are putting the entire web address of the page that you are linking to in the link. You're putting your full domain, everything in the link, including /page. That's an absolute URL.
However, when coding a website, it's a fairly common web development practice to instead code internal links with what's called a relative URL. A relative URL is just /page. Basically what that does is it relies on your browser to understand, "Okay, this link is pointing to a page that's on the same domain that we're already on. I'm just going to assume that that is the case and go there."
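To make that concrete, here is a small Python sketch (the example.com URLs are placeholders) of how a browser or crawler resolves the same relative href into four different absolute URLs depending on which version of the domain the link was found on, while an absolute href resolves to exactly one place:

```python
from urllib.parse import urljoin

# The same relative link, resolved from each of the four possible
# versions of the domain a crawler might have entered the site on.
relative_href = "/page"
entry_points = [
    "http://www.example.com/",
    "http://example.com/",
    "https://www.example.com/",
    "https://example.com/",
]

for base in entry_points:
    # urljoin resolves a relative URL against the page it appears on,
    # just like a browser or crawler does -- four different results.
    print(urljoin(base, relative_href))

# An absolute link, by contrast, points at one version no matter
# which version of the site it was found on.
print(urljoin("http://example.com/", "https://www.example.com/page"))
```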
There are a couple of really good reasons to code relative URLs
1) It is much easier and faster to code.
When you are a web developer and you're building a site and there are thousands of pages, coding relative instead of absolute URLs is a way to be more efficient. You'll see it happen a lot.
2) Staging environments
Another reason why you might see relative instead of absolute URLs is that some content management systems -- and SharePoint is a great example of this -- have a staging environment that's on its own domain. Instead of being example.com, it will be examplestaging.com. The entire website will basically be replicated on that staging domain. Having relative instead of absolute URLs means that the same website can exist on staging and on production, or the live, accessible version of your website, without having to go back in and recode all of those URLs. Again, it's more efficient for your web development team. Those are really perfectly valid reasons to do these things. So don't yell at your web dev team if they've coded relative URLs, because from their perspective it is a better solution.
Relative URLs will also cause your page to load slightly faster. However, in my experience, the SEO benefits of having absolute rather than relative URLs in your website far outweigh the teeny-tiny bit longer it will take the page to load. It's very negligible. If you have a really, really long page load time, there's a whole boatload of things you can change that will make a bigger difference than coding your URLs as relative versus absolute.
Page load time, in my opinion, is not a concern here. However, it is something that your web dev team may bring up when you try to explain to them that, from an SEO perspective, coding your website with relative rather than absolute URLs, especially in the nav, is not a good solution.
There are even better reasons to use absolute URLs
1) Scrapers
If you have all of your internal links as relative URLs, it would be very, very, very easy for a scraper to simply scrape your whole website and put it up on a new domain, and the whole website would just work. That sucks for you, and it's great for that scraper. But unless you are out there doing public services for scrapers, for some reason, that's probably not something that you want happening with your beautiful, hardworking, handcrafted website. That's one reason. There is a scraper risk.
2) Preventing duplicate content issues
But the other reason why it's very important to have absolute rather than relative URLs is that it really mitigates the duplicate content risk that comes up when you don't have all of these versions of your website resolving to one version. Google could potentially enter your site on any one of these four versions, which to you are all the same page on the same domain, but to Google are four different pages on four different domains.
But if Google enters your site on one of those versions and all of your URLs are relative, it can then crawl and index your entire domain using whatever format it came in on. Whereas if you have absolute links coded, even if Google enters your site on the www. version and that version resolves, as soon as it crawls to another page it sees links coded without the www., so it's not going to assume that all of your other pages and all of that internal link juice live at the www. version. That really cuts down on the number of versions of each page of your website. If you have relative URLs throughout and you haven't fixed this problem, you basically have four different websites.
Again, it's not always a huge issue. Duplicate content, it's not ideal. However, Google has gotten pretty good at figuring out what the real version of your website is.
You do want to think about internal linking when you're thinking about this. If you have basically four different versions of any URL that anybody could just copy and paste when they want to link to you or when they want to share something that you've built, you're diluting your inbound links by four, which is not great. You basically would have to build four times as many links in order to get the same authority. So that's one reason.
3) Crawl Budget
The other reason why it's pretty important not to leave this alone is crawl budget.
When we talk about crawl budget, basically what that is, is that every time Google crawls your website, there's a finite depth it will go to and a finite number of URLs it will crawl before it decides, "Okay, I'm done." That's based on a few different things. Your site authority is one of them. Your actual PageRank, not toolbar PageRank, but how good Google actually thinks your website is, is a big part of that. How complex your site is, how often it's updated, things like that are also going to contribute to how often and how deep Google is going to crawl your site.
It's important to remember when we think about crawl budget that, for Google, crawl budget costs actual dollars. One of Google's biggest expenditures as a company is the money and the bandwidth it takes to crawl and index the Web. All of that crawling and indexing runs on servers, that bandwidth comes from servers, and that means that using bandwidth costs Google actual, real dollars.
So Google is incentivized to crawl as efficiently as possible, because when they crawl inefficiently, it costs them money. If your site is not efficient to crawl, Google is going to save itself some money by crawling it less frequently and crawling fewer pages per crawl. That can mean that if you have a site that's updated frequently, your site may not be updated in the index as frequently as you're updating it. It may also mean that Google, while it's crawling and indexing, may be crawling and indexing a version of your website that isn't the version that you really want it to crawl and index.
So having four different versions of your website, all of which are completely crawlable to the last page, because you've got relative URLs and you haven't fixed this duplicate content problem, means that Google has to spend four times as much money in order to really crawl and understand your website. Over time they're going to do that less and less frequently, especially if you don't have a really high authority website. If you're a small website, if you're just starting out, if you've only got a medium number of inbound links, over time you're going to see your crawl rate and frequency impacted, and that's bad. We don't want that. We want Google to come back all the time, see all our pages. They're beautiful. Put them up in the index. Rank them well. That's what we want. So that's what we should do.
There are a few ways to fix your relative versus absolute URL problem
1) Fix what is happening on the server side of your website
You have to make sure that you are forcing all of these different versions of your domain to resolve to one version of your domain. For me, I'm pretty agnostic as to which version you pick. You should probably already have a pretty good idea of which version of your website is the real version, whether that's www, non-www, HTTPS, or HTTP. From my view, what's most important is that all four of these versions resolve to one version.
From an SEO standpoint, there is evidence to suggest, and Google has certainly said, that HTTPS is a little bit better than HTTP. From a URL length perspective, I like to not have the www. in there because it doesn't really do anything. It just makes your URLs four characters longer. If you don't know which one to pick, I would pick this one: HTTPS, no www. But whichever one you pick, what's really most important is that all of them resolve to one version. You can do that on the server side, and that's usually pretty easy for your dev team to fix once you tell them that it needs to happen.
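What that server-side fix looks like depends entirely on your stack. As one illustration, here is a minimal sketch assuming a Python/Flask app, with a placeholder scheme and hostname; most sites would implement the same 301 rule in their Apache or nginx config or at the CDN instead:

```python
# A minimal sketch assuming a Flask app; CANONICAL_SCHEME and
# CANONICAL_HOST are placeholders for your one preferred version.
from flask import Flask, redirect, request

app = Flask(__name__)

CANONICAL_SCHEME = "https"
CANONICAL_HOST = "example.com"


@app.before_request
def force_canonical_version():
    # If the request came in on any other scheme/host variant,
    # 301 it to the same path on the canonical version.
    # (Behind a proxy or load balancer, request.scheme may need
    # ProxyFix or an X-Forwarded-Proto check to be accurate.)
    if request.scheme != CANONICAL_SCHEME or request.host != CANONICAL_HOST:
        target = f"{CANONICAL_SCHEME}://{CANONICAL_HOST}{request.full_path}"
        # full_path keeps the query string; strip the trailing "?" it adds.
        return redirect(target.rstrip("?"), code=301)


@app.route("/page")
def page():
    return "Hello from the one true version of this page."
```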
2) Fix your internal links
Great. So you've fixed it on your server side. Now you need to fix your internal links, and you need to recode them from relative to absolute. This is something that your dev team is not going to want to do because it is time consuming and, from a web dev perspective, not that important. However, you should use resources like this Whiteboard Friday to explain to them that, from an SEO perspective, both in terms of scraper risk and duplicate content, having those absolute URLs is a high priority and something that should get done.
You'll need to fix those, especially in your navigational elements. But once you've got your nav fixed, also pull out your database or run a Screaming Frog crawl or however you want to discover internal links that aren't part of your nav, and make sure you're updating those to be absolute as well.
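If you want a quick way to spot-check a single page before running a full crawl, here is a rough Python sketch of that audit step using only the standard library (the page URL is a placeholder); a tool like Screaming Frog will do this across the whole site far more thoroughly:

```python
# Fetch one page and list any <a href> values that are relative, and
# would therefore inherit whatever domain variant a crawler is on.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen


class RelativeLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.relative_links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        parsed = urlparse(href)
        # No scheme and no host means the link is relative.
        if href and not parsed.scheme and not parsed.netloc:
            self.relative_links.append(href)


page_url = "https://example.com/"  # placeholder: page to audit
html = urlopen(page_url).read().decode("utf-8", errors="replace")

finder = RelativeLinkFinder()
finder.feed(html)

for href in finder.relative_links:
    print(f"relative link found: {href}")
```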
Then you'll do some education with everybody who touches your website saying, "Hey, when you link internally, make sure you're using the absolute URL and make sure it's in our preferred format," because that's really going to give you the most bang for your buck per internal link. So do some education. Fix your internal links.
Sometimes your dev team is going to say, "No, we can't do that. We're not going to recode the whole nav. It's not a good use of our time," and sometimes they are right. The dev team has more important things to do. That's okay.
3) Canonicalize it!
If you can't get your internal links fixed or if they're not going to get fixed anytime in the near future, a stopgap or a Band-Aid that you can put on this problem is to canonicalize all of your pages. As you're changing your server to force all of these different versions of your domain to resolve to one, at the same time you should be implementing the canonical tag on all of the pages of your website to self-canonicalize. On every page, you have a canonical tag saying, "This page right here, the one you're already on, is the canonical version of this page." Or if there's another page that's the canonical version, then obviously you point to that instead.
But having each page self-canonicalize will mitigate both the risk of duplicate content internally and some of the risk posed by scrapers, because when they scrape, if they are scraping your website and slapping it up somewhere else, those canonical tags will often stay in place, and that lets Google know this is not the real version of the website.
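For illustration, the self-referencing canonical tag is just a link element in the head of each page pointing at that page's URL in your one preferred format. Here is a minimal Python sketch of generating it (the domain is a placeholder; in practice this usually lives in your CMS template rather than in a script):

```python
# Build the self-referencing canonical tag from one preferred origin.
from urllib.parse import urljoin

CANONICAL_ORIGIN = "https://example.com"  # placeholder: your preferred version


def canonical_tag(path: str) -> str:
    """Return the <link rel="canonical"> element for a given page path."""
    return f'<link rel="canonical" href="{urljoin(CANONICAL_ORIGIN, path)}" />'


# Every page points at its own preferred-format URL, so even a scraped
# copy of the HTML keeps telling Google where the real page lives.
print(canonical_tag("/page"))
print(canonical_tag("/another-page"))
```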
In conclusion, relative links, not as good. Absolute links, those are the way to go. Make sure that you're fixing these very common domain level duplicate content problems. If your dev team tries to tell you that they don't want to do this, just tell them I sent you. Thanks guys.
Video transcription by Speechpad.com
from Moz Blog http://ift.tt/1JqDdpB
SearchCap: Goodbye Yahoo Maps, Google Search Shows iOS Apps & Touch To Search
Please visit Search Engine Land for the full article.
from Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing http://ift.tt/1JphHBy
Google Says It May Unverify Inactive Local Business Listings
Please visit Search Engine Land for the full article.
from Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing http://ift.tt/1FXgD3X
Mobilegeddon Revisited At SMX Advanced
Please visit Search Engine Land for the full article.
from Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing http://ift.tt/1HNJbel