Tuesday, October 9, 2018
How Search Engine Land is changing to support you better
The SEO Cyborg: How to Resonate with Users & Make Sense to Search Bots
Posted by alexis-sanders
SEO is about understanding how search bots and users react to an online experience. As search professionals, we’re required to bridge gaps between online experiences, search engine bots, and users. We need to know where to insert ourselves (or our teams) to ensure the best experience for both users and bots. In other words, we strive for experiences that resonate with humans and make sense to search engine bots.
This article seeks to answer the following questions:
- How do we drive sustainable growth for our clients?
- What are the building blocks of an organic search strategy?
What is the SEO cyborg?
A cyborg (or cybernetic organism) is defined as “a being with both organic and biomechatronic body parts, whose physical abilities are extended beyond normal human limitations by mechanical elements.”
With the ability to relate between humans, search bots, and our site experiences, the SEO cyborg is an SEO (or team) able to work seamlessly across both technical and content initiatives (skills extended beyond normal human limitations) to drive organic search performance. An SEO cyborg can strategically pinpoint where to place organic search efforts to maximize performance.
So, how do we do this?
The SEO model
Like so many classic triads (think: primary colors, the Three Musketeers, Destiny’s Child [the canonical version, of course]), the traditional SEO model, known as the crawl-index-rank method, packages SEO into three distinct steps. At the same time, however, this model fails to capture the breadth of work that we SEOs are expected to do on a daily basis, and not having a functioning model can be limiting. We need to expand this model without reinventing the wheel.
The enhanced model involves adding in a rendering, signaling, and connection phase.
You might be wondering: why do we need these?
- Rendering: There is increased prevalence of JavaScript, CSS, imagery, and personalization.
- Signaling: HTML <link> tags, status codes, and even GSC signals are powerful indicators that tell search engines how to process and understand the page, determine its intent, and ultimately rank it. In the previous model, it didn’t feel as if these powerful elements really had a place.
- Connecting: People are a critical component of search. The ultimate goal of search engines is to identify and rank content that resonates with people. In the previous model, “rank” felt cold, hierarchical, and indifferent towards the end user.
All of this brings us to the question: how do we find success in each stage of this model?
Note: When using this piece, I recommend skimming ahead and leveraging those sections of the enhanced model that are most applicable to your business’ current search program.
The enhanced SEO model
Crawling
Technical SEO starts with the search engine’s ability to find a site’s webpages (hopefully efficiently).
Finding pages
Initially finding pages can happen a few ways, via:
- Links (internal or external)
- Redirected pages
- Sitemaps (XML, RSS 2.0, Atom 1.0, or .txt)
Side note: This information (although at first pretty straightforward) can be really useful. For example, if you’re seeing weird pages popping up in site crawls or performing in search, try checking:
- Backlink reports
- Internal links to URL
- Redirected into URL
Obtaining resources
The second component of crawling relates to the ability to obtain resources (which later becomes critical for rendering a page’s experience).
This typically relates to two elements:
- Appropriate robots.txt declarations
- Proper HTTP status codes (namely, the 200 status code)
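To make that concrete, here’s a minimal, hypothetical robots.txt (the paths are made up for illustration): it allows everything except a checkout path and points bots to the XML sitemap.

User-agent: *
Disallow: /checkout/
Sitemap: https://www.example.com/sitemap.xml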
Crawl efficiency
Finally, there’s the idea of how efficiently a search engine bot can traverse your site’s most critical experiences.
Action items:
- Is the site’s main navigation simple, clear, and useful?
- Are there relevant on-page links?
- Is internal linking clear and crawlable (i.e., <a href="/">)?
- Is an HTML sitemap available?
- Side note: Make sure to check the HTML sitemap’s next page flow (or behavior flow reports) to find where those users are going. This may help to inform the main navigation.
- Do footer links contain tertiary content?
- Are important pages close to root?
- Is the site free of crawl traps?
- Is the site free of orphan pages?
- Are pages consolidated?
- Do all pages have purpose?
- Has duplicate content been resolved?
- Have redirects been consolidated?
- Are canonical tags on point?
- Are parameters well defined?
Information architecture
The organization of information extends past the bots, requiring an in-depth understanding of how users engage with a site.
Some seed questions to begin research include:
- What trends appear in search volume (by location, device)? What are common questions users have?
- Which pages get the most traffic?
- What are common user journeys?
- What are users’ traffic behaviors and flow?
- How do users leverage site features (e.g., internal site search)?
Rendering
Rendering a page relates to search engines’ ability to capture the page’s desired essence.
JavaScript
The big kahuna in the rendering section is JavaScript. For Google, rendering of JavaScript occurs during a second wave of indexing and the content is queued and rendered as resources become available.
As an SEO, it’s critical that we be able to answer the question — are search engines rendering my content?
Action items:
- Are direct “quotes” from content indexed?
- Is the site using <a href="/"> links (not onclick();)?
- Is the same content being served to search engine bots (user-agent)?
- Is the content present within the DOM?
- What does Google’s Mobile-Friendly Testing Tool’s JavaScript console (click “view details”) say?
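To illustrate the link point above, here’s a minimal comparison (URLs are hypothetical). The first link is a plain anchor that bots can crawl; the second relies entirely on JavaScript, so crawlers may never discover the destination:

<!-- Crawlable: a standard anchor with an href -->
<a href="/products/widgets">Widgets</a>

<!-- Risky: no href, navigation happens only via JavaScript -->
<span onclick="window.location='/products/widgets'">Widgets</span>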
Infinite scroll and lazy loading
Another hot topic relating to JavaScript is infinite scroll (and lazy load for imagery). Since search engine bots are lazy users, they won’t scroll to attain content.
Action items:
Ask ourselves – should all of the content really be indexed? Is it content that provides value to users?
- Infinite scroll: a user experience (and occasionally a performance optimizing) tactic to load content when the user hits a certain point in the UI; typically the content is exhaustive.
Solution one (updating AJAX):
1. Break out content into separate sections
- Note: The breakout of pages can be /page-1, /page-2, etc.; however, it would be best to delineate meaningful divides (e.g., /voltron, /optimus-prime, etc.)
2. Implement History API (pushState(), replaceState()) to update URLs as a user scrolls (i.e., push/update the URL into the URL bar)
3. Add the <link> tag’s rel="next" and rel="prev" on relevant pages
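As a rough sketch of step 2 (one approach among several), assuming each content section carries a hypothetical data-url attribute holding its clean URL:

<script>
// Update the address bar as each section scrolls into view
const sections = document.querySelectorAll('[data-url]');
const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      history.replaceState(null, '', entry.target.dataset.url);
    }
  });
});
sections.forEach((section) => observer.observe(section));
</script>

(This uses replaceState(); pushState() works similarly if you want each section recorded in the browser history.)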
Solution two (create a view-all page)
Note: This is not recommended for large amounts of content.
1. If it’s possible (i.e., there’s not a ton of content within the infinite scroll), create one page encompassing all content
2. Site latency/page load should be considered
- Lazy loading imagery is a web performance optimization tactic in which images load as the user scrolls (the idea is to save time by downloading images only when they’re needed)
- Add <img> tags in <noscript> tags
- Use JSON-LD structured data
- Schema.org "image" attributes nested in appropriate item types
- Schema.org ImageObject item type
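Pulling those pieces together, here’s a sketch of what lazy-loaded image markup might look like (filenames and the data-src convention are hypothetical, though many lazy-load libraries use something similar):

<img data-src="https://example.com/images/voltron.jpg" alt="Voltron" class="lazy">
<noscript>
  <img src="https://example.com/images/voltron.jpg" alt="Voltron">
</noscript>
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ImageObject",
  "contentUrl": "https://example.com/images/voltron.jpg",
  "name": "Voltron"
}
</script>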
CSS
I only have a few elements relating to the rendering of CSS.
Action items:
- CSS background images aren’t picked up in image search, so don’t count on them for important imagery
- CSS animations aren’t interpreted, so make sure to add surrounding textual content
- Layouts for page are important (use responsive mobile layouts; avoid excessive ads)
Personalization
Although the broader digital world is trending toward 1:1, people-based marketing, Google doesn’t save cookies across sessions and thus won’t interpret personalization based on cookies, meaning there must be an average, base-user, default experience. The data from other digital channels can be exceptionally useful when building out audience segments and gaining a deeper understanding of the base user.
Action item:
- Ensure there is a base-user, unauthenticated, default experience
Technology
Google’s rendering engine is leveraging Chrome 41. Canary (Chrome’s testing browser) is currently operating on Chrome 69. Using CanIUse.com, we can infer that this affects Google’s abilities relating to HTTP/2, service workers (think: PWAs), certain JavaScript, specific advanced image formats, resource hints, and new encoding methods. That said, this doesn’t mean we shouldn’t progress our sites and experiences for users — we just need to ensure that we use progressive development (i.e., there’s a fallback for less advanced browsers [and Google too ☺]).
Action items:
- Ensure there's a fallback for less advanced browsers
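For instance, a minimal sketch of what progressive enhancement can look like in practice (the feature test is a real browser API check; the surrounding logic is illustrative):

<script>
if ('IntersectionObserver' in window) {
  // Enhanced path: lazy load imagery as it scrolls into view
} else {
  // Fallback for older engines (like Chrome 41): load everything up front
}
</script>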
Indexing
Getting pages into Google’s databases is what indexing is all about. From what I’ve experienced, this process is straightforward for most sites.
Action items:
- Ensure URLs are able to be crawled and rendered
- Ensure nothing is preventing indexing (e.g., robots meta tag)
- Submit sitemap in Google Search Console
- Fetch as Google in Google Search Console
Signaling
A site should strive to send clear signals to search engines. Unnecessarily confusing search engines can significantly impact a site’s performance. Signaling relates to suggesting best representation and status of a page. All this means is that we’re ensuring the following elements are sending appropriate signals.
Action items:
- <link> tag: This represents the relationship between documents in HTML (a combined example follows this list).
- Rel="canonical": This represents appreciably similar content.
- Are canonicals a secondary solution to 301-redirecting experiences?
- Are canonicals pointing to end-state URLs?
- Is the content appreciably similar?
- Since Google maintains the prerogative over determining the end-state URL, it’s important that the canonical tags represent true duplicates (and/or duplicate content).
- Are all canonicals in HTML?
- Presumably Google prefers canonical tags in the HTML. Although there have been some studies that show that Google can pick up JavaScript canonical tags, from my personal studies it takes significantly longer and is spottier.
- Is there safeguarding against incorrect canonical tags?
- Rel="next" and rel="prev": These represent a collective series and are not considered duplicate content, which means that all URLs can be indexed. That said, typically the first page in the chain is the most authoritative, so usually it will be the one to rank.
- Rel="alternate"
- media: typically used for separate mobile experiences.
- hreflang: indicate appropriate language/country
- The hreflang attribute is quite unforgiving, and it’s very easy to make errors.
- Ensure the documentation is followed closely.
- Check GSC International Target reports to ensure tags are populating.
- Rel="canonical": This represents appreciably similar content.
- HTTP status codes can also be signals, particularly the 304, 404, 410, and 503 status codes.
- 304 – a valid page that simply hasn’t been modified
- 404 – file not found
- 410 – file not found (and it is gone, forever and always)
- 503 – service temporarily unavailable (e.g., server maintenance)
- Google Search Console settings: Make sure the following reports are all sending clear signals. Occasionally Google decides to honor these signals.
- International Targeting
- URL Parameters
- Data Highlighter
- Remove URLs
- Sitemaps
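To tie the <link> signals above together, here’s an illustrative <head> for a hypothetical page with a separate mobile experience and a German-language alternate (all URLs are made up):

<head>
  <link rel="canonical" href="https://www.example.com/page/">
  <link rel="alternate" media="only screen and (max-width: 640px)"
        href="https://m.example.com/page/">
  <link rel="alternate" hreflang="en-us" href="https://www.example.com/page/">
  <link rel="alternate" hreflang="de-de" href="https://www.example.com/de/page/">
</head>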
Rank
Rank relates to how search engines arrange web experiences, stacking them against each other to see who ends up on top for each individual query (taking into account numerous data points surrounding the query).
Two critical questions recur often when understanding ranking pages:
- Does or could your page have the best response?
- Are you or could you become semantically known (on the Internet and in the minds of users) for the topics? (i.e., are you worthy of receiving links and people traversing the web to land on your experience?)
On-page optimizations
These are the elements webmasters control. Off-page is a critical component of achieving success in search; however, in an ideal world, we shouldn’t have to worry about links and/or mentions – they should come naturally.
Action items:
- Textual content:
- Make content both people and bots can understand
- Answer questions directly
- Write short, logical, simple sentences
- Ensure subjects are clear (not to be inferred)
- Create scannable content (i.e., make sure <h#> tags are an outline, use bullets/lists, use tables, charts, and visuals to delineate content, etc.)
- Define any uncommon vocabulary or link to a glossary
- Multimedia (images, videos, engaging elements):
- Use imagery, videos, engaging content where applicable
- Ensure that image optimization best practices are followed
- If you’re looking for a comprehensive resource check out https://images.guide
- Meta elements (<title> tags, meta descriptions, OGP, Twitter cards, etc.)
- Structured data
- Schema.org (check out Google’s supported markup and TechnicalSEO.com’s markup helper tool)
- Use Accessible Rich Internet Applications (ARIA)
- Use semantic HTML (especially hierarchically organized, relevant <h#> tags and unordered and ordered lists (<ul>, <ol>))
- Is content accessible?
- Is there keyboard functionality?
- Are there text alternatives for non-text media? Example:
- Transcripts for audio
- Images with alt text
- In-text descriptions of visuals
- Is there adequate color contrast?
- Is text resizable?
Finding interesting content
Researching and identifying useful content happens in three formats:
- Keyword and search landscape research
- On-site analytic deep dives
- User research
Audience research
When looking for audiences, we need to concentrate on high percentages (super high index rates are great, but not required). Push channels (particularly ones with strong targeting capabilities) do better with high index rates. This makes sense: we need to know that 80% of our customers have certain leanings (because we’re looking for the base case), not that five users over-index on a niche topic (those five niche-topic lovers are perfect for targeted ads).
Some seed research questions:
- Who are users?
- Where are they?
- Why do they buy?
- How do they buy?
- What do they want?
- Are they new or existing users?
- What do they value?
- What are their motivators?
- What is their relationship w/ tech?
- What do they do online?
- Are users engaging with other brands?
- Is there an opportunity for synergy?
- What can we borrow from other channels?
- Digital channels present a wealth of data, in which 1:1, closed-loop, people-based marketing exists. Leverage any data you can get and find useful.
Content journey maps
All of this data can then go into creating a map of the user journey and overlaying relevant content. Below are a few types of mappings that are useful.
Illustrative user journey map
Sometimes when trying to process complex problems, it’s easier to break it down into smaller pieces. Illustrative user journeys can help with this problem! Take a single user’s journey and map it out, aligning relevant content experiences.
Funnel content mapping
This chart is deceptively simple; however, working through this graph can help sites to understand how each stage in the funnel affects users (note: the stages can be modified). This matrix can help with mapping who writers are talking to, their needs, and how to push them to the next stage in the funnel.
Content matrix
Mapping out content by intent and branding helps to visualize conversion potential. I find these extremely useful for prioritizing top-converting content initiatives (i.e., start with ensuring branded, transactional content is delivering the best experience, then move towards more generic, higher-funnel terms).
Overviews
Regardless of how the data is broken down, it’s vital to have a high-level view on the audience’s core attributes, opportunities to improve content, and strategy for closing the gap.
Connecting
Connecting is all about resonating with humans. Connecting is about understanding that customers are human (and we have certain constraints). Our mind is constantly filtering, managing, multitasking, processing, coordinating, organizing, and storing information. It is literally in our mind’s best interest to not remember 99% of the information and sensations that surround us (think of the lights, sounds, tangible objects, people surrounding you, and you’re still able to focus on reading the words on your screen — pretty incredible!).
To become psychologically sticky, we must:
- Get past the mind’s natural filter. A positive aspect of being a pull marketing channel is that individuals are already seeking out information, making it possible to intersect their user journey in a micro-moment.
- From there we must be memorable. The brain tends to hold onto what’s relevant, useful, or interesting. Luckily, the searcher’s interest is already piqued (even if they aren’t consciously aware of why they searched for a particular topic).
This means we have a unique opportunity to “be there” for people. This leads to a very simple, abstract philosophy: a great brand is like a great friend.
We have similar relationship stages, we interweave throughout each other’s lives, and we have the ability to impact happiness. This comes down to the question: Do your online customers use adjectives they would use for a friend to describe your brand?
Action items:
- Is all content either relevant, useful, or interesting?
- Does the content honor your user’s questions?
- Does your brand have a personality that aligns with reality?
- Are you treating users as you would a friend?
- Do your users use friend-like adjectives to describe your brand and/or site?
- Do the brand’s actions align with overarching goals?
- Is your experience trust-inspiring?
- Is the site on HTTPS?
- Are ads limited in the layout?
- Does the site have proof of claims?
- Does the site use relevant reviews and testimonials?
- Is contact information available and easily findable?
- Is relevant information intuitively available to users?
- Is it as easy to buy/subscribe as it is to return/cancel?
- Is integrity visible throughout the entire conversion process and experience?
- Does site have credible reputation across the web?
Ultimately, being able to strategically, seamlessly create compelling user experiences which make sense to bots is what the SEO cyborg is all about. ☺
tl;dr
- Ensure site = crawlable, renderable, and indexable
- Ensure all signals = clear, aligned
- Answering related, semantically salient questions
- Research keywords, the search landscape, site performance, and develop audience segments
- Use audience segments to map content and prioritize initiatives
- Ensure content is relevant, useful, or interesting
- Treat users as friends; be worthy of their trust
This article is based on my MozCon talk (with a few slides from the Appendix pulled forward). The full deck is available on Slideshare, and the official videos can be purchased here. Please feel free to reach out with any questions in the comments below or via Twitter @AlexisKSanders.
How to use data to power target account selection
Canonical tag vs 301 redirect
Rewriting the Beginner's Guide to SEO, Chapter 5: Technical Optimization
Posted by BritneyMuller
After a short break, we're back to share our working draft of Chapter 5 of the Beginner's Guide to SEO with you! This one was a whopper, and we're really looking forward to your input. Giving beginner SEOs a solid grasp of just what technical optimization for SEO is and why it matters — without overwhelming them or scaring them off the subject — is a tall order indeed. We'd love to hear what you think: did we miss anything you think is important for beginners to know? Leave us your feedback in the comments!
And in case you're curious, check back on our outline, Chapter One, Chapter Two, Chapter Three, and Chapter Four to see what we've covered so far.
Chapter 5: Technical Optimization
Basic technical knowledge will help you optimize your site for search engines and establish credibility with developers.
Now that you’ve crafted valuable content on the foundation of solid keyword research, it’s important to make sure it’s not only readable by humans, but by search engines too!
You don’t need to have a deep technical understanding of these concepts, but it is important to grasp what these technical assets do so that you can speak intelligently about them with developers. Speaking your developers’ language is important because you will likely need them to carry out some of your optimizations. They're unlikely to prioritize your asks if they can’t understand your request or see its importance. When you establish credibility and trust with your devs, you can begin to tear away the red tape that often blocks crucial work from getting done.
Pro tip: SEOs need cross-team support to be effective. It’s vital to have a healthy relationship with your developers so that you can successfully tackle SEO challenges from both sides. Don’t wait until a technical issue causes negative SEO ramifications to involve a developer. Instead, join forces in the planning stage with the goal of avoiding issues altogether. If you don’t, it can cost you time and money later.
Beyond cross-team support, understanding technical optimization for SEO is essential if you want to ensure that your web pages are structured for both humans and crawlers. To that end, we’ve divided this chapter into three sections:
- How websites work
- How search engines understand websites
- How users interact with websites
Since the technical structure of a site can have a massive impact on its performance, it’s crucial for everyone to understand these principles. It might also be a good idea to share this part of the guide with your programmers, content writers, and designers so that all parties involved in a site's construction are on the same page.
1. How websites work
If search engine optimization is the process of optimizing a website for search, SEOs need at least a basic understanding of the thing they're optimizing!
Below, we outline the website’s journey from domain name purchase all the way to its fully rendered state in a browser. An important component of the website’s journey is the critical rendering path, which is the process of a browser turning a website’s code into a viewable page.
Knowing this about websites is important for SEOs to understand for a few reasons:
- The steps in this webpage assembly process can affect page load times, and speed is not only important for keeping users on your site, but it’s also one of Google’s ranking factors.
- Google renders certain resources, like JavaScript, on a “second pass.” Google will look at the page without JavaScript first, then a few days to a few weeks later, it will render JavaScript, meaning SEO-critical elements that are added to the page using JavaScript might not get indexed.
Imagine that the website loading process is your commute to work. You get ready at home, gather your things to bring to the office, and then take the fastest route from your home to your work. It would be silly to put on just one of your shoes, take a longer route to work, drop your things off at the office, then immediately return home to get your other shoe, right? That’s sort of what inefficient websites do. This chapter will teach you how to diagnose where your website might be inefficient, what you can do to streamline, and the positive ramifications on your rankings and user experience that can result from that streamlining.
Before a website can be accessed, it needs to be set up!
- Domain name is purchased. Domain names like moz.com are purchased from a domain name registrar such as GoDaddy or HostGator. These registrars are just organizations that manage the reservations of domain names.
- Domain name is linked to IP address. The Internet doesn’t understand names like “moz.com” as website addresses without the help of domain name servers (DNS). The Internet uses a series of numbers called an Internet protocol (IP) address (ex: 127.0.0.1), but we want to use names like moz.com because they’re easier for humans to remember. We need to use a DNS to link those human-readable names with machine-readable numbers.
How a website gets from server to browser
- User requests domain. Now that the name is linked to an IP address via DNS, people can request a website by typing the domain name directly into their browser or by clicking on a link to the website.
- Browser makes requests. That request for a web page prompts the browser to make a DNS lookup request to convert the domain name to its IP address. The browser then makes a request to the server for the code your web page is constructed with, such as HTML, CSS, and JavaScript.
- Server sends resources. Once the server receives the request for the website, it sends the website files to be assembled in the searcher’s browser.
- Browser assembles the web page. The browser has now received the resources from the server, but it still needs to put it all together and render the web page so that the user can see it in their browser. As the browser parses and organizes all the web page’s resources, it’s creating a Document Object Model (DOM). The DOM is what you can see when you right click + “inspect element” on a web page in your Chrome browser (learn how to inspect elements in other browsers).
- Browser makes final requests. The browser will only show a web page after all the page’s necessary code is downloaded, parsed, and executed, so at this point, if the browser needs any additional code in order to show your website, it will make an additional request from your server.
- Website appears in browser. Whew! After all that, your website has now been transformed (rendered) from code to what you see in your browser.
Pro tip: Talk to your developers about async! Something you can bring up with your developers is shortening the critical rendering path by setting scripts to "async" when they’re not needed to render content above the fold, which can make your web pages load faster. Async tells the DOM that it can continue to be assembled while the browser is fetching the scripts needed to display your web page. If the DOM has to pause assembly every time the browser fetches a script (called “render-blocking scripts”), it can substantially slow down your page load. It would be like going out to eat with your friends and having to pause the conversation every time one of you went up to the counter to order, only resuming once they got back. With async, you and your friends can continue to chat even when one of you is ordering. You might also want to bring up other optimizations that devs can implement to shorten the critical rendering path, such as removing unnecessary scripts entirely, like old tracking scripts.
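As a quick illustration (the script path is hypothetical), the difference is a single attribute:

<!-- Render-blocking: parsing pauses while this is fetched and executed -->
<script src="/js/tracking.js"></script>

<!-- Async: fetched in parallel, so the DOM can keep assembling -->
<script src="/js/tracking.js" async></script>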
Now that you know how a website appears in a browser, we’re going to focus on what a website is made of — in other words, the code (programming languages) used to construct those web pages.
The three most common are:
- HTML – What a website says (titles, body content, etc.)
- CSS – How a website looks (color, fonts, etc.)
- JavaScript – How it behaves (interactive, dynamic, etc.)
HTML: What a website says
HTML stands for hypertext markup language, and it serves as the backbone of a website. Elements like headings, paragraphs, lists, and content are all defined in the HTML.
Here’s a simplified, made-up example of what a basic web page’s HTML might look like:
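<!DOCTYPE html>
<html>
  <head>
    <title>How to Bake a Cake</title>
  </head>
  <body>
    <h1>How to Bake a Cake</h1>
    <p>Baking a cake is easier than you think. Here's what you'll need:</p>
    <ul>
      <li>Flour</li>
      <li>Sugar</li>
      <li>Eggs</li>
    </ul>
  </body>
</html>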
HTML is important for SEOs to know because it’s what lives “under the hood” of any page they create or work on. While your CMS likely doesn’t require you to write your pages in HTML (ex: selecting “hyperlink” will allow you to create a link without you having to type in “a href=”), it is what you’re modifying every time you do something to a web page such as adding content, changing the anchor text of internal links, and so on. Google crawls these HTML elements to determine how relevant your document is to a particular query. In other words, what’s in your HTML plays a huge role in how your web page ranks in Google organic search!
CSS: How a website looks
CSS stands for cascading style sheets, and this is what causes your web pages to take on certain fonts, colors, and layouts. HTML was created to describe content, rather than to style it, so when CSS entered the scene, it was a game-changer. With CSS, web pages could be “beautified” without requiring manual coding of styles into the HTML of every page — a cumbersome process, especially for large sites.
It wasn’t until 2014 that Google’s indexing system began to render web pages more like an actual browser, as opposed to a text-only browser. A black-hat SEO practice that tried to capitalize on Google’s older indexing system was hiding text and links via CSS for the purpose of manipulating search engine rankings. This “hidden text and links” practice is a violation of Google’s quality guidelines.
Components of CSS that SEOs, in particular, should care about:
- Since style directives can live in external stylesheet files (CSS files) instead of your page’s HTML, it makes your page less code-heavy, reducing file transfer size and making load times faster.
- Browsers still have to download resources like your CSS file, so compressing them can make your web pages load faster, and page speed is a ranking factor.
- Having your pages be more content-heavy than code-heavy can lead to better indexing of your site’s content.
- Using CSS to hide links and content can get your website manually penalized and removed from Google’s index.
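For example, keeping styles in one external, cacheable file (the path is hypothetical) rather than repeating inline styles on every page:

<link rel="stylesheet" href="/css/styles.css">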
JavaScript: How a website behaves
In the earlier days of the Internet, web pages were built with HTML. When CSS came along, webpage content had the ability to take on some style. When the programming language JavaScript entered the scene, websites could now not only have structure and style, but they could be dynamic.
JavaScript has opened up a lot of opportunities for non-static web page creation. When someone attempts to access a page that is enhanced with this programming language, that user’s browser will execute the JavaScript against the static HTML that the server returned, resulting in a web page that comes to life with some sort of interactivity.
You’ve definitely seen JavaScript in action — you just may not have known it! That’s because JavaScript can do almost anything to a page. It could create a pop up, for example, or it could request third-party resources like ads to display on your page.
JavaScript can pose some problems for SEO, though, since search engines don’t view JavaScript the same way human visitors do. That’s because of client-side versus server-side rendering. Most JavaScript is executed in a client’s browser. With server-side rendering, on the other hand, the files are executed at the server and the server sends them to the browser in their fully rendered state.
SEO-critical page elements such as text, links, and tags that are loaded on the client’s side with JavaScript, rather than represented in your HTML, are invisible from your page’s code until they are rendered. This means that search engine crawlers won’t see what’s in your JavaScript — at least not initially.
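A tiny, illustrative example of the problem: the server sends an empty shell, and the heading exists only after JavaScript runs.

<div id="app"></div>
<script>
  document.getElementById('app').innerHTML =
    '<h1>Welcome!</h1><p>This text appears only after JavaScript executes.</p>';
</script>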
Google says that, as long as you’re not blocking Googlebot from crawling your JavaScript files, they’re generally able to render and understand your web pages just like a browser can, which means that Googlebot should see the same things as a user viewing a site in their browser. However, due to this “second wave of indexing” for client-side JavaScript, Google can miss certain elements that are only available once JavaScript is executed.
There are also some other things that could go wrong during Googlebot’s process of rendering your web pages, which can prevent Google from understanding what’s contained in your JavaScript:
- You’ve blocked Googlebot from JavaScript resources (ex: with robots.txt, like we learned about in Chapter 2)
- Your server can’t handle all the requests to crawl your content
- The JavaScript is too complex or outdated for Googlebot to understand
- JavaScript doesn’t "lazy load" content into the page until after the crawler has finished with the page and moved on.
Needless to say, while JavaScript does open a lot of possibilities for web page creation, it can also have some serious ramifications for your SEO if you’re not careful. Thankfully, there is a way to check whether Google sees the same thing as your visitors. To see a page how Googlebot views your page, use Google Search Console's "Fetch and Render" tool. From your site’s Google Search Console dashboard, select “Crawl” from the left navigation, then “Fetch as Google.”
From this page, enter the URL you want to check (or leave blank if you want to check your homepage) and click the “Fetch and Render” button. You also have the option to test either the desktop or mobile version.
In return, you’ll get a side-by-side view of how Googlebot saw your page versus how a visitor to your website would have seen the page. Below, Google will also show you a list of any resources they may not have been able to get for the URL you entered.
Understanding the way websites work lays a great foundation for what we’ll talk about next, which is technical optimizations to help Google understand the pages on your website better.
2. How search engines understand websites
Search engines have gotten incredibly sophisticated, but they can’t (yet) find and interpret web pages quite like a human can. The following sections outline ways you can better deliver content to search engines.
Help search engines understand your content by structuring it with Schema
Imagine being a search engine crawler scanning down a 10,000-word article about how to bake a cake. How do you identify the author, recipe, ingredients, or steps required to bake a cake? This is where schema (Schema.org) markup comes in. It allows you to spoon-feed search engines more specific classifications for what type of information is on your page.
Schema is a way to label or organize your content so that search engines have a better understanding of what certain elements on your web pages are. This code provides structure to your data, which is why schema is often referred to as “structured data.” The process of structuring your data is often referred to as “markup” because you are marking up your content with organizational code.
JSON-LD is Google’s preferred schema markup (announced in May ‘16), which Bing also supports. To view a full list of the thousands of available schema markups, visit Schema.org or view the Google Developers Introduction to Structured Data for additional information on how to implement structured data. After you implement the structured data that best suits your web pages, you can test your markup with Google’s Structured Data Testing Tool.
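To make this concrete, here’s a JSON-LD sketch for the cake example above (the recipe details are invented; see Schema.org’s Recipe type for the full property list):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Simple Vanilla Cake",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "recipeIngredient": ["2 cups flour", "1 cup sugar", "3 eggs"],
  "recipeInstructions": [
    { "@type": "HowToStep", "text": "Mix the dry ingredients." },
    { "@type": "HowToStep", "text": "Add the eggs and beat until smooth." },
    { "@type": "HowToStep", "text": "Bake at 350°F for 30 minutes." }
  ]
}
</script>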
In addition to helping bots like Google understand what a particular piece of content is about, schema markup can also enable special features to accompany your pages in the SERPs. These special features are referred to as "rich snippets," and you’ve probably seen them in action. They’re things like:
- Top Stories carousel
- Review stars
- Sitelinks search boxes
- Recipes
Remember, using structured data can help enable a rich snippet to be present, but does not guarantee it. Other types of rich snippets will likely be added in the future as the use of schema markup increases.
Some last words of advice for schema success:
- You can use multiple types of schema markup on a page. However, if you mark up one element, like a product for example, and there are other products listed on the page, you must also mark up those products.
- Don’t mark up content that is not visible to visitors and follow Google’s Quality Guidelines. For example, if you add review structured markup to a page, make sure those reviews are actually visible on that page.
- If you have duplicate pages, Google asks that you mark up each duplicate page with your structured markup, not just the canonical version.
- Provide original and updated (if applicable) content on your structured data pages.
- Structured markup should be an accurate reflection of your page.
- Try to use the most specific type of schema markup for your content.
- Marked-up reviews should not be written by the business. They should be genuine unpaid business reviews from actual customers.
Tell search engines about your preferred pages with canonicalization
When Google crawls the same content on different web pages, it sometimes doesn’t know which page to index in search results. This is why the canonical tag was invented: to help search engines better index the preferred version of content and not all its duplicates.
The rel="canonical" tag allows you to tell search engines where the original, master version of a piece of content is located. You’re essentially saying, "Hey search engine! Don’t index this; index this source page instead." So, if you want to republish a piece of content, whether exactly or slightly modified, but don’t want to risk creating duplicate content, the canonical tag is here to save the day.
Proper canonicalization ensures that every unique piece of content on your website has only one URL. To prevent search engines from indexing multiple versions of a single page, Google recommends having a self-referencing canonical tag on every page on your site. Without a canonical tag telling Google which version of your web page is the preferred one, http://www.example.com could get indexed separately from http://example.com, creating duplicates.
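As an illustration, the <head> of both http://www.example.com and http://example.com (and any parameterized variants) could declare the same preferred URL:

<link rel="canonical" href="https://www.example.com/">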
"Avoid duplicate content" is an Internet truism, and for good reason! Google wants to reward sites with unique, valuable content — not content that’s taken from other sources and repeated across multiple pages. Because engines want to provide the best searcher experience, they will rarely show multiple versions of the same content, opting instead to show only the canonicalized version, or if a canonical tag does not exist, whichever version they deem most likely to be the original.
Pro tip: Distinguishing between content filtering & content penalties
It’s also very common for websites to have multiple duplicate pages due to sort and filter options. For example, on an e-commerce site, you might have what’s called a faceted navigation that allows visitors to narrow down products to find exactly what they’re looking for, such as a “sort by” feature that reorders results on the product category page from lowest to highest price. This could create a URL that looks something like this: example.com/mens-shirts?sort=price_ascending. Add in more sort/filter options like color, size, material, brand, etc. and just think about all the variations of your main product category page this would create!
To learn more about different types of duplicate content, this post by Dr. Pete helps distill the different nuances.
3. How users interact with websites
In Chapter 1, we said that despite SEO standing for search engine optimization, SEO is as much about people as it is about search engines themselves. That’s because search engines exist to serve searchers. This goal helps explain why Google’s algorithm rewards websites that provide the best possible experiences for searchers, and why some websites, despite having qualities like robust backlink profiles, might not perform well in search.
When we understand what makes searchers’ web browsing experiences optimal, we can create those experiences for maximum search performance.
Ensuring a positive experience for your mobile visitors
Given that well over half of all web traffic today comes from mobile, it’s safe to say that your website should be accessible and easy to navigate for mobile visitors. In April 2015, Google rolled out an update to its algorithm that would promote mobile-friendly pages over non-mobile-friendly pages. So how can you ensure that your website is mobile friendly? Although there are three main ways to configure your website for mobile, Google recommends responsive web design.
Responsive design
Responsive websites are designed to fit the screen of whatever type of device your visitors are using. You can use CSS to make the web page "respond" to the device size. This is ideal because it prevents visitors from having to double-tap or pinch-and-zoom in order to view the content on your pages. Not sure if your web pages are mobile friendly? You can use Google’s mobile-friendly test to check!
AMP
AMP stands for Accelerated Mobile Pages, and it is used to deliver content to mobile visitors at speeds much greater than with non-AMP delivery. AMP is able to deliver content so fast because it delivers content from its cache servers (not the original site) and uses a special AMP version of HTML and JavaScript. Learn more about AMP.
Mobile-first indexing
As of 2018, Google started switching websites over to mobile-first indexing. That change sparked some confusion between mobile-friendliness and mobile-first, so it’s helpful to disambiguate. With mobile-first indexing, Google crawls and indexes the mobile version of your web pages. Making your website compatible with mobile screens is good for users and your performance in search, but mobile-first indexing happens independently of mobile-friendliness.
This has raised some concerns for websites that lack parity between mobile and desktop versions, such as showing different content, navigation, links, etc. on their mobile view. A mobile site with different links, for example, will alter the way in which Googlebot (mobile) crawls your site and sends link equity to your other pages.
Breaking up long content for easier digestion
When sites have very long pages, they have the option of breaking them up into multiple parts of a whole. This is called pagination, and it’s similar to pages in a book. In order to avoid giving the visitor too much all at once, you can break up your single page into multiple parts. This can be great for visitors, especially on e-commerce sites where a category contains a lot of product results, but there are some steps you should take to help Google understand the relationship between your paginated pages: rel="next" and rel="prev" markup.
You can read more about pagination in Google’s official documentation, but the main takeaways are that:
- The first page in a sequence should only have rel="next" markup
- The last page in a sequence should only have rel="prev" markup
- Pages that have both a preceding and following page should have both rel="next" and rel="prev"
- Since each page in the sequence is unique, don’t canonicalize them to the first page in the sequence. Only use a canonical tag to point to a “view all” version of your content, if you have one.
- When Google sees a paginated sequence, it will typically consolidate the pages’ linking properties and send searchers to the first page
Pro tip: rel="next/prev" should still have anchor text and live within an <a> link |
Improving page speed to mitigate visitor frustration
Google wants to serve content that loads lightning-fast for searchers. We’ve come to expect fast-loading results, and when we don’t get them, we’ll quickly bounce back to the SERP in search of a better, faster page. This is why page speed is a crucial aspect of on-site SEO. We can improve the speed of our web pages by taking advantage of tools like the ones we’ve mentioned below. Click on the links to learn more about each.
- Google's PageSpeed Insights tool & best practices documentation
- GTMetrix
- Google's Mobile Website Speed & Performance Tester
- Google Lighthouse
Images are one of the main culprits of slow pages!
As discussed in Chapter 4, images are one of the number-one reasons for slow-loading web pages! In addition to image compression, optimizing image alt text, choosing the right image format, and submitting image sitemaps, there are other technical ways to optimize the speed and way in which images are shown to your users. Some primary ways to improve image delivery are as follows:
SRCSET: How to deliver the best image size for each device
The SRCSET attribute allows you to have multiple versions of your image and then specify which version should be used in different situations. This piece of code is added to the <img> tag (where your image is located in the HTML) to provide unique images for specific-sized devices.
This is like the concept of responsive design that we discussed earlier, except for images!
This doesn’t just speed up your image load time, it’s also a unique way to enhance your on-page user experience by providing different and optimal images to different device types.
Pro tip: There are more than just three image size versions!
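A sketch of srcset in action (filenames and breakpoints are made up): the browser picks the smallest adequate file for the device.

<img src="cake-800.jpg"
     srcset="cake-400.jpg 400w, cake-800.jpg 800w, cake-1600.jpg 1600w"
     sizes="(max-width: 600px) 400px, 800px"
     alt="A frosted vanilla cake">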
Show visitors image loading is in progress with lazy loading
Lazy loading occurs when you go to a webpage and, instead of seeing a blank white space for where an image will be, a blurry lightweight version of the image or a colored box in its place appears while the surrounding text loads. After a few seconds, the image clearly loads in full resolution. The popular blogging platform Medium does this really well.
The low resolution version is initially loaded, and then the full high resolution version. This also helps to optimize your critical rendering path! So while all of your other page resources are being downloaded, you’re showing a low-resolution teaser image that helps tell users that things are happening/being loaded. For more information on how you should lazy load your images, check out Google’s Lazy Loading Guidance.
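One common pattern (a simplified sketch; the filenames and class name are hypothetical) is to ship a tiny preview image and swap in the full-resolution file once it scrolls into view:

<img src="cake-preview.jpg" data-src="cake-full.jpg" alt="A frosted cake" class="lazy">
<script>
  // Swap the low-res preview for the full image when it becomes visible
  const observer = new IntersectionObserver((entries, obs) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        entry.target.src = entry.target.dataset.src;
        obs.unobserve(entry.target);
      }
    });
  });
  document.querySelectorAll('img.lazy').forEach((img) => observer.observe(img));
</script>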
Improve speed by condensing and bundling your files
Page speed audits will often make recommendations such as “minify resource,” but what does that actually mean? Minification condenses a code file by removing things like line breaks and spaces, as well as abbreviating code variable names wherever possible.
“Bundling” is another common term you’ll hear in reference to improving page speed. The process of bundling combines a bunch of the same coding language files into one single file. For example, a bunch of JavaScript files could be put into one larger file to reduce the amount of JavaScript files for a browser.
By both minifying and bundling the files needed to construct your web page, you’ll speed up your website and reduce the number of your HTTP (file) requests.
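As a tiny illustration of minification (the function is made up):

// Before: readable for humans
function calculateTotal(price, quantity) {
  return price * quantity;
}

// After: same behavior, fewer bytes
function c(p,q){return p*q}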
Improving the experience for international audiences
Websites that target audiences from multiple countries should familiarize themselves with international SEO best practices in order to serve up the most relevant experiences. Without these optimizations, international visitors might have difficulty finding the version of your site that caters to them.
There are two main ways a website can be internationalized:
- Language: Sites that target speakers of multiple languages are considered multilingual websites. These sites should add the hreflang tag to show Google that your page has copy for another language. Learn more about hreflang.
- Country: Sites that target audiences in multiple countries are called multi-regional websites, and they should choose a URL structure that makes it easy to target their domain or pages to specific countries. This can include the use of a country code top-level domain (ccTLD) such as “.ca” for Canada, or a generic top-level domain (gTLD) with a country-specific subfolder such as “example.com/ca” for Canada. Learn more about locale-specific URLs.
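For instance, a hypothetical site with Canadian and US subfolders could declare both (plus an x-default fallback) in each page’s <head>:

<link rel="alternate" hreflang="en-ca" href="https://www.example.com/ca/">
<link rel="alternate" hreflang="en-us" href="https://www.example.com/us/">
<link rel="alternate" hreflang="x-default" href="https://www.example.com/">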
You’ve researched, you’ve written, and you’ve optimized your website for search engines and user experience. The next piece of the SEO puzzle is a big one: establishing authority so that your pages will rank highly in search results.
SearchCap: Google+ closing down, Bing Ads Editor in-marketing audiences & Google My Business analytics