Wednesday, August 10, 2016

Why agencies need to learn from the past and evolve

SEO agencies need to place greater emphasis on user experience. But how can agencies make this transition? Columnist Ian Bowden believes we can learn from the recent shift to content marketing. The post Why agencies need to learn from the past and evolve appeared first on Search Engine Land.

Please visit Search Engine Land for the full article.


from Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing http://ift.tt/2aLrdTH

Someday You'll Thank Me: An Essential Task List for Junior SEOs to Master

Posted by DaveSottimano

Let’s face it: SEO isn’t as black & white as most marketing channels. In my opinion, becoming a true professional requires a broad skill set. It’s not that a professional SEO needs to know the answer to everything; rather, it’s more important to have the skills to find the answer.

I’m really pleased with the results of various bits of training I’ve put together for successful juniors over the years, so I think it’s time to share.

This is a Junior SEO task list designed to help new starters in the field get the right skills by doing hands-on jobs, and possibly to help find a specialism in SEO or digital marketing.

How long should this take? Let’s ballpark at 60–90 days.

Before anything, here’s some prerequisite reading:

Project 1 – Technical Fundamentals:

The trainee should master the lingo and have a decent idea of how the Internet works before they start having conversations with developers or contributing online. Have the trainee answer the following questions, using analogies to demonstrate that they understand. Take inspiration from this post.

Must be able to answer the following in detail:

  • What is HTTP / HTTPS / HTTP2? Explain connections and how they flow.
  • Do root domains have trailing slashes?
  • What are the fundamental parts of a URL?
  • What is "www," anyway?
  • What are generic ccTLDs?
  • Describe the transaction between a client and a server.
  • What do we mean when we say "client side" and "server side?"
  • Name 3 common web servers. Explain each one.
  • How does DNS work?
  • What are ports?
  • How do I see/find my public IP address?
  • What is a proxy server?
  • What is a reverse proxy server?
  • How do CDNs work?
  • What is a VPN?
  • What are server response codes and how do they relate to SEO? (See the sketch after this list.)
  • What is the difference between URL rewriting and redirecting?
  • What is MVC?
  • What is a development sprint / scrum?
  • Describe a development deployment workflow.
  • What are the core functions that power Google search?
  • What is PageRank?
  • What is toolbar PageRank?
  • What is the reasonable surfer model?
  • What is the random surfer model?
  • What is Mozrank, Domain Authority, and Page Authority — and how are they calculated?
  • Name 3 Google search parameters and explain what they do (hint: gl= country).
  • What advanced operator search query will return: all URLs with https, with “cat” in the title, not including www subdomains, and only PDFs?
  • Describe filtering in search results, and which parameter can be appended to the search URL to omit filtering.
  • How can I Google search by a specific date?
  • If we say something is "indexed," what does that mean?
  • If we say something is "canonicalized," what does that mean?
  • If we say something is "indexable," what does that mean?
  • If we say something is "non indexable," what does that mean?
  • If we say something is "crawlable," what does that mean?
  • If we say something is "not crawlable," what does that mean?
  • If we say something is "blocked," what does that mean?
  • Give examples of "parameters" in the wild, and manipulate any parameter on any website to show different content.
  • How should you check rankings for a particular keyword in a particular country?
  • Where are some places online you can speak to Googlers for advice?
  • What are the following: rel canonical, noindex, nofollow, hreflang, mobile alternate? (Explain each directive and its behavior in detail, and state any variations in implementation.)
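
Several of these questions (response codes, redirects, the client/server transaction) are easiest to internalize by poking at a live site. Here’s a minimal sketch, assuming Python 3 with the requests library installed; the URL is just a placeholder.

    # Inspect one HTTP transaction: status code, a few headers, and any redirect hops.
    import requests

    response = requests.get("https://www.example.com/", allow_redirects=True)

    print("Final status code:", response.status_code)            # e.g. 200, 301, 404, 503
    print("Server header:", response.headers.get("Server"))      # e.g. Apache, nginx, IIS
    print("Content-Type:", response.headers.get("Content-Type"))

    # Each hop in a redirect chain is an extra client/server round trip.
    for hop in response.history:
        print("Redirect:", hop.status_code, hop.url, "->", hop.headers.get("Location"))

Running this against a URL that 301s (for example, the non-www version of a www site) makes the relationship between redirects and extra round trips very concrete.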

Explaining metrics from popular search tools

  • Explain SearchMetrics search visibility — how is this calculated? Why would you see declines in SM graphs but not in actual organic traffic?
  • Explain Google Trends Index — how is this calculated?
  • Explain Google Keyword Planner search volume estimates & competition metric — is search volume accurate? Is the competition metric useful for organic?
  • Explain SEMrush.com’s organic traffic graphs — Why might you see declines in SEMrush graphs, but not in actual organic traffic?

Link architecture

  • By hand, map out the world’s first website — http://ift.tt/izkeKL (we want to see the full link architecture here in a way that’s digestible; a small crawl sketch follows this list for double-checking your map)
  • Explain its efficiency from an SEO perspective — are this website’s pages linked efficiently? Why or why not?
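
Once the map is drawn by hand, a small crawl can double-check it. This is only a rough sketch, assuming Python 3 with requests and BeautifulSoup (bs4) installed; the START URL is the commonly cited address of the first web page, so swap in whatever the shortlink above resolves to. It stays on the same host and prints one line per link edge.

    from urllib.parse import urljoin, urlparse
    import requests
    from bs4 import BeautifulSoup

    START = "http://info.cern.ch/hypertext/WWW/TheProject.html"   # commonly cited first website
    seen, queue = set(), [START]

    while queue:
        url = queue.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            target = urljoin(url, a["href"])
            if urlparse(target).netloc == urlparse(START).netloc:
                print(url, "->", target)      # one edge of the link graph
                queue.append(target)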

Project 2 – Creating a (minimum) 10-page website

If the trainee doesn’t understand what something is, make sure that they try to figure it out themselves before coming for help. Building a website by hand is absolutely painful, and they might want to throw their computer out the window or just install WordPress — no, no, no. There are so many things to learn by doing it the hard way, which is the only way.

  1. Grab a domain name and set up shared hosting. A LAMP stack with cPanel and log file access (example: HostGator) is probably the easiest.
  2. Set up FileZilla with your host’s FTP details
  3. Set up a text editor (example: Notepad++, Sublime) and connect via FTP for quick deployment
  4. Create a 10-page flat site (NO CMS. That means no WordPress!)
    • Within the site, it must contain at least one instance of each of the following:
      • <div>, <table>, <a>, <strong>, <em>, <iframe>, <button>, <noscript>, <form>, <option>, <img>, <h1>, <h2>, <h3>, <p>, <span>
      • Inline CSS that shows/hides a div on hover
      • Unique titles, meta descriptions, and H1s on every page
      • Must contain at least 3 folders
      • Must have at least 5 pages, each targeted to a different country
      • Recreate the navigation menu from the bbc.co.uk homepage (or your choice) using an external CSS stylesheet
      • Do the same as the previous item, but make the JavaScript external, and the function must execute on a button click.
      • Must receive 1,000 organic sessions in one month
      • Must contain Google Analytics tracking, Google search console setup, Bing webmaster tools, and Yandex webmaster tools setup
      • Create a custom 404 page
      • Create a 301, 302, and 307 redirect (a quick way to watch their behavior is sketched after this list)
      • Create a canonical to an exact duplicate, and another to a unique page — watch behavior
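
A quick way to watch redirect behavior without a browser plugin is to request each URL with redirects disabled and read the status code and Location header. Here’s a minimal sketch, assuming Python 3 with requests; the hostname and paths are placeholders for whatever you set up on your own site.

    import requests

    # Hypothetical test URLs; replace with the redirects you actually created.
    for path in ["/old-301", "/old-302", "/old-307"]:
        r = requests.get("https://www.your-site.example" + path, allow_redirects=False)
        print(path, r.status_code, "->", r.headers.get("Location"))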

The site must contain at least one instance of each of the following, and every page that carries a directive (along with the pages affected by it) must be tracked through a rank tracker (a quick verification sketch follows this list):

  • Rel canonical
  • Noindex
  • Noindex, follow
  • Mobile alternate (one page must be mobile-friendly)
  • Noarchive
  • Noimageindex
  • Meta refresh
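
Before rank tracking starts, it’s worth confirming that each directive actually shows up in the served HTML. Here’s a rough sketch, assuming Python 3 with requests and BeautifulSoup; the URL is a placeholder for a page on your own site, and it only covers canonical, meta robots, and hreflang (check the raw source for the rest).

    import requests
    from bs4 import BeautifulSoup

    html = requests.get("https://www.your-site.example/page.html").text
    soup = BeautifulSoup(html, "html.parser")

    canonical = soup.find("link", rel="canonical")
    robots = soup.find("meta", attrs={"name": "robots"})
    hreflang = soup.find_all("link", rel="alternate", hreflang=True)

    print("rel=canonical:", canonical["href"] if canonical else "none")
    print("meta robots:", robots["content"] if robots else "none")
    print("hreflang:", [(link["hreflang"], link["href"]) for link in hreflang])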

Set up rank tracking

The trainee can use whatever tracking tool they like; http://ift.tt/1nfnpgT is free for 100 keywords. The purpose of the rank tracking is to measure the effects of directives implemented, redirects, and general fluctuation.

Create the following XML sitemaps:

  • Write the following XML sitemaps by hand for at least 5 URLs: mobile, desktop, Android App, and create one desktop XML sitemap with hreflang annotations
  • Figure out how to ping Google & Bing with your sitemap URL (see the sketch below)
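
For reference, here’s a minimal sketch of what the hand-written hreflang sitemap can look like and how the ping step can be scripted. It assumes Python 3 with requests; the site URLs are placeholders, and the ping endpoints shown are the ones Google and Bing document at the time of writing.

    import requests

    sitemap = """<?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:xhtml="http://www.w3.org/1999/xhtml">
      <url>
        <loc>https://www.your-site.example/en/page.html</loc>
        <xhtml:link rel="alternate" hreflang="en" href="https://www.your-site.example/en/page.html"/>
        <xhtml:link rel="alternate" hreflang="de" href="https://www.your-site.example/de/page.html"/>
      </url>
    </urlset>"""

    with open("sitemap.xml", "w") as f:
        f.write(sitemap)

    # After uploading sitemap.xml to your server, ping the engines with its URL.
    sitemap_url = "https://www.your-site.example/sitemap.xml"
    for endpoint in ("https://www.google.com/ping", "https://www.bing.com/ping"):
        print(endpoint, requests.get(endpoint, params={"sitemap": sitemap_url}).status_code)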

Writing robots.txt

  • Design a robots.txt that has specific blocking conditions for regular Googlebot, Bingbot, and all other user agents. The groups must be independent and not interfere with each other.
  • Write a rule that disallows everything, but allows at least 1 folder.
  • Test the robots.txt file through the Search Console robots.txt tester (or sanity-check it locally first; see the sketch below).
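
The rules can also be sanity-checked locally before anything is uploaded. Here’s a minimal sketch using only the Python 3 standard library; the user agents, paths, and rules below are placeholders for whatever you actually design.

    from urllib.robotparser import RobotFileParser

    rules = """User-agent: *
    Allow: /public/
    Disallow: /

    User-agent: Googlebot
    Disallow: /private/

    User-agent: Bingbot
    Disallow: /drafts/
    """

    parser = RobotFileParser()
    parser.parse(rules.splitlines())

    # Each group should behave independently of the others.
    for agent in ("*", "Googlebot", "Bingbot"):
        for url in ("/public/page.html", "/private/page.html", "/drafts/page.html"):
            print(agent, url, "allowed" if parser.can_fetch(agent, url) else "blocked")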

Crawl the site and fix errors (Use Screaming Frog)

  • Have the trainee read: http://ift.tt/2beFcCV
  • Ensure the trainee has a full, registered version of the software
  • Crawl the site and have them correct any errors on the site

Project 3 – PR, Sales, Promotion and Community Involvement

These tasks can be done on an independent website or directly for a client; it depends on your organizational requirements. This is the part of the training where the trainee learns how to negotiate, sell, listen, promote, and create exposure for themselves.

Sales & negotiation

  • Close one guest post deal (i.e. have your content placed on an external website). Bonus if this is done via a phone call.
  • Create & close one syndication deal (i.e. have your content placed and rel canonical’d back to your content). Bonus if this is done via a phone call.
  • Close one advertising deal (this could be as simple as negotiating a banner placement, and as hard as completely managing the development of the ad plus tracking)
  • Sit in on 5 sales calls (depending on your business, this may need to be adjusted — it could be customer service calls)
  • Sit in on 5 sales meetings (again, adjust this for your business)

PR

  1. Create a story, write a press release, get the story covered by any publication (bonus if there’s a link back to your original release, or a rel canonical)
  2. Use a PR wire to syndicate, or find your own syndication partner.

Community involvement

  • Sign up for a Moz account and answer at least 15 questions in the forum
  • Sign up for a Quora account and answer at least 5 questions
  • Write 3 blog posts and get them featured on an industry website
  • Speak at an event, no matter how small; the talk must be at least 10 minutes long

YouTube

  • Create a screencast tutorial, upload it to YouTube, get 1,000 views (they will also need to optimize description, tags, etc.)
  • Here’s an example: https://www.youtube.com/watch?v=EXhmF9rjqP4 (that was my first try at this, years ago, which you can use as inspiration)

Facebook & Twitter Paid Ads

  • On both networks, pay to get 100 visits from an ad. These campaigns must be tracked properly in an analytics platform, not only in FB and Twitter analytics!

Adwords

  • Create 1 campaign (custom ad) with the goal of finding the real number of impressions versus the estimated search volume from Keyword Planner.
  • Bonus: Drive 100 visits with an ad. Remember to keep the costs low — this is just training!

Project 4 – Data Manipulation & Analytics

Spreadsheets are to SEOs as fire trucks are to firefighters. Trainees need to be proficient in Excel or Google Docs right from the start. These tasks are useful for grasping data manipulation techniques in spreadsheets, Google Analytics, and some more advanced subjects, like scraping and machine learning classification.

Excel skills

Must be able to fill in required arguments for the following formulas in under 6 seconds:

  • Index + match
  • VLOOKUP (we should really be teaching people to use INDEX + MATCH, because it’s more versatile and quicker when dealing with larger datasets; see the sketch after this list)
  • COUNTIF, COUNTIFS (2 conditions)
  • SUMIF, SUMIFS (2 conditions)
  • IF & AND statement in the same formula
  • Max, Min, Sum, Avg, Correl, Percentile, Len, Mid, Left, Right, Search, & Offset are also required formulas.
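
For the two lookup patterns specifically, here’s a minimal sketch of the formula shapes; the sheet and cell references are placeholders.

    =VLOOKUP(A2, Sheet2!A:C, 3, FALSE)
    =INDEX(Sheet2!C:C, MATCH(A2, Sheet2!A:A, 0))

Both return the Sheet2 column C value for the key in A2, but INDEX + MATCH doesn’t require the key to sit in the left-most column and doesn’t break when columns are inserted.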

Also:

  • Conditional formatting based on a formula
  • Create a meaningful pivot table + chart
  • Record a macro that will actually be used
  • Ability to copy, paste, move, and transpose, including copying an entire row and pasting it into a new sheet — all while never touching the mouse.

Google Analytics

  • Install Google Analytics (Universal Analytics), and Google Tag Manager at least once — ensure that the bare minimum tracking works properly.
  • Pass the GAIQ Exam with at least 90%
  • Create a non-interaction event
  • Create a destination goal
  • Create a macro that finds a value in the DOM and only fires on a specific page
  • Create a custom segment, segmenting session by Google organic, mobile device only, Android operating system, US traffic only — then share the segment with another account.
  • Create an alert for increasing 404 page errors (comparison by day, threshold is 10% change)
  • Install the Google Tag Assistant for Chrome and learn to record and decipher requests for debugging
  • Use the Google Analytics Query explorer to pull from any profile — you must pull at least 3 metrics, 1 dimension, sort by 1 metric, and have 1 filter.
  • Create one Google Content Experiment — this involves creating two pages and A/B testing to find the winner. They’ll need to have some sort of call to action; it could be as simple as a form or a targeted click. Either way, traffic doesn’t determine the winner here; it’s conversion rate.

Google Search Console

  • Trainee must go through every report (I really mean every report), and double-check the accuracy of each using external SEO tools (except crawl activity reports). The point here is to find out why there are discrepancies between what SEO tools find and what Google Search Console reports.
  • Fetch and render 5 different pages from 5 domains, include at least 2 mobile pages
  • Fetch (only fetch) 3 more pages; 1 must be mobile
  • Submit an XML sitemap
  • Create https, http, www, and non-www versions of the site they built in the previous project and identify discrepancies.
  • Answer: Why don’t clicks from search analytics add up compared to Google Analytics?
  • Answer: How are impressions from search analytics measured?

Link auditing

  • Download link reports for 1 website. Use Google Search Console, Majestic, Ahrefs, and Moz, and combine them all in one Excel file (or Google Sheet). If the total number of rows across all 4 exports is over Excel’s limit, the trainee will need to figure out how to handle large files on their own (hint: SQL or another database).
  • Must combine all links, de-duplicate, have columns for all anchor texts, and check if links are still alive (hint: the trainee can use Screaming Frog or URL Profiler to check live links; a pandas sketch of these steps follows)
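
The combine / de-duplicate / liveness-check steps can be scripted once the exports are downloaded. This is only a rough sketch, assuming Python 3 with pandas and requests; the file names and the "source_url" column are placeholders that need to be mapped to the real export headers, which differ between tools.

    import pandas as pd
    import requests

    files = ["gsc_links.csv", "majestic.csv", "ahrefs.csv", "moz.csv"]   # your exports
    links = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)

    # Normalize the exports to a shared "source_url" column first, then de-duplicate.
    links = links.drop_duplicates(subset=["source_url"])

    def is_alive(url):
        """Return True if the linking page still responds with a non-error status."""
        try:
            return requests.head(url, allow_redirects=True, timeout=10).status_code < 400
        except requests.RequestException:
            return False

    links["still_alive"] = links["source_url"].apply(is_alive)
    links.to_csv("combined_link_audit.csv", index=False)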

Explore machine learning
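
A tiny text-classification exercise is one low-friction way to start. Here’s a toy sketch, assuming Python 3 with scikit-learn installed; the titles and labels are made up purely for illustration.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    titles = ["buy cheap running shoes", "running shoes sale",
              "how to tie running shoes", "history of the running shoe"]
    labels = ["commercial", "commercial", "informational", "informational"]

    vectorizer = CountVectorizer()
    model = MultinomialNB()
    model.fit(vectorizer.fit_transform(titles), labels)

    # Classify a new, unseen title.
    print(model.predict(vectorizer.transform(["best price on trail shoes"])))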

Scrape something

  • Use at least 3 different methods to extract information from any webpage (hint: import.io, importxml; one method is sketched below)
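
One of the three methods can be a short script; IMPORTXML in a Google Sheet and a point-and-click tool like import.io are two more. Here’s a minimal sketch, assuming Python 3 with requests and BeautifulSoup; the URL is a placeholder.

    import requests
    from bs4 import BeautifulSoup

    soup = BeautifulSoup(requests.get("https://www.example.com/").text, "html.parser")

    print("Title:", soup.title.string if soup.title else None)
    print("H1s:", [h1.get_text(strip=True) for h1 in soup.find_all("h1")])
    print("Links:", [a["href"] for a in soup.find_all("a", href=True)])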

Log file analysis

  • Let the trainee use whatever software they want to parse the log files; just remember to explain how different servers will have different fields.
  • Grab a copy of any web server access log files that contain at least the following fields: user-agent, timestamp, URI, IP, Method, Referrer (ensure that CDNs or other intermediary transactions are not rewriting the IP addresses).
  • Trainee must be able to do the following:
    • Find Googlebot requests; double-check by reverse DNS that it’s actually Googlebot (see the sketch after this list)
    • Find a 4xx error encountered by Googlebot, then find the referrer for that 4xx error by looking at other user agent requests to the same 4xx error
    • Create a pivot table with all the URLs requested and the amount of times they were requested by Googlebot
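
Here’s a rough sketch of those three checks using only the Python 3 standard library. It assumes an Apache/NCSA-style combined log format; other servers will need a different pattern, and the file name is a placeholder.

    import re
    import socket
    from collections import Counter
    from functools import lru_cache

    # ip - - [timestamp] "METHOD /uri HTTP/1.1" status bytes "referrer" "user-agent"
    LINE = re.compile(r'(\S+) \S+ \S+ \[(.*?)\] "(\S+) (\S+) \S+" (\d{3}) \S+ "(.*?)" "(.*?)"')

    @lru_cache(maxsize=None)
    def is_real_googlebot(ip):
        """Reverse-DNS check; for a stricter test, resolve the host back to the IP too."""
        try:
            host = socket.gethostbyaddr(ip)[0]
        except OSError:
            return False
        return host.endswith(".googlebot.com") or host.endswith(".google.com")

    hits = Counter()
    with open("access.log") as log:
        for raw in log:
            m = LINE.match(raw)
            if not m:
                continue
            ip, ts, method, uri, status, referrer, user_agent = m.groups()
            if "Googlebot" in user_agent and is_real_googlebot(ip):
                hits[uri] += 1                                   # requests per URL
                if status.startswith("4"):
                    print("4xx hit by Googlebot:", status, uri)

    for uri, count in hits.most_common(20):
        print(count, uri)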

Keyword Planner

The candidate must be able to do the following:

  • Find YoY search volume for any given term
  • Find keyword limits, both in the interface and by uploading a CSV
  • Find the mobile trends graph for a set of keywords
  • Use negative keywords
  • Find breakdown by device

Google Chrome Development tools

The candidate must be able to do the following:

  • Turn off Javascript
  • Manipulate elements of the page (As a fun exercise, get them to change a news article to a completely new story)
  • Find every request Chrome makes when visiting a webpage
  • Download the HAR file
  • Run a speed audit & security audit directly from the development tool interface
  • Change their user agent to Googlebot
  • Emulate an Apple iPhone 5
  • Add a CSS attribute (or change one)
  • Add a breakpoint
  • Use the shortcut key to bring up development tools

Project 5 – Miscellaneous / Fun Stuff

These projects are designed to broaden the trainee’s skills, as well as prepare them for the future and introduce them to important concepts.

Use a proxy and a VPN

  • As long as they are able to connect to a proxy and a VPN in any application, this is fine — ensure that they understand how to verify their new IP.

Find a development team, and observe the development cycle

  • Have the trainees be present during a scrum/sprint kickoff, and a release.
  • Have the trainees help write development tickets and prioritize accordingly.

Have them spend a day helping other employees with different jobs

  • Have them spend a day with the PR, analytics folks, devs... everyone. The goal should be to understand what it’s like to live a day in their shoes, and assist them throughout the entire day.

Get a website THEY OWN penalized. Heck, make it two!

  • Now that the trainee has built a website by hand, feel free to get them to put up another couple of websites and get some traffic pouring in.
  • Then, start searching for nasty links and other deceptive SEO tactics that are against the Webmaster Guidelines and get that website penalized. Hint: Head to fiverr.com for some services.
  • Bonus: Try to get the penalty reversed. Heh, good luck :)

API skills

  • Request data from 2 different APIs using at least 2 different technologies (either a programming language or software — I would suggest the SEMrush API and Alchemy Language API). Hints: They can use Postman, Google Docs, Excel, the command line, or any programming language. A bare-bones request sketch follows this list.
  • Google APIs are also fantastic, and there are lots of free services in the Google Cloud Console.
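
The mechanics are the same regardless of the provider: send an authenticated HTTP request, then parse the response. Here’s a bare-bones sketch, assuming Python 3 with requests; the endpoint, parameters, and key are hypothetical placeholders, not a real SEMrush or Alchemy call, so check each provider’s documentation for the actual URL and fields.

    import requests

    response = requests.get(
        "https://api.example.com/v1/report",                    # hypothetical endpoint
        params={"domain": "example.com", "key": "YOUR_API_KEY"},
        timeout=30,
    )
    response.raise_for_status()
    print(response.json())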

Learn concepts of programming

Write 2 functions in 2 different programming languages — these need to be functions that do something useful (i.e. “hello world” is not useful).

Ideas:

  • A JavaScript bookmarklet that extracts link metrics from Majestic or Moz for the given page
  • A simple application that extracts title, H1, and all links from a given URL
  • A simple application that emails you if a change has been detected on a webpage
  • Pull word count from 100 pages in less than 10 seconds (sketched below)
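
The last idea is a nice excuse to learn concurrency. Here’s a sketch, assuming Python 3 with requests; the URL list is a placeholder, and the tag stripping is deliberately crude.

    import re
    from concurrent.futures import ThreadPoolExecutor
    import requests

    urls = ["https://www.example.com/page-%d" % i for i in range(100)]   # placeholders

    def word_count(url):
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            return url, 0
        text = re.sub(r"<[^>]+>", " ", html)        # crude tag stripping
        return url, len(text.split())

    # Fetching in parallel is what makes 100 pages in under 10 seconds realistic.
    with ThreadPoolExecutor(max_workers=20) as pool:
        for url, count in pool.map(word_count, urls):
            print(count, url)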

If I had to pick technologies, it would be JavaScript and Python. JavaScript (Node, Express, React, Angular, Ember, etc.) because I believe things are moving this way, i.e. one language for both front end and back end; Python because of its rich data science & machine learning libraries, which may become a core part of SEO tasks in the future.

Do an introductory course on computer science / build a search engine

I strongly recommend that anyone in SEO build their own search engine — and no, I’m not crazy; it’s just hard. There are two ways to do this, and I’d recommend both (a toy inverted-index sketch follows the list below).

  • Complete intro to Computer Science (you build a search engine in Python). This is a fantastic course; I strongly recommend it even if the junior already has a CS degree.
  • Sign up to https://opensolr.com/, crawl a small website, and build your own search engine. You’ll go through a lot of pain to configure what you want, but you’ll learn all about Apache Solr and how a popular search technology works.
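
To make the idea less intimidating, here’s a toy illustration of the core data structure behind any search engine: an inverted index mapping terms to the documents that contain them. It uses only the Python 3 standard library; the "documents" are hard-coded strings.

    from collections import defaultdict

    documents = {
        "page1": "google ranks pages using links and content",
        "page2": "links pass pagerank between pages",
        "page3": "content quality matters for rankings",
    }

    # Build the inverted index: term -> set of documents containing it.
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)

    def search(query):
        """Return the documents that contain every term in the query."""
        results = set(documents)
        for term in query.lower().split():
            results &= index.get(term, set())
        return results

    print(search("links pages"))    # -> {'page1', 'page2'}

Real engines add crawling, ranking, and index compression on top, which is exactly what the course and the Solr exercise above walk through.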

Super Evil Genius Bonus Training

Get them to pass http://oap.ninja/, built by the infamous Dean Cruddace. Warning: this is evil — I’ve seen seasoned SEOs give up just hours into it.

These days, SEO job requirements demand a lot from candidates.

Employers are asking for a wider array of skills that range from development to design as standard, not "preferred."

Have a look around at current SEO job listings. You might be surprised just how much we’re expected to know these days:

  • Strong in Google Analytics/Omniture
  • Assist in the development of presentations to clients
  • Advanced proficiency with MS Excel, SQL
  • Advanced writing, grammar, spelling, editing, and English skills with a creative flair
  • Creating press releases and distribution
  • Proficiency in design software, Photoshop and Illustrator preferred
  • Develop and implement architectural, technical, and content recommendations
  • Conduct keyword research including industry trends and competitive analysis
  • Experience with WordPress and/or Magento (preferred)
  • Experience creating content for links and outreach
  • Experience in building up social media profiles and executing a social media strategy
  • Ability to program in HTML/CSS, VB/VBA, C++, PHP, and/or Python is a plus
  • A/B and Multivariate testing
  • Knowledge of project management software such as Basecamp, MS Project, Visio, Salesforce, etc
  • Basic knowledge of PHP, HTML, XML, CSS, JavaScript
  • Develop + analyze weekly and monthly reports across multiple clients

The list goes on and on, but you get the point. We’re expected to be developers, designers, PR specialists, salespeople, CRO specialists, and social media managers. This is why I believe we need to expose juniors to a wide set of tasks and help them develop a broad skill set.

“I’m a Junior SEO and my boss is making me do this training now, I hate you Dave!”

You might hate me now, but when you’re making a lot more money you might change your mind (you might even want to cuddle).

Plus, I’m putting you through hell so that….

  • You don’t lose credibility in front of developers (hint: these are the people who will have to implement your consulting). By using the correct terminology, and by doing parts of the work, you’ll be able to empathize and give better advice.
  • You don’t limit yourself to specific projects/tasks because of lack of knowledge/experience in other specialisms within SEO.
  • You will become a well-rounded marketer, able to take on whatever Google’s Algorithm of Wonder throws at you or jump into other disciplines within digital marketing with a solid foundation.

Feel free to ping me on Twitter (@dsottimano) or you can catch me hanging out with the DMG crew.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from Moz Blog http://ift.tt/2azNBBd

Yelp, TripAdvisor: Google’s mobile “best-of lists” hide our content

Google is adding content for users on the go, but some local search rivals see the move as intentionally marginalizing their content. The post Yelp, TripAdvisor: Google’s mobile “best-of lists” hide our content appeared first on Search Engine Land.

Please visit Search Engine Land for the full article.


from Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing http://ift.tt/2bhcA9u

Google remarketing lists for search ads make their way to Google search partners

RLSAs are headed outside of Google websites and into the wide world of Google search partners. The post Google remarketing lists for search ads make their way to Google search partners appeared first on Search Engine Land.

Please visit Search Engine Land for the full article.


from Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing http://ift.tt/2aJPmtI

SearchCap: Google Now, My Business insights, AMP notifications & Wifi Maps

Below is what happened in search today, as reported on Search Engine Land and from other places across the web. The post SearchCap: Google Now, My Business insights, AMP notifications & Wifi Maps appeared first on Search Engine Land.

Please visit Search Engine Land for the full article.


from Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing http://ift.tt/2aXy0ta

Tuesday, August 9, 2016

Giving Duplicate Listing Management the Upgrade it Deserves

Posted by George-Freitag

Duplicate listings have been a plague to local search marketers since local search was a thing. When Moz Local first introduced duplicate closure in the fall of 2014, the goal was to address the horribly time-consuming task of finding and closing all those duplicate listings causing problems in Google, Bing, and various mapping platforms. Though we’ve consistently been making improvements to the tool’s performance (we’ll get into this later), the dashboard itself has remained largely unchanged.

Not anymore. Today, we’re proud to announce our brand new duplicate management dashboard for Moz Local:

Here’s a rundown of the features you can look for in the Moz Local upgrade:

  1. New Duplicates Dashboard providing full visibility and transparency of duplicate listings at each stage of the workflow — open, reviewed, and closed — for all of your listings or any subset
  2. Enhanced duplicates workflow making detecting, reviewing, and closing duplicate listings in Moz Local even easier through advanced filters
  3. Enhanced duplicate management for faster and more accurate duplicate listing detection, submission, and tracking across all of Moz Local’s partner networks

This duplicate management update represents a new standard in the industry and will help our users be more productive and efficient than ever.

A bit of context

Eliminating duplicates and near-duplicates on major data sources and directories has always been one of the most effective ways to increase your presence in the local pack. It’s a key part of citation consistency, which was rated as the second most important tactic for getting into local pack results according to the 2015 local ranking factors survey. On top of that, in last May’s Mozinar on local search, Andrew Shotland of Local SEO Guide mentioned that he saw a 23% increase in presence in the local pack just by addressing duplicates.

So we know that seeking and destroying duplicates works. The problem is that doing it manually just takes for-e-ver. Anyone who works in local search knows the pain and monotony of combing through Google for variations of a business, then spending more time finding the contact form needed to actually request a closure.

How duplicates cause problems for search engines

Our duplicate listing feature has always focused on easily identifying potential duplicates and presenting them to marketers in a way that allows them to quickly take action. In the case of the aggregators (like Infogroup and Localeze) and direct partner sites (like Foursquare and Insider Pages), this takes the form of single-click closure requests that are quickly reviewed and sent directly to the source.

For sites that aren’t part of our direct network or don’t accept closure requests from anyone, like Facebook, we still do our best to point our users in the right direction so they can close the listing manually. Originally, the dashboard took the form of a long list where marketers could scroll down and take action, as needed.

Though this worked great for many of our users, it quickly became problematic for large brands and agencies. Based on data collected from the thousands of brands and locations we track, we know that the average enterprise client can have around 3,500 duplicate listings and, in some cases, that number can be as high as 100,000 duplicates. Even though we estimate our tool can reduce the time spent managing duplicates by around 75%, when you have literally thousands of duplicates to parse through, a single to-do list quickly becomes impractical.

1. New dashboard for full transparency

The first opportunity we saw was to provide you with a bit more transparency into our closure process. Though we always provided some insight related to where we were in the closure process, there was no way to view this at an aggregated level and no way to see how many duplicates had been closed so you could track your progress.

So we fixed that.

Now all Moz Local customers can easily see how many duplicates are still marked as "open," how many are being reviewed, and how many listings have been successfully closed. If you’re an agency or consultant, this can be especially useful to demonstrate progress made in identifying and closing duplicates for your clients. If you’re a brand, this can be a great way to build a business case for additional resources or show the value of your local strategy.

We also saw another opportunity to improve transparency by further breaking down the reporting by the type of data partner. Moz Local has always been very deliberate in surfacing the relationship we have with our partners. Because of this, we wanted to add another layer of insight based on the nature of the partnership.

Verification Partners include Google and Facebook, since they're sources we use to verify our own data. Though we can’t close duplicates directly at this point, they're so influential we felt it was imperative to include the ability to identify duplicates on these platforms and guide you as far as possible through the closure process.

Direct Partners are data sources that we have a direct relationship with and submit business listings to instantly through our distribution service. For all major aggregators and most of our direct partner directories, you can use our single-click duplicate closure, meaning that all you have to do is click “Close” and we’ll make sure it’s removed completely from their database, forever.

Lastly, we have our Indirect Partners. These are sources that receive all of our listing data via our direct partners, but we do not submit to directly. Though we can’t close listings on these sources automatically, we can still detect duplicates and send you directly to their closure form to help you request the closure.

2. Improve workflow through filters

The second opportunity was to address the long list-view that our users used to identify, evaluate, and take action related to duplicates we discovered. With so many of our clients having hundreds or thousands of listings to manage, it quickly became apparent that we needed some advanced sorting to help them out with their workflow.

So we added that, as well.

[Image: duplicate listings filter feature]

Now, if you only want to view the listings that need action, you can just click “Open,” then scroll down and choose to close or ignore any of the duplicates in that view. If you then want to see how many duplicates have already been closed and removed from the data partner, you can just click that checkbox. If you want to only see the open duplicate listings for a certain partner, like Foursquare, that’s an option as well.

Further, just like everything else in the Moz Local dashboard and Search Insights, reporting strictly follows any filters and labels from the search bar. This can be especially useful if you’re an agency that wants to narrow your view to a specific client, or a brand that wants to only view reporting for a single marketing region.

For example, if you only want to see closed duplicates from Infogroup located in Texas that are part of the campaign “hanna-barbera,” well, there you go.

All data in any filtered view is easily exportable via CSV so you can repurpose it for your own reporting or research.

Lastly, all of these reports are retroactive, meaning any duplicates you’ve requested closure for or closed in the past will show up in the new duplicates dashboard and be available for advanced sorting and reporting.

3. Enhanced duplicate management

The new interface and reporting features aren’t the only things we’ve improved. Over the last year, our developers have been spending countless hours fine-tuning the duplicate closure process and improving relationships with our data partners.

Early on, the Moz Local team decided that the product should focus on the data sources that have the greatest impact for local businesses, regardless of whether they have a direct relationship with us. As a result, we built the widest and most complex set of partnerships with aggregators, direct and indirect partners, and business directories in the industry. This update not only launches a new dashboard but also marks the kickoff for some huge improvements to our back end.

Faster closure processing

The challenge that comes with working with a network as diverse as ours is that each of our partners handles duplicate listings in a completely different way. The Moz Local team has always had resources devoted specifically to working with our partners to improve our data submission and listing management processes. For duplicates, however, this meant we needed to help some of our partners enhance their own APIs to accept closure requests or, in some cases, create the API altogether!

As part of this update, our development team has implemented new instrumentation and alerts to better identify submission errors sent to our partners, speed up the closure process, and quickly re-submit any closure requests that were not processed correctly.

Shorter review cycles

Additionally, we’ve shortened our internal review cycle for closure requests. In order to ensure the quality of duplicate closures and to be sure our “alternates” feature isn’t being used maliciously, we manually review a percentage of closure requests. Through a variety of processes, we are now able to programmatically approve more closures, allowing for faster manual reviews of all other closure requests. As a result, we are now able to automatically approve around 44% of all closure requests instantly.

The future

The most exciting thing about this update is that it’s only the beginning. Over the next few months expect to see further integration with our data partners, discovery and progress notifications, increased closure efficiency, and more.

We hope you find our new duplicates dashboard useful and, most importantly, we hope it makes your lives a little bit easier.


Get Started with Moz Local!


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from Moz Blog http://ift.tt/2bcJnAr

Google My Business Insights updates analytics while dropping Google+ source data

Google My Business updates their analytics to show business owners the source of their views and how they found the listing. The post Google My Business Insights updates analytics while dropping Google+ source data appeared first on Search Engine Land.

Please visit Search Engine Land for the full article.


from Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing http://ift.tt/2bh075v