Candour

Episode 30: Googlebot user-agents, internal nofollow links, listener Q&A

What's in this episode?

Mark Williams-Cook and Rob Lewis will be talking about:

Googlebot user-agent change: Exploring how an update to Google's user-agent string might cause websites some issues.

Internal nofollow links: Clarifying best practice on crawl management with internal nofollow links.

Listener Q&A: We're answering your questions about PPC for franchises and on-page HTML sitemaps.

Show note links:

Episode 9 covering Googlebot going 'Evergreen': https://withcandour.co.uk/blog/episode-9-google-i-o-evergreen-googlebot-and-howto-faq-structured-data

Google's announcement on user-agent string change: https://webmasters.googleblog.com/2019/10/updating-user-agent-of-googlebot.html

Episode 27 covering new rel=sponsored and rel=ugc: https://withcandour.co.uk/blog/episode-27-nofollow-google-quality-rater-guidelines-and-ecpc-changes

Change of Address Tool GSC announcement tweet: https://twitter.com/googlewmc/status/1179418493999091712

Transcription

MC: Welcome to episode 30 of the Search with Candour podcast! Recorded on Friday the 4th of October 2019. My name is Mark Williams-Cook and again this week, I am very happy to be joined by Rob Lewis.

RL: Hello.

MC: This episode we are going to be covering the Google user-agent update: what it is, how it might break things and how to check if you're affected. We're going to do a follow-up on Google's change of the nofollow directive to a hint, in regard to internal links, and we have actually got some time to do some Q&A today, so we'll be answering a couple of the questions that were submitted to the podcast.

We've had kind of a bitty week this last week in search news; there have been a few updates, nothing major like the core algorithm update that's already come in, but there have been a few interesting bits I want to talk about as well as our main subjects. The first one is yet another update to Google Search Console, which is brilliant - they're slowly moving everything forward on that - and on the 2nd of October Google announced that they have now ported their change of address tool over into the new Search Console, and they've published a really nice guide on when and how to use it.

So, for instance, you should only be using the change of address tool if you're moving to a different domain name or if you're moving subdomains. You shouldn't be using it if, for instance, you were just moving a site from HTTP to HTTPS - it's basically for when you're moving to another property within Search Console. That's really good, because quite a few people, including myself, were a little bit worried that it hadn't come over into the new version of Search Console, and Google weren't particularly clear about when that might happen; but it has now happened, along with the other updates we covered in the last few episodes, like the faster data refresh times. On the September core update - this just occurred to me as I mentioned it - I've been speaking to people again from SISTRIX and looking at their analysis so far.

One thing they noted was that this time around it's actually been really difficult to identify who the losers of this update were, in that most of what they're seeing is a gradual and blanket shift upwards across lots of sites; meaning the sites that have lost don't seem to have lost for any particular reason, it's just that the sites around them have moved up, which is very interesting. Especially interesting is the reversal, as we spoke about, of the Daily Mail's traffic: it almost seems that they've rolled back whatever part of the algorithm did that, because the increase in traffic is almost spot-on, and actually some of the Daily Mail's competitors have also increased in search visibility, so the Daily Mail hasn't taken that from anyone else - they've all gained more search visibility.

Anyway, onto the first thing we wanted to talk about today, which was the Googlebot user-agent update. Back in May, on episode 9 of the podcast, we covered Google's update of their actual Googlebot to what they were calling their evergreen Googlebot. So, the robot that does Google's crawling and discovers links and web pages for them has historically been, up until recently, quite an old version of Chrome - basically the robot was viewing the web as if it were on a really old version of Chrome - and this gave some challenges to web developers, in that it wasn't able to do all of the clever functionality that modern browsers could, so there were various workarounds you had to use. Back in May was when Google announced that what they're doing now with Googlebot is it's going to stay up to date with the latest Chrome version, meaning that you can work on the assumption that pretty much anything someone can render in their browser, Googlebot will be able to do, which was good news. It didn't solve some of the issues that we still face, especially with JavaScript SEO, in that there is still normally a big delay before JavaScript is processed - it's not doing that on the fly as your browser does - and this just comes down to it being a hugely resource-intensive thing for Google to attempt, and it opens up all kinds of complexities.

I was having a conversation today with one of our developers about the various security issues this raises for Google as well, by actually running JavaScript on their end - there are ways that very clever people can escape the sandbox and actually get into Google's network, so that side of things is a hugely complicated area for them. So, what this bit of news is, is that Google has actually told us they're now updating the user agent for Googlebot. If you haven't come across user agents before: your browser always sends what's called a user agent when it's connecting to a website. It is essentially a string of text that says who they are and what they're using; for instance, it'll tell you the browser and the operating system, so when you connect to a website the user agent might say 'Hi, I'm the Firefox web browser and I'm running on Windows' or 'I'm the Safari browser and I'm running on an iPhone.' As you can imagine, this is how robots.txt basically works: the file you have on your website that sets the rules for how robots are allowed to interact with your website operates using these user agents, so robots like Googlebot have names, and you can specify rules in your robots.txt that only apply to specific robots - or, in a lot of cases, we just use the wildcard asterisk in our robots.txt to say we want this to apply to all robots.
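(To make that concrete, here's a minimal sketch of a robots.txt that uses those user-agent names; the paths are made up purely for illustration.)

```
# Rules that apply to every crawler (the wildcard)
User-agent: *
Disallow: /admin/

# Rules that apply only to Google's crawler
User-agent: Googlebot
Disallow: /internal-search/
```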

So each of these different robots has a different name - well, the good ones do. Obviously, if you made your own web bot, your own scraper, you don't actually have to make it send its identity across as a user agent, so those are the kind of naughty bots, because they're not really obeying the robots.txt - you know, it's very difficult for them to obey the robots.txt. But historically, again, Googlebot has had this kind of static string they use which identifies them. What Google is changing sounds incredibly minor: they're basically changing the version number in the Googlebot user agent to stay up to date, to be reflective of the version of Chrome it's running, which makes absolute sense. However, Google said 'we've run an evaluation, so we're confident that most websites will not be affected by this change', and when I did a little bit more investigation into this, there are some websites that are making decisions about how to display content based on the user agent they are detecting.

So Google says: 'sites that follow our recommendations to use feature detection and progressive enhancement instead of user agent sniffing should continue to work without any changes.' I'll just dissect that for you very quickly. What they're saying there is, if you are trying to detect the user agent of whoever's requesting the webpage, this isn't really, and has never been, a recommended way to go about things. Google goes on to say 'if your site looks for a specific user agent it may be affected', so there are apparently a not insignificant number of sites that have this hard-coded string where they're looking for Googlebot - which is kind of a shoddy practice from day one, assuming that the version number of Googlebot will never change; obviously that's what version numbers are for, to show change. But it does appear there are some sites that are doing this, and they're showing content based on what version of Googlebot they detect, or just whether they're detecting Googlebot at all.

So there are two recommendations here. The first is, if for whatever reason you're stuck in so much technical debt that you can't get around doing it any other way apart from trying to detect the user agent, then you should fall back to just searching for the string 'Googlebot' within the user agent. So you're saying 'okay, I want to match this rule if somewhere in the user agent it says Googlebot', rather than matching the whole previous string that identified Googlebot, because that's what's going to be changing. As soon as that changes, if you're matching the old static Googlebot user agent exactly, what you're doing will break. So that's one way around it: you can just check if the user agent has 'Googlebot' in it.
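(As a hedged illustration of the difference, here's a small TypeScript sketch; the function names and the exact user-agent string shown are just examples for this episode, not anything Google publishes as an API.)

```typescript
// Brittle approach: exact match against one hard-coded Googlebot user-agent string.
// This breaks as soon as Google changes any part of that string (e.g. the Chrome version).
const isGooglebotExact = (userAgent: string): boolean =>
  userAgent === "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)";

// More resilient approach, in the spirit of Google's advice: just look for the
// "Googlebot" token anywhere in the user-agent string.
const isGooglebotLoose = (userAgent: string): boolean =>
  userAgent.toLowerCase().includes("googlebot");
```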

The other way that Google is recommending is this feature detection, which isn't specifically about trying to identify Googlebot. What you're doing is essentially getting your website to check that whatever is connecting to it - whatever browser is connecting to it - supports all of the required features to display the website as it should be displayed. If it meets those criteria then there shouldn't be any reason to serve a different version of the page to the bot; if you detect that specific features aren't supported, that's when you can have a fallback version for bots, which is the kind of recommendation Google is going for. Google will have tested this change - I guess, like with all things Google is going to change, they probably take a chunk of the web and run it through to see what happens, just to make sure it's doing what they want it to do and there aren't any unforeseen consequences.
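(For illustration, here's a minimal browser-side sketch of what feature detection can look like in practice; the lazy-loading use case and the data-src selector are assumptions for the example, not anything from Google's post.)

```typescript
// Feature detection: decide behaviour by checking for the capability itself,
// never by sniffing the user-agent string.
function setUpLazyImages(): void {
  const images = document.querySelectorAll<HTMLImageElement>("img[data-src]");

  if ("IntersectionObserver" in window) {
    // The feature exists: lazy-load images as they scroll into view.
    const observer = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          const img = entry.target as HTMLImageElement;
          img.src = img.dataset.src ?? "";
          observer.unobserve(img);
        }
      }
    });
    images.forEach((img) => observer.observe(img));
  } else {
    // The feature is missing (older browsers, some bots): load everything up front,
    // so the content is still there to be seen and indexed.
    images.forEach((img) => {
      img.src = img.dataset.src ?? "";
    });
  }
}
```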

They have said that common issues they saw while evaluating this change include pages that present an error message instead of the normal page contents; for example, a page may assume Googlebot is a user with an ad blocker and accidentally prevent it from accessing the page contents. What's happening there is that they visited with the new user agent, and the website is looking for that specific old Googlebot string, which obviously doesn't match anymore; the website has then worked out that whatever this thing visiting the site is - which it doesn't think is Googlebot - doesn't actually support everything it wants to do, in this case show ads. Therefore it comes to the determination that 'okay, it's not Googlebot, so it must be a user I can't show ads to, therefore it's a user that's trying to block ads, so I'm not going to show my content to them.' You've probably all seen those sites where you land and it says something like 'hey, we've detected you're using an ad blocker, you need to turn this off if you want to see our content, because that's how we pay for it.'

If your site, or a site, is in that situation, it's going to be very bad, because this user-agent update will essentially mean you block Google from seeing your content, which means it's not going to get indexed - and if that content isn't in the index, obviously it's not going to rank, so that could be pretty bad. I will link to the Google webmaster post that talks about the user-agent update; they've got a link that explains how to override your user agent, so you can test your site as if you were the new Googlebot and make sure this isn't going to affect you. So, if you've heard whispers in your IT team, or if you're aware you're doing stuff with user agents, this is definitely a conversation you want to have with your dev team.

I also want to do a follow-up this episode on nofollow links. Back in episode 27, three episodes ago, we covered the change Google made where they firstly introduced two new types of - I guess you'd call them - nofollow link, which were rel=sponsored and rel=ugc. These are for when you want to mark links as having been paid for, or when you want to mark links as 'hey, these have been generated by a user so we can't vouch for them', and obviously the previous attribute, which had been used as a blanket for all these cases, was rel=nofollow. The big change that came with this was that they're moving nofollow from basically a directive to a hint. Previously, whenever you used a nofollow tag, Google always said: we will discount these links from our link graph, they're not going to carry any weight for search engines, as you've marked them as, you know, either paid-for or non-trusted links, so we're not going to use them - that was very clear.
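(For reference, this is roughly what those three link attributes look like in the markup; the URLs and anchor text are just placeholders.)

```html
<!-- Link that has been paid for or sponsored -->
<a href="https://example.com/partner-offer" rel="sponsored">Partner offer</a>

<!-- User-generated link, e.g. in comments or forum posts -->
<a href="https://example.com/forum-thread" rel="ugc">Forum thread</a>

<!-- The original blanket attribute, now treated as a hint -->
<a href="https://example.com/untrusted" rel="nofollow">Untrusted link</a>
```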

Over the last couple of years we've seen various reports - some people certainly more vocal than others - talking about how they actually had seen some impact from nofollow links, and I'd seen this discussed mainly around local SEO, so talking about the map results, where people were seeing that getting nofollow links seemed to be having a positive impact anyway. But the thing that interested me about this, and I saw it discussed on a webmaster hangout with John Mueller from Google, was the use of nofollow links internally within a site. Google has previously kind of said not to do that - not in a way that they'll penalise you, but it just wasn't really best practice - and we were definitely getting mixed messages around that, so the webmaster hangout I'm referring to was where John Mueller did actually put nofollow forward as a suggestion for how to tackle some specific internal site issues.

So, I just want to talk about that, because they've given us some extra clarity on this now. There are cases, especially with larger sites - and if we take an e-commerce site as an example - where using nofollow links internally could be helpful. With a smaller website, if you have products on that website and you have faceted navigation, where you have different variations of products you can order by size, by price, things like that, we all know that those small rejigs of pages are typically what we use the canonical tag for. The canonical tag is again a hint, but it allows you to say to Google 'look, this set of 10 pages, for instance, are just reordered versions of this main page that I want to rank', and this is a really helpful tag - it helps you rank a lot better, it tidies up loads of issues with multiple competing pages trying to rank against each other in the SERPs, it consolidates incoming link data; it's really, really helpful.
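(As a quick illustration, a canonical tag on one of those reordered variant pages might look something like this; the URLs are made up.)

```html
<!-- Placed in the <head> of a filtered or reordered variant,
     e.g. /shoes/?sort=price-asc, pointing at the page you want to rank -->
<link rel="canonical" href="https://example.com/shoes/" />
```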

However, there are still problems where other means need to be used. If you have a particularly large e-commerce site - say in the tens of thousands of products - and each of those products has multiple variations, or at least different faceted navigation ways of accessing essentially the same information but tweaked, or in different orders, and so on, and if those facets are easily crawlable by Google, you have a situation where only a small percentage of the total pages being crawled are ones you're interested in ranking. The other 90% of pages Google might be crawling are actually just the kind of trash versions you're not interested in ranking - the reordered pages or the filtered pages and things like that. This becomes an issue because, if you just say 'okay, we'll slap a canonical tag on there', you still have Google trying to crawl these pages and getting stuck down these very deep holes of hundreds of thousands of page variations before it eventually decides, you know, 'I've kind of had enough of your site; I've looked as far as I'm going to for how important I think your site is.' So this becomes a crawl budget problem, essentially, because Google has finite resources and at some point it's got to stop crawling your site. We all know that how regularly and deeply your site is crawled is partially based on how important your site is - you know, how many and how powerful the links coming into your site are.

So in these cases I've seen people, with good success, use things like robots.txt and noindex to try and block off some of these rabbit holes that Googlebot can go down, because the end result is that instead of spending a long time in them, Google is actually discovering all of their important core pages, is spending longer on them, and they just tend to rank better. Now, some people have been using nofollow links internally on their site to try and roadblock Google off from some of these avenues, and this is what John Mueller got into recently with some SEOs: how to use nofollow internally. So I'm just going to quote him now to give you what he said. His answer was 'it's not 100% defined' - which is always helpful - 'but the plan is to make it so that you don't have to make any changes, so that we will continue to use these internal nofollow links as a sign that you're telling us, one, these pages are not as interesting, two, Google doesn't need to crawl them, and three, they don't need to be used for ranking or indexing.'

So, what John is essentially saying there is it looks as if, when you're using these nofollow links internally, they're almost always going to be treated like a directive. They're still a hint, I guess, technically, but it looks like, because they've been used internally, Google is taking that as a very strong hint that these pages perhaps don't need to be crawled. He actually does clarify that and says it's not a 100% directive like robots.txt, where you say these pages are never going to be crawled, but it does tell Google they don't need to focus on them as much. So to me, that's saying this is still a very valid way to control how larger sites are being crawled, and it doesn't have to be a case of one or the other: you can still use canonical tags on these variation pages, so if Google does land on them it's still getting the hint that it's not a canonical page. But it is saying that if you've got this easily crawlable faceted navigation, nofollow would still likely be a good shout to control how Google is crawling the site.
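(To picture how that might look on a big faceted site, here's a hedged sketch; the parameter names and paths are invented for the example, and whether you reach for robots.txt, noindex, internal nofollow, or a mix of them really does depend on the site.)

```
# robots.txt: keep crawlers out of obvious facet rabbit holes
User-agent: *
Disallow: /*?sort=
Disallow: /*?filter=
```

```html
<!-- Or, on the facet links themselves, an internal nofollow hint -->
<a href="/shoes/?sort=price-asc" rel="nofollow">Sort by price</a>
```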

Okay, so this episode we have actually got some time to do some Q&A, which is good. I put the form out on Twitter, and I think on LinkedIn as well, just to invite people to ask questions if they have any about SEO or PPC, because we do get quite a few - I get asked quite a few, sometimes through LinkedIn or through Twitter - and it can be hard to give people the full and, kind of, correct answer very briefly, because a lot of the time, you know, 'it depends' is normally the best short SEO answer, and probably in PPC a lot of the time as well. So it is helpful if we can talk things through properly. I think we've got time for a couple of questions. The first question we had was from Shannon - I've no idea who Shannon is, as you can submit these questions anonymously or tell us your name and where you're from - so Shannon asks: what's the best way to structure your PPC account/campaigns for a franchise business with multiple locations? And this, Rob, is why you are here, so I can now just look in your direction for an answer. So what IS the best way to structure PPC accounts or campaigns for franchise businesses with multiple locations, Rob?

RL: Well Shannon the answer to that is, it depends.

MC: Aha, yes we will just leave it at that.

RL: Aha, I'm joking. When you say franchise, I'm going to assume that you are part of a franchise or that you oversee a group of franchisees within an overall franchise, and I think the first thing to consider is how you would administrate that from a pay-per-click perspective, because presumably the franchise comprises multiple stores or branches or whatever your franchise is. So I think the main question is: are you going to take ownership of the marketing for each franchisee and cover all of their costs, or are you going to leave the pay-per-click marketing in the hands of each branch? I'm guessing that's not an issue, but I just thought it was an interesting question to raise.

In my opinion, if I were a franchisor or I owned a franchise, it would make sense for me to centralise everything, so that all the marketing has a clear message; otherwise you're going to get disparate messaging and pay-per-click advertising techniques amongst different parts of the franchise, which in my opinion would be a bit of a headache. And if you've franchised your own business, then I'm guessing that the success of the various franchisees is going to impact your own gross profit, so I would want to retain control and run the marketing for all of the various franchisees within the franchise. That might actually also be a selling point when selling the franchise to potential investors who are wanting to set up: you know, 'we take care of all of the marketing.'

So, I've probably gone off on a bit of a tangent there, but I just thought it was an interesting one to consider. When it comes to administrating that, there are two methods: the first is using one single Google Ads account, where you'd have various campaigns controlling the different franchisees and the different locations that they service, and the second would be to have a separate Google Ads account for each franchisee or branch, controlled or managed or overseen via a central MCC - which stands for My Client Center, a tool that most digital agencies use to manage multiple client accounts.

MC: So doing it that way as well would mean that each individual franchisee uses their own payment information, right? Because essentially they've then all got their own account, whereas if you had one Google account, I assume that means everything has got to be paid for by one person.

RL: Potentially. You could pay for everything under the same roof. You know, really big brands - I'm just trying to think of a really big franchise, like KFC for example; that's a huge franchise comprised of multiple branches. It might be that they have a single MCC that controls many different accounts but which is all paid for under the same…

MC: But it gives you that option right?

RL: Of course.

MC: Do you have the option to have multiple different, say cards for payment if you use the one Google account?

RL: Yeah absolutely, yeah, or you can use the same payment card or even invoicing. It might be that you just generate one invoice for all of the franchisees within that one MCC. I mean, I don't know who Shannon is or what the franchise is, but I'm probably guessing that an MCC may be overkill in this case.

MC: Ok.

RL: It's probably a case of multiple branches, which could potentially be controlled via a single account. If I've got the wrong end of the stick, though, and I'm speaking to the Colonel from KFC or whoever, then it might be better to have an MCC where each franchisee has its own separate account, because each franchisee has its own budget. Perhaps the manager of that franchisee - am I using the right word there, 'franchisee'? - perhaps the manager of each branch has their own budget, and they pay for the marketing and they're responsible for it, in which case having an MCC, an overall top-level view of every account, is the way forward.

MC: Okay.

RL: Having said that, that's not really answered the question, which is how should you structure your accounts? There are so many different ways of doing it, but am I right in saying a franchise is predominantly location based?

MC: I think that would be fair, yeah.

RL: Unless it's an online franchise maybe, which I guess is different - a whole other…

MC: I haven't really thought about that, in terms of online, because, yeah, historically the franchises I've worked with before, they're given like a territory...

RL: Exactly yes.

MC: So I don't know how it’d work online.

RL: I would say that for anything that's location based, where you're covering a specific area - so you've got multiple franchisees within your franchise brand and each franchisee is going to cover a specific geographic area - to me it would make sense in your Google Ads account to structure your campaigns by location. Now, you could go into so much detail here, but I'm guessing if you're managing a franchise then your time is limited, and if you've drawn the pay-per-click short straw, you're not going to want to be managing hundreds and hundreds of pay-per-click campaigns.

But when we talk about pay-per-click campaigns, really a campaign should relate to the goal of that activity, so I'm going to use a car garage as an example; you may sell MOTs and you may sell general car servicing. Well, those are two different products, two different goals, two different services - that, to me, warrants two separate campaigns. Now, if you're a franchise that has branches in different locations, in my opinion each location should also be a separate campaign. So, you might service an area in Scotland, where you offer MOTs and general car servicing - you'd have two campaigns for Scotland - and then if you also have a branch in Essex doing exactly the same thing, you'd have two campaigns for Essex as well: one doing MOTs and one doing car servicing. But you can get really granular with this, and I'm probably guilty of being a bit of a campaign… I like my campaigns, basically - to put it politely - and...

MC: I could see you searching for a word there.

RL: Yeah, I use campaigns to siphon spend towards the core search terms that work really well, that consistently drive leads, but as I say, if you're busy and you're not a full-time pay-per-click manager, that might be overkill. I would also say, just to backtrack: what I'm saying is, I would have a campaign for each location - and I'm presuming the location relates to a specific franchise branch - and, in addition to the location, I would have a campaign dedicated to each specific service that the franchise offers.

MC: That makes sense, and then obviously if it is a larger franchise, that's when you'd start thinking about using MCCs, because I can imagine that if you've got half a dozen or a dozen services and then lots of areas, it is going to get quite unwieldy, quite quickly.

RL: Yeah, but if time is at a premium, which I'm guessing it is because everyone's really busy, an MCC will probably add an additional element of management on top.

MC: Yeah.

RL: You'll need to make sure that you don't double serve. Double serving is when you're showing more than one advert in the search results for the same search query - and I'm not talking about if you're an ecommerce website and you have a shopping advert showing at the same time as a text advert; I'm talking about having two text adverts showing at the same time, for the same search query, to the same person. So if someone typed in 'car MOTs' they could have your advert show twice: one at the top and one underneath. Google doesn't like that - it's a form of pay-per-click black hat marketing which Google frowns upon.

Pay-per-click does have black hat marketing, it's not just limited to SEO. That's a whole other podcast thing.

MC: Yeah, I want to get into that - I want to get into that right now - but no, let's talk about that after this, and then I think that'll maybe make a couple of cool shows, if we do one on black hat SEO and you can bring your black hat bag of shady people.

RL: I've had to deal with other agencies that I've ended up reporting to Google - some very interesting, some very fun - but anyway, yeah, if you have an MCC and you're managing a location-based franchise with various branches in different regions, just be very careful that your various accounts really are limited to their areas. You don't want adverts overlapping with one another, which to be honest, bearing in mind the various updates to location targeting settings that Google has implemented recently, is quite likely - it could potentially happen…

MC: Yeah, we've spoken about that recently: the default setting for location targeting. We keep banging on about it, but if you haven't heard about it, it's helpful. The default location targeting setting covers not only people who are in that area, but also people who have shown an interest in that area, so you can technically be kind of anywhere, but if you've done a search for that place you're then eligible for those ads. So the double serving thing, if you've got the defaults left on, is actually quite a likely scenario, isn't it? By accident.

RL: Yeah, yeah. To give an example, let's just say you're a barber's based in Soho and you're advertising to people in Soho who are looking for hairdressers. Someone might live in Soho, but they may have gone to India temporarily for a holiday, and they're looking for a hairdressers in India; they type in 'hairdressers' and they're shown an advert for hairdressers in Soho - even though they're quite far away from Soho - because…

MC: Yeah, because of the previous history.

RL: So yeah, you just need to keep your wits about you and look at your location data. That's something else to bear in mind if you're a franchise: if the location that you're targeting is important, just take a look at your location traffic and make sure that you're not getting traffic from places you know your branch isn't going to be able to service. Make sure you're excluding those and only targeting the areas that matter to you.

MC: I want to do one more question, and this one actually came from an article - I believe it was on Search Engine Roundtable - and someone asked me about it on LinkedIn. This was from Matthew Mountain, who says: 'I hear that HTML sitemaps are of no use anymore, what are your thoughts on this?' He was referencing an article entitled 'Google: HTML sitemaps not worthwhile for SEO purposes'. If you read the post, it sort of says what I'm about to say, but I want to frame this because I think they're a bit guilty of being misleading with that title, to make it sound a bit more interesting than it really is.

So the title suggests that if you have an on-site sitemap, it's going to have no SEO benefit. Now, of course, the correct answer is it depends, but I want to go through a couple of things here because I think it deserves opening up and talking about. In terms of sitemaps, you've essentially got two options. There's the official XML sitemap, which is where you provide an external XML file, in a specific format, that lists all of the important pages of your site; most major search engines support these and will use them in their discovery and their kind of ranking of the importance of pages on your site. That's fine. Then, from antiquity, we have the HTML sitemap, which is a rare beast nowadays - normally a page called 'sitemap' on your site that is essentially a list of links to all of the pages of your site - which I used to actually find quite useful when sites were a bit rubbish, because you could just go to the sitemap, hit Ctrl+F and type the name of the page you're looking for...
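(For reference, a minimal XML sitemap looks something like this; the URL and date are placeholders.)

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/important-page/</loc>
    <lastmod>2019-10-04</lastmod>
  </url>
</urlset>
```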

RL: I still find myself navigating to those pages when they exist on the site, I'm a big fan of them.

MC: It's really interesting - all this time and money and effort goes into design, and people are like, just give me the list of pages and I'll find it. So, this headline says 'HTML sitemaps not worthwhile for SEO purposes', and the first thing I want to get across to you is that an HTML sitemap is not any kind of special page. It's just like any other page on your site - it's just an HTML page with links. We now live in a time where search engines are pretty good at discovering the different pages on your site, and if you have a website that I would say is good for users - which means you've thought about the navigation structure and you've thought about, you know, useful pages not being 14 clicks away - then search engines will easily find all of the pages on your site and there is essentially no need for an HTML sitemap, because you've already done the job of one. The job of an HTML sitemap was kind of twofold: one, it was to give search engines an easy route to all the pages on your site, and two, you could specify the anchor text - the anchor text being the text in the link that describes the page you're going to. So it might say, you know, 'accessories' or 'products' or 'home page' - it tells you the name of the page you're going to, and search engines use anchor text to help them understand what the following page is about.

They're less reliant on it than they used to be externally, from other sites, but they still lean on it very heavily internally. So an HTML sitemap did give you the option to do that, and it was very helpful, especially when search engines basically weren't as good at getting around websites, because you could say: look, here's a really simple HTML page, you can't go wrong with it, it's nothing flashy, it just works. But now, as a community of web developers and SEOs, and because of the technological progression of the crawlers, this generally isn't a problem anymore. It's very rare that you encounter websites where search engines just can't get around them.
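(If you've never seen one, an HTML sitemap really is just an ordinary page of links with descriptive anchor text, something along these lines; the URLs are illustrative.)

```html
<ul>
  <li><a href="/products/">Products</a></li>
  <li><a href="/products/accessories/">Accessories</a></li>
  <li><a href="/blog/">Blog</a></li>
  <li><a href="/contact/">Contact us</a></li>
</ul>
```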

If you've got a well-structured main menu on your page, that's essentially a list of links with anchor text, just as a sitemap would be. So to say that an HTML sitemap is of no use, that it's not worthwhile - I don't think that's true, because if you had a site that was awful, and Google or Bing or whatever struggled getting to all the pages, having an HTML sitemap would definitely help you, because you would be giving the context of those pages and you'd be showing the crawler a way around. But if you're in a situation where you need an HTML sitemap, then quite frankly your website probably sucks and your SEO is terrible, and you shouldn't be thinking of that as the solution - like, 'oh, our website's terrible, users can't get around it, search engines can't get around it; I know, I'll put a sitemap page on.' So I can fully understand Google's statement of, basically, look, a sitemap isn't going to magically fix anything for SEO purposes; if you need one, you've probably got way, way bigger problems that you need to tackle. And I think that's what I want to get across: it's not that they've been devalued - they're just treated, and always have been treated, as normal pages. Google hasn't flicked a switch recently to say HTML sitemaps aren't worthwhile anymore; it's just that we've grown past needing them, and if you do need one, you've got issues.

Okay that's everything we've got time for on this episode, thank you again Rob for joining me and answering PPC questions.

RL: It’s always a pleasure.

MC: I look forward to our black hat PPC episode, so we are going to do that and we'll talk about that later. We are going to be back on Monday the 14th of October with our next episode. You can get all of the links to everything we've spoken about in this episode in our show notes, which are at search.withcandour.co.uk - I hope you'll tune in next week! Have a great week.
