Candour

Episode 72: Buy with Google, internal ranking documents and infinite scroll CLS

What's in this episode?

In this episode, you will hear Mark Williams-Cook talking about: Buy with Google, with Google opening up its checkout to third parties; internal ranking documents, and whether Google has been lying to us about how it uses click data; and infinite scroll and CLS, with tips on how to improve your CLS score on infinite scroll sites.

Show notes

https://blog.google/products/shopping/buy-on-google-is-zero-commission/

http://www.google.com/retail/listings

https://judiciary.house.gov/online-platforms-and-market-power/

https://twitter.com/randfish/status/1288982615110725632

Transcription

MC: Welcome to episode 72 of the Search with Candour podcast, recorded on Sunday the 2nd of August 2020. My name is Mark Williams-Cook and today I'm going to be talking about Buy on Google and the steps Google are taking to push ecommerce further into their ecosystem; the internal Google documents that were shared during the US hearing on online platforms and market power; and a little bit about CLS and infinite scroll - how CLS is affecting modern web development practices.

Buy on Google is now open and commission free, so what does this mean? On the 23rd of July, Google posted this: ‘Over the past few months we've made significant changes to help businesses reach more consumers and help people find the best products, prices and places to buy online. We've made it free for retailers to list products on Google Shopping in the US and we brought these free listings to Search as well.’ That's referring to the changes we covered a few episodes ago (episode 58), whereby Google started showing Merchant Center-provided feeds - what were previously only the paid product listing ads (PLAs) in Google Ads - organically for free, currently in the US, with an international rollout to follow. Google goes on to say, ‘Today we're taking another important step to make it easier for retailers to sell on Google. Soon, sellers who participate in our Buy on Google checkout experience will no longer have to pay us a commission fee, and we're giving retailers more choice by opening our platform to third-party providers, starting with PayPal and Shopify.’ So this is what the title of the post is referring to when it says Buy on Google is now open.

They go on to say, ‘These changes are about providing all businesses, from small stores to national chains and online marketplaces, the best place to connect with customers, regardless of where a purchase eventually occurs. With more products and stores available for discovery and the option to buy directly on Google, or on a retailer's site, shoppers will have more choice across the board.’ Here's more on what's new for retailers. First, zero commission fees when customers buy your products on Google: ‘While retailers have several options for driving traffic to their website with free listings or with Shopping ads, many also use Buy on Google to give shoppers a convenient way to purchase something right when they discover it. By removing our commission fees, we're lowering the cost of doing business and making it even easier for retailers of all sizes to sell directly on Google, starting with a pilot that will expand to all eligible sellers in the US over the coming months.’ There's a link there to learn more about the requirements for the pilot, and a link to join the waitlist if you meet those requirements - I'll put both in the show notes at search.withcandour.co.uk so you can check whether you're eligible and get involved in the pilot if you want to.

Next they say, ‘Bring your own third-party providers, starting with PayPal and Shopify. We've heard from retailers that they want the ability to choose their preferred services for things like payment processing, inventory and order management. That's why we're opening our platform to more digital commerce providers, beginning with Shopify for inventory and order management, and PayPal and Shopify for payment processing. So if a retailer wants to sell directly on Google they can get started even faster and continue using the tools and services that already work for their business. Or if they're new to selling online, they'll be able to choose from multiple options when they sign up in our Merchant Center.’ This is where it gets interesting. The next point Google makes is, ‘Import your inventory with just a few clicks. To simplify our tools and make them more compatible with merchants' existing processes, we're enabling commonly used product feed formats. This means retailers can connect their inventory to sell directly on Google without having to reformat their data. We're also adding an option to let retailers add product information, like images or technical specs, by pulling from our existing database rather than having to upload it themselves.’ I had a little dig around the technical requirements and the eligibility criteria, and what they mean by ‘commonly used product feed formats’ is basically that if you're providing an Amazon-formatted feed, this Google service will work with it directly. So it's quite clear to me that they want to make the choice - hey, come and sell via Buy on Google as well as, or instead of, Amazon - very, very easy, because there's no commission. It doesn't cost you anything in terms of commission, and it's not really going to cost you anything in terms of setup if you don't have to reformat the feed.
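
For context, product feeds are just structured data files. Here's a minimal sketch of what a Merchant Center-style tab-delimited product feed looks like - the column names follow Google's documented product data attributes, but the product values are invented purely for illustration:

```
id	title	description	link	image_link	price	availability	brand	condition
sku-001	Trail running shoe	Lightweight trail shoe	https://example.com/p/sku-001	https://example.com/img/sku-001.jpg	59.99 USD	in stock	ExampleBrand	new
```

The point of Google accepting ‘commonly used’ formats is that a merchant already generating a feed like this, or an Amazon-formatted equivalent, shouldn't need to rebuild it just to join Buy on Google.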

Lastly, they finish with, ‘More products, more sellers, more choice. As we've made it easier for a broader set of retailers to sell on Google this year, we're also seeing a significant increase in demand to buy from and support small businesses. To help people discover these smaller merchants, we also plan to add a new small business filter on the Google Shopping tab and we'll continue adding features to help small businesses participate in commerce online. Everything we're announcing today will roll out first in the US and we're looking toward international launches later this year and in 2021. While we still have much work ahead of us, our goal is to make digital commerce more accessible for retailers of all sizes, all around the world…’ uh, yada yada.

I think it's fairly obvious what's happening here. It's commonly felt, thought, known, that Google sees Amazon as one of its largest competitors: many people who want to buy something now start that search on Amazon rather than Google, because people have Amazon accounts, the search is good and you've got things like Amazon Prime. Whereas previously, with Google's paid listing ads, their shopping ads, the inventory range was much, much smaller and you were sent off into individual stores like silos. So if you were trying to find a specific product and compare prices on Google, you would do a search, go down one rabbit hole to look at a site, come back to Google, look at another one, and come back again.

So the first thing, I think, is that with the PLAs - and now making the PLAs free, or I should say free again, because the shopping listings were originally free, but at least making them organic - Google has massively increased the amount of inventory it can show within that shopping experience. That shopping experience in Google has all the built-in filters, around things like price and brand, that people like, want and need when they're buying online. This step removes the last off-site bit of friction: while you can browse all of those products within Google Shopping, when it comes to ‘okay, I want this one, I want to buy it’, Google realises that if it sends people off to third-party sites, it can't guarantee what that experience is going to be. Someone can pay for a Google ad - and I've had this experience myself, where a company was paying for Google ads, I went to the site and basically couldn't buy what they were offering - whereas that's very unlikely to happen on Amazon. So it seems quite an aggressive approach to me: they're making it commission free, and they're making it work with other providers and other formats, because they want to grab this merchant market share and give people the option to sell on Google. Potentially a good thing for merchants, I think, especially with things like filters for smaller businesses. A lot of the complaints about Amazon are to do with Amazon either looking at product data and making its own versions of successful products, or counterfeit items, usually from China, being listed on Amazon. So being able to filter to support small businesses is potentially something customers might like, and more choice, even if it is just between Amazon and Google, probably isn't a bad thing. It's certainly very important, I think, for SEO, organic search, PPC and digital marketing as a whole: for these very focused, action-based commerce terms, the whole experience is now being tied up in one platform, whether that's Google or Amazon.

I think this is a logical thing to talk about next - interesting timing. Some of you may have heard of, or like me may have been watching, the hearing that's going on in the US at the moment: online platforms and market power, examining the dominance of Amazon, Apple, Facebook, and Google. I'm not going to go into my thoughts or opinions on the hearing itself - everyone could talk about that for a long while - but the dominance of these big four companies is being explored, with talk of potential anti-competitive practices, monopolies and so on, and as part of the hearing those companies have provided certain evidence and documents. What I want to talk about specifically is some of the Google documents that were surfaced. You can actually download the documents used in the hearing, so again, I'll put a link in the show notes at search.withcandour.co.uk if you'd like to have a look at them yourself.

A couple of caveats: while some of the content is redacted, as far as I can see that's only personal information; more importantly, the documents are quite old - the Google documents I'm about to talk about look like they're from about 2006. Especially in SEO terms, that's 14 years ago, so these are very, very old documents. The reason I want to talk about them is that one specific page from these Google documents has come up and is being talked about within the SEO community. So I'm going to read the excerpt from the page of the Google document that's caused the discussion.

The heading is ‘Continued investment in search quality’, so this is internal documentation from Google discussing what they're doing with search quality. It starts with, ‘Observation: we have many promising ranking initiatives underway for the coming year. Recent gains indicate there is significantly more possible on core ranking. Initiatives include...’ and then there's a list of a dozen or so bullet points: continued investment in core ranking via query and document understanding; continued investment in user signals like clicks - our search users create the first level of network effect for search quality and we are investing in this heavily; hard queries - queries for which users are frustrated even when they have told Google all they could, with a strong effort to improve the user experience for such queries; query structure analysis - identify different query types and look at past usage of those queries to improve ranking; suggestions for popular queries; RankBoost - continue developing our learning system to take human rating data as input and predict new ranking signals; non-web ranking in preparation for universal search - improve and standardise ranking for other properties by applying tried and true web search techniques augmented with domain information; and, as the last point, continued work on personalisation.

So how has this been framed within the SEO community? There was a tweet - linked in the show notes - summarising this excerpt of the document, which said, ‘SEOs are going to love this one. Yes, Google uses user signals like clicks. Yes, Google has a measure of domain authority, and yes, they machine learn against human rating data. All those denials, all those years but here it is all laid out in internal docs.’ I want to go through the way this information has been framed, because I think it's easy to misconstrue. These bullet points are fairly vague and, in my opinion, they're not telling us anything particularly new. On user signals like clicks, the best example I can think of - and I've referenced it before - is a really fantastic talk given in 2016 by Paul Haahr, a Google engineer, so already four years ago now. He gave a really nice in-depth view - in-depth from an outsider, layman's point of view, I guess - of what the engineers at Google are doing, specifically how Google works and what kinds of experiments they run, and in that talk he specifically said user clicks are part of the search quality improvement process. So both for getting ranking right and when they're testing different layouts, they will look at user clicks.

Interestingly, Paul said that while on the surface you might think user clicks are a really good source of actionable information, it's much harder to derive actionable things from them than you might expect; it's actually quite complicated. The document is not saying that Google is using user signals, like clicks, to rank individual websites, and I think that's a really important thing to take away from this. So yes, absolutely, Google does use user signals like clicks for search quality as a whole - I know this because Google have told us quite openly, and they've given specific instances. Interestingly, I saw Bill Slawski post some patents that describe Google using clicks for things such as: ranking entities in carousels; selecting top stories to display in carousels; selecting search suggestions in autocomplete dropdowns; continuing to show specific onebox results; showing local organic results pursuant to the Venice update; assessing document quality under some approaches, where click rates and click durations may be used to identify web spam; and ranking based on categorical quality.

So there are lots of uses there outside of core ranking. Going back to Paul's talk as well, it was very interesting to hear about the separation between how the core ranking calculations are done and how certain search features, like featured snippets, are then calculated - if that's the correct term - after that ranking is returned. So I think there is, firstly, a misunderstanding, or just a difference of opinion, about what people mean when they say ‘the algorithm’, because I've noticed Google is quite specific sometimes when they say something is not part of their core ranking algorithm or their ranking algorithm. They may actually be talking about one specific part of the overall set of calculations that produce the end results we, as users, see in the SERPs.

The second point in the tweet was: yes, Google has a measure of domain authority. I downloaded and read through all of these documents myself and I couldn't find any mention of domain authority in them. The document does mention ranking ‘augmented with domain information’, but I think it's quite a big leap to say that domain information is equal to a concept of domain authority. Lastly, the tweet says: yes, they machine learn against human rating data. This is talking about things like the quality raters, which we've spoken about before - the system where Google has groups of people manually rate websites against specific queries and mark whether they're highly relevant, or spam, and so on - and then essentially uses this human data to test how good a job their algorithms are doing. So if their algorithms are identifying a particular set of pages as good, and human raters are saying those pages are a bad match, Google can start to use this data to work out where they're going wrong. As humans, we all have ways that we consciously and subconsciously decide whether a page is good as a whole, or whether it's a good match for a specific thing we want, and the value of this is that Google is essentially getting sets of labelled data for their algorithms to be compared against. If there are whole clusters of pages they're getting wrong, they can start running algorithms over those pages to try to pull out other factors the algorithm is currently overlooking - whether you want to call those variables, ranking factors, metrics or whatever.
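
To make that general idea concrete, here's a tiny, entirely hypothetical TypeScript sketch of checking an algorithm's output against human labels. Every name and number in it is invented for illustration, and it obviously says nothing about how Google actually implements this:

```typescript
// Hypothetical sketch: comparing an algorithm's relevance predictions
// against human rater labels to find where they disagree.
interface RatedPage {
  url: string;
  query: string;
  algorithmScore: number; // algorithm's predicted relevance, 0..1
  raterLabel: "relevant" | "not relevant"; // human rater's judgement
}

// Pages the algorithm scored highly but humans judged irrelevant are
// the interesting cluster: what signals is the algorithm missing?
function findDisagreements(pages: RatedPage[], threshold = 0.7): RatedPage[] {
  return pages.filter(
    (p) => p.algorithmScore >= threshold && p.raterLabel === "not relevant"
  );
}

// Example usage with invented data:
const sample: RatedPage[] = [
  { url: "https://example.com/a", query: "running shoes", algorithmScore: 0.9, raterLabel: "not relevant" },
  { url: "https://example.com/b", query: "running shoes", algorithmScore: 0.8, raterLabel: "relevant" },
];
console.log(findDisagreements(sample)); // logs the first page only
```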

That last part is something I have heard Google publicly talk about - saying they could. I didn't actually hear them saying they did, but they said they could use that data for machine learning to try to improve search quality. So I'll put a big warning label on all of this for you: it's my opinion, my interpretation, based on things I've heard over the last 15 years working in SEO - listening to Google, listening to Matt Cutts, John Mueller, Martin Splitt and all the different people we hear from directly at Google, reading as much as I can from Google, reading the patents, talking to people like Bill who really specialise in that, and talking to other SEOs. To me, the way that statement is framed, as if it's some big reveal of Google doing this or doing that, just isn't quite right; in my opinion, there's nothing particularly new here. It's worth mentioning as well, as Bill always does, that just because something is in a patent doesn't necessarily mean it's implemented, or implemented exactly in that way. There's nothing really groundbreaking here for me: yes, Google does use user signals for overall search quality, but I don't think they're using them directly to rank websites.

We've had discussions previously on this podcast about twiddlers and the other add-on bits Google seems to have around its algorithm that contribute to switching rankings around, and I've definitely seen studies that showed rankings changing when lots of clicks were applied to specific results. I certainly think that's something Google takes into account to ride the wave when something's big in the news - it does shift the intent, and they do have algorithms that seem to follow that. For queries like ‘Halloween’, for example, the types of sites Google returns do change throughout the year as the majority of the user intent behind that search changes.

So I guess I'd say take it in your stride: even if those things were true, it probably shouldn't change what you're actually doing day to day in SEO. The only other thing I'd say is that Google, of course, has a very strong interest in us not knowing exactly what they're doing, and that's something they've been fairly open about. In their new podcast with John, Gary and Martin, Gary specifically said that certain types of information could be exploited - I'm paraphrasing - so they don't always go into exact detail, and their statements are carefully worded. But I don't think there's any particularly big new information in this document. It's still interesting - go and have a read of it, make your own mind up, and if you disagree, of course, do let me know, do tweet me; I'm always happy to have these conversations with you.

In the last segment of this podcast I want to talk a little bit more about cumulative layout shift, CLS. If you haven't heard of it, this is one of the three Core Web Vitals metrics that Google is going to integrate into its algorithm in 2021 - three metrics they believe are a good general measure of user experience that can be applied across all different websites. We've covered it twice already on the podcast: when the metrics were first announced, and a little more about how they're going to be integrated into ranking.

CLS stands for cumulative layout shift, and it's a measure of how much your website shifts around as it loads or on interaction. The more shifting around it does, the worse that's considered to be for user experience. I've quite commonly seen it happen on very advert-heavy websites: you click on some kind of clickbait advert, you start reading the article, maybe they want you to click through one of those slideshows or ‘next’ lists, and just before you click, some more stuff loads in and you end up clicking on an ad instead. It can be really frustrating.
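
If you want to see these shifts happening on your own site, browsers expose them through the Layout Instability API, which is what CLS tooling is built on. Here's a minimal sketch using the standard PerformanceObserver; the running total is a simplified approximation of how the CLS score accumulates:

```typescript
// Minimal sketch: observe layout-shift entries and sum them into a
// running CLS-style score. Shifts that happen soon after user input
// are flagged with hadRecentInput and excluded, as CLS excludes them.
let cumulativeScore = 0;

const observer = new PerformanceObserver((entryList) => {
  // The DOM type definitions don't model LayoutShift entries, hence the cast.
  for (const entry of entryList.getEntries() as any[]) {
    if (!entry.hadRecentInput) {
      cumulativeScore += entry.value;
      console.log(`shift: ${entry.value.toFixed(4)}, total: ${cumulativeScore.toFixed(4)}`);
    }
  }
});

// buffered: true replays shifts that happened before the observer started.
observer.observe({ type: "layout-shift", buffered: true });
```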

So I found a really interesting post about CLS and modern web development by Addy Osmani, an engineering manager working on Google Chrome. His post is entitled ‘Infinite Scroll without Layout Shifts’, and I found it really interesting. He talks about the three patterns we have for organising large amounts of content, and I'll go through his definitions just to give you a little intro to what the post is about. First, pagination: if you're not aware of it, this is dividing the content of a site, or search results, into pages, and it's still the most popular strategy in terms of user experience. Pagination gives us a sense of a specific location, such as a URL, and a choice of where to go next; it's a model that works well for accessibility and SEO, and it's widely used. Because pagination requires a click to navigate to the next page, there's an argument it has more friction for engagement compared to infinite scrolling on mobile, but your mileage may vary. The second technique is load more, a hybrid between pagination and infinite scrolling: a user must click or tap a ‘load more’ button for new content to be loaded in. It gives users a feeling of control over the content, with more logical breaks, and it has the benefit of letting users pause at the footer before deciding to load more content. And finally there's infinite scroll, which in many implementations prohibits the user from ever reaching the footer of the page - I've certainly seen that before, where I've wanted to click something in a footer and had to chase it down the page without ever catching it. Infinite scroll keeps pushing content down and can therefore cause layout shifts; in fact, this is one of the main design challenges of infinite scrolling. As items are constantly loaded when the user reaches the bottom of the list, the user can see the footer for a second or two before the next collection of results loads and the footer is moved out of view. It's not uncommon for sites to include a list of links, newsletter dialogues or social media call-outs in their footers, and as this content keeps getting pushed down on scroll, it can make your cumulative layout shift score worse. The same can happen on ‘load more’ sites if there's content in the footer. He then goes on to give some really nice video demonstrations of different sites and how they handle infinite scroll.
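
To picture exactly where that shift comes from, here's a minimal sketch of a typical naive infinite scroll implementation - fetchNextPage is a hypothetical stand-in for whatever data-loading function a site uses:

```typescript
// Naive infinite scroll: when the sentinel near the bottom of the list
// becomes visible, append the next page of items. Each append pushes
// the footer further down, and each push can count as a layout shift.
const list = document.querySelector("#results")!;
const sentinel = document.querySelector("#sentinel")!;

// Placeholder for your own data-loading function.
declare function fetchNextPage(): Promise<string[]>;

const io = new IntersectionObserver(async (entries) => {
  if (entries.some((e) => e.isIntersecting)) {
    const items = await fetchNextPage();
    for (const html of items) {
      const li = document.createElement("li");
      li.innerHTML = html;
      list.appendChild(li); // content arrives late: the footer jumps down
    }
  }
});
io.observe(sentinel);
```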

I'm going to link to this post in the show notes at search.withcandour.co.uk, and I really recommend you get your devs to have a look at it, or have a look yourself if you do your own development. Addy boils it down to three tips to help your CLS score with infinite scroll, which I'll go through now. Number one: reserve enough space for content that may be loaded in before the user scrolls to that part of the page. This can be achieved in a number of ways, including skeleton placeholders for content that requires data fetches to complete before anything can be rendered. In the videos he gave examples of Facebook setting these placeholders but not making them large enough for the content being loaded in, so as you scrolled and the placeholders were filled, it still caused layout shift. Number two: remove the footer, or any DOM elements at the bottom of the page, that may be pushed down by content loading in. This limits the impact on CLS, and I think it's a really good, really simple tip that a lot of people can apply with infinite scrolling: if you've got infinite scroll, that footer is effectively useless anyway because users can't catch it, so on the pages where you implement infinite scroll it's a good idea to remove it. Even if you're not quite there with the placeholders, you'll improve your CLS score and maybe frustrate your users less. Number three: prefetch data and images for below-the-fold content so that by the time a user scrolls that far, it's already there. This approach is more complex, but it goes beyond just reserving space for the next sets of content, because there's a good chance the content has already been fetched. A sketch of how these three tips might fit together follows below.
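
Putting those tips together, here's a hedged sketch of what the fix might look like: fixed-height skeleton placeholders reserve the space before the data arrives, the observer's rootMargin makes the next page start loading a full viewport early, and the page template simply omits the footer so there's nothing below to push down. ITEM_HEIGHT and fetchNextPage are assumptions for illustration:

```typescript
// Sketch of CLS-friendlier infinite scroll:
// 1) reserve space with fixed-height skeleton placeholders,
// 2) no footer in the infinite-scroll template (nothing below to shift),
// 3) prefetch one viewport early via rootMargin.
const ITEM_HEIGHT = 120; // px; assumes list items have a known height
const PAGE_SIZE = 10;

const list = document.querySelector<HTMLUListElement>("#results")!;

// Placeholder for your own data-loading function.
declare function fetchNextPage(): Promise<string[]>;

function addSkeletons(count: number): HTMLLIElement[] {
  return Array.from({ length: count }, () => {
    const li = document.createElement("li");
    li.className = "skeleton";
    li.style.height = `${ITEM_HEIGHT}px`; // space is reserved up front
    list.appendChild(li);
    return li;
  });
}

const io = new IntersectionObserver(
  async (entries) => {
    if (!entries.some((e) => e.isIntersecting)) return;
    // Reserve the space first, then fill it: filling a fixed-height
    // placeholder in place causes no layout shift.
    const placeholders = addSkeletons(PAGE_SIZE);
    const items = await fetchNextPage();
    items.forEach((html, i) => {
      placeholders[i].classList.remove("skeleton");
      placeholders[i].innerHTML = html;
    });
    // Drop unused placeholders; they sit at the very bottom, so removing
    // them doesn't shift any content the user can currently see.
    placeholders.slice(items.length).forEach((p) => p.remove());
  },
  // Start loading one full viewport before the sentinel becomes visible.
  { rootMargin: "100% 0% 100% 0%" }
);
io.observe(document.querySelector("#sentinel")!);
```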

The post is really good - as I said, I'll link to it in the show notes. Go and read it, get your developers to read it and talk to them about it. It's a good time to start getting everyone on board with these Core Web Vitals: Largest Contentful Paint, First Input Delay and CLS. Start having these conversations with development teams and internal developers now, let them know what the metrics are, get them measuring them on your sites, and let them know it's going to have an SEO impact in 2021. It's a good set of three metrics we can use for user experience that aren't just based around speed - speed does not equal a good user experience; they're different things. So get your devs to read it, read it yourself and start those discussions.
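
One easy way to start measuring is Google's open-source web-vitals package. This minimal sketch uses the v1 API as it was around the time of this episode (the exported function names have changed in later versions), and sendToAnalytics is a placeholder for your own reporting:

```typescript
// Report the three Core Web Vitals from real users with the
// web-vitals package (v1 API: getCLS / getFID / getLCP).
import { getCLS, getFID, getLCP } from "web-vitals";

function sendToAnalytics(metric: { name: string; value: number }) {
  // Wire this to whatever analytics you use, e.g.
  // navigator.sendBeacon('/analytics', JSON.stringify(metric));
  console.log(metric.name, metric.value);
}

getCLS(sendToAnalytics);
getFID(sendToAnalytics);
getLCP(sendToAnalytics);
```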

So that's everything for this episode, I'll be back on Monday the 10th of August. Please tune in then, please subscribe, leave a review if you're enjoying the podcast and otherwise have a brilliant week.
