Candour

Episode 66 - SEO, information retrieval and foraging with Dawn Anderson

What's in this episode?

In this episode, you will hear Mark Williams-Cook talking to Dawn Anderson about information retrieval (IR) and why SEOs should be learning more about it as a topic to become better at what they do. They discuss how IR is being used to identify authors, IR resources for SEOs to follow, the use of Human In The Loop in ML/IR and information foraging and value scent.

Show notes

Solomon Golomb algorithm

https://en.wikipedia.org/wiki/Solomon_W._Golomb

https://en.wikipedia.org/wiki/Golomb_coding

https://en.wikipedia.org/wiki/Lossless_compression

https://www.sciencedirect.com/topics/computer-science/compression-algorithm

Oren Halvani (Author Verification Algorithms)

https://scholar.google.com/citations?user=RrnzzhQAAAAJ&hl=en&oi=ao

https://www.sciencedirect.com/science/article/pii/S1742287616000074

https://dl.acm.org/doi/abs/10.1145/3098954.3104050

https://arxiv.org/abs/1706.00516

Interview with Dr Susan Dumais (Microsoft) – one of the inventors of LSI, a major researcher on personalised search and conversational search, and also behind the ‘vocabulary problem’ paper

https://www.microsoft.com/en-us/research/podcast/hci-ir-and-the-search-for-better-search-with-dr-susan-dumais/

Professor Hannah Bast (University of Freiburg, formerly Max Planck Institute)

Series of 18 lectures (2017 / 2018) with downloadable materials open sourced for all:

https://www.youtube.com/playlist?list=PLfgMNKpBVg4V8GtMB7eUrTyvITri8WF7i

Stanford IR Book (co-authored by Prabhakar Raghavan, Google’s new Head of Search)

https://nlp.stanford.edu/IR-book/pdf/irbookonlinereading.pdf

Book by Bruce Croft (a huge leader in IR from The University of Massachusetts)

https://ciir.cs.umass.edu/irbook/

Modern Information Retrieval (Ricardo Baeza-Yates)

http://grupoweb.upf.edu/mir2ed/authors.php

The Zipf Mystery

https://www.youtube.com/watch?v=fCn8zs912OE

Foraging theory

https://www.interaction-design.org/literature/book/the-glossary-of-human-computer-interaction/information-foraging-theory

Information on Mohammad Aliannejadi

https://t.co/2LA8lHOKoq?amp=1

https://t.co/HfhPxzu8O2?amp=1

More recent work by David Maxwell:

http://www.dmax.org.uk/publications/

Information about scents

http://www.dmax.org.uk/publications/ecir-serp-stopping/

http://www.dmax.org.uk/publications/temporal-delays/

Transcription

MC: Welcome to episode 66 of the Search with Candour podcast, recorded on Friday the 19th of June 2020. My name is Mark Williams-Cook and today I'm joined by Dawn Anderson, who's Managing Director of Bertey, and we're going to be talking about SEO, information retrieval, and foraging theory.

Dawn, thank you so much for taking the time to join us today on the podcast. As usual, I would love to give you a few moments to introduce yourself - if there's anyone in the SEO community who doesn't know who you are, here’s your chance to let them know a bit about you.

DA: Hi Mark. Thank you for inviting me on this podcast. I've been a bit quiet of late, so it's nice to actually speak to people. I'm Dawn Anderson, I've been doing SEO for 13 years now and I also do some lecturing as well as SEO consulting. I help brands and their in-house teams to skill up in SEO, and I also run a boutique, a small agency, in Manchester, but we deal with people from all over the world. So that's about the size of it - yeah, I've been doing this quite a while.

MC: So I actually checked - as many of you in the SEO industry know, a lot of my main communication with other SEOs is through Twitter, and I went back through my mentions and saw that our first conversations were about six or seven years ago on Twitter. I noticed, because you're quite well known on the conference circuit, that you did say you were going to be a little bit quieter in 2020 - I saw a few other people were planning that too, and we've kind of been forced into it anyway with the whole coronavirus situation. Have you been attending or looking in on any of the online stuff that's been going on, now that a lot of these meetups have transitioned online?

DA: Well, I'd already committed to speaking at SMX London and they converted that to a virtual event, so I spoke at that. I've been asked to speak at a few others, but obviously client work had to come first - not that I'm saying other people don't prioritise clients, of course - but some clients were impacted by coronavirus and had to close, all sorts of things, so it's been a bit chaotic and I had to focus very much on helping them through this challenging time. So I didn't get a chance to look at as many things online as I'd have liked to. I mean, there's a huge amount of webinars out there; if people in the past maybe didn't get to conferences, it's certainly a time when there's a lot to learn from. I really feel for the event organisers, because ultimately it's their business that's literally gone to the wall, and they're having to be innovative in new ways to generate audiences and keep the audiences that would otherwise drift away throughout this time.

MC: Yeah, sure. I mean our event - the one we were obviously trying to get you to come and speak at at some point, SearchNorwich - we just decided, as it's not a revenue thing, it's just a nonprofit thing, to put that to bed for now, which is fine because, as you say, we've got other things on. Again, clients that needed extra time and attention. But yeah, I think you're right, and that online meetup space is super competitive, or just crowded now, and it isn't working for some people in quite the same way, but you know, it's maybe a good time to get involved if you haven't done a talk before.

But what I did want to talk to you about, and the subject of this podcast I’m really keen to get my teeth into, is information retrieval. This is an area where, if I think about information retrieval, you are, at least within the SEO community, at the top of my list, the tip of my tongue, top of mind for that. So first, just give us an idiot's definition: what is information retrieval in the context of SEO?

DA: Okay, so information retrieval is, well, it's the bigger thing behind search. Web search is a child, if you like, of information retrieval - so it's web search and everything involved in that. SEO, paid search and anything, any kind of method of crawling, indexing, ranking, is part of information retrieval. So it's the bigger picture, and I mean you can get really granular with information retrieval, of course, but a lot of people don't seem to make the connection between SEO, search, and information retrieval. But search is part of information retrieval; it is any way that information can be fetched to meet an informational need.

MC: Well, one of the things that really interested me that I saw you say was that obviously you spend time, like many SEOs, reading books, blog posts, webinars, but you said you actually spend less time now doing that and more time reading information retrieval books and papers, or, you know, watching lectures, while you're working. So how can knowing about information retrieval help SEOs with their jobs?

DA: So, a lot of people think that search is a big secret, that SEO is a big secret and it's almost like magic, but it’s not. Every single discipline out there has a background in research, and SEO and web search are no different. So it's really understanding the foundations of how search engines work. Obviously, each of them will have their own take - it's like any commercial venture, it's not going to be exactly as it would be out of the box; you don't suddenly say, hey, you know, I can do PHP, so suddenly it's not a big secret, if that makes sense. So it's understanding really the core of search - it's really hard to explain, but it's understanding the core of search.

So, for instance, I was reading a book last week called Search Result Diversification, and it’s actually by researchers and search engineers. It looks at the challenge of returning diverse results, ranked well, and it discusses lots of algorithms which search engines - presumably including Google - use. There are a lot of Google engineers and researchers that attend information retrieval conferences and present papers, and, for instance, Google BERT came from the information retrieval world. So it literally is search. I read a lot of books around how search works, if you like.

MC: So, I mean we had - it wasn't too long ago, was it, probably six months, and I'm probably going to find that it's totally wrong when I go back and look - but we did actually have last year the Google update that Danny Sullivan announced, which was specifically around domain diversity, because that was a challenge Google was having, especially with sites like Amazon where you've got these really authoritative sites and you do a search, and you get maybe five out of the first six results being different Amazon URLs, because that's not particularly helpful, is it, to the user.

DA: No, absolutely not, but result diversification goes a lot further than that. That was almost like a de-clustering of sites from the same domain. I know they've revisited that a few times because the likes of Yelp come back up; they're using their internal linking structures and, if you like, the cross-referencing and relatedness of various pages together, which kind of boosts them. But search result diversification is about how we can meet all the different informational needs that somebody has when we don't really understand what the query means.

MC: Okay, so this is around intent then.

DA: Yeah, because it’s really, really hard. I mean, if you follow the likes of Susan Dumais, who's one of the chief researchers at Microsoft and a real giant in the information retrieval and search world, she looked, many years ago, at what she called the ‘vocabulary problem’, which is that people just don't use the same words to describe the same thing. So you have the challenge of the query, and then you have the challenge of actually understanding the documents as well. And it's Search Result Diversification - which was only a 70-page book, written by a research team in Glasgow who are at conferences and doing papers and all sorts, very smart guys - that looked at the challenge of how we can return the best set, combining what they call both precision and recall. Precision is about the most accurate, most relevant results. Recall is about everything that is related or kind of relevant, but not exact. People like to have a choice, that’s the whole point of search results as well, so that's really what search result diversification is about, and those are the kind of books I spend my time reading.
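To make the precision and recall trade-off Dawn describes a little more concrete, here is a minimal Python sketch; the documents and relevance judgements in it are invented for illustration, not taken from any real search engine.

```python
# Minimal sketch of precision and recall for a single query.
# The document IDs and relevance judgements below are made up for the example.

def precision_recall(retrieved, relevant):
    """Return (precision, recall) for one result set."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical data: docs the engine returned vs. docs a human judged relevant.
returned = ["d1", "d2", "d3", "d4", "d5"]
judged_relevant = ["d1", "d3", "d7", "d9"]

p, r = precision_recall(returned, judged_relevant)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.40 recall=0.50
```

A result set can score highly on one measure and poorly on the other, which is exactly the tension that diversification work tries to balance.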

MC: So that's really interesting, because that's a process we go through with clients, just in terms of intent. To give you an example, if they come along and say, okay, we want to rank our service for this kind of search term, and then when you start looking at that term you can see, for instance, that the type of results you get back might be, say, comparison results, rather than individual service providers like themselves. We can start to have those conversations about, well, Google's deciding here that the primary intent of that person searching is actually that they want to do a comparison, they don't want a specific result. But then with these kinds of broader results, one thing I've noticed - and this is a strictly anecdotal thing, I know some people don't agree with it - is that Google tends to push up, or make more prominent, things like People Also Ask when the query is super generic, as if it's trying to understand the user's query better. Like if we had a conversation and you said, tell me about this topic, and I'm like, well, what part do you want to know about? Because I see this now just with the standard organic results as well, where we might get a couple of clusters of totally different types of search results, where Google's identified different things. The one I always used to use as an example in SEO training was doing a search for something like the word blender, which, as you may know, is a piece of software for 3D modelling but is also a product, so Google was like, I'm not quite sure what you're looking for, so it will serve both.

DA: Exactly, and funnily enough, I have the blender example in a training deck, because if you search blenders, it returns a shopping result…

MC: That's right, yeah.

DA: And if you search blender, it’ll return the brand with a knowledge panel on the right, and I think there’s a film as well.

MC: That's right, blended!

DA: But that's exactly the point. I mean, the search result diversification book - the point is that it's a tiny, tiny, tiny part of the whole field of information retrieval. I mean, even in the world of SEO, as we know, there's local SEO, there's technical SEO, there's people who do PR and links and that kind of thing, and then there's brand SEO, and now there's AEO; as in any industry, it begins to fragment as people start to specialise.

IR is many, many years old and it has many, many fragmentations, because so many parts of it are completely different. Image information retrieval is a completely different field to text information retrieval; obviously you have specialists in that because we know images are not ranked in the same way - they are classified, because they're not crawled for text. But that's just an example.

Then you have music information retrieval, then you have library science, then you have the whole machine learning world and the Google BERTs of this world and so forth, which is taking it all forward through a quantum leap. So there are many areas to it. The course I watch continuously is by Professor Hannah Bast - I think she's at the University of Freiburg actually, but she was at the Max Planck Institute for years - and that has 18 one-hour-long lectures, each of them on a different topic. Be that compression algorithms - how search engines compress the index for space and efficiency, which is obviously a big deal - or natural language processing, or machine learning. And I know I'm talking a lot now, but another thing about information retrieval is that I see a lot of people in SEO say, oh, it's all about the algorithm. There's no such thing as ‘the algorithm’ - anything that is basically a mathematical way to provide a solution or to order something is an algorithm.

Your average shopping site has an algorithm that drives its results when you enter something; even in WordPress, when you enter a search into the box on a blog, that's an algorithm that spits out results.

MC: Albeit a bad one.

DA: The point is, there's no such thing as ‘the algorithm’ - there are absolutely hundreds, if not more, of algorithms out there, and the lectures, for instance, show us that. You know, there must be six or seven different algorithms just for compression and efficiency. There's one called the Golomb algorithm, by Solomon Golomb, that's about compression, and it’s all just maths basically. For me, SEO is combining maths with marketing.
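As a rough illustration of the kind of index compression Dawn is describing, here is a hedged sketch of Golomb-Rice coding - the power-of-two special case of Golomb coding, used here purely to keep the code short - applied to the gaps of a made-up posting list. The document IDs and the parameter choice are illustrative only, not how any particular search engine does it.

```python
# A minimal sketch of Golomb-Rice coding applied to posting-list gaps.
# Document IDs below are invented for the example.

def rice_encode(n: int, k: int) -> str:
    """Encode a non-negative integer n with Rice parameter 2**k as a bit string."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")  # quotient in unary, remainder in k bits

# Inverted indexes usually store the gaps between sorted document IDs rather
# than the IDs themselves, because small gaps compress well.
doc_ids = [3, 7, 11, 20, 22]
gaps = [doc_ids[0]] + [b - a for a, b in zip(doc_ids, doc_ids[1:])]

encoded = "".join(rice_encode(g, k=2) for g in gaps)
print(gaps, "->", encoded)  # [3, 4, 4, 9, 2] -> a compact bit string
```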

MC: So what we'll do, I think, Dawn, at the end of this podcast is I'll get some links to some of these people that you've mentioned and we'll put them in the show notes at search.withcandour.co.uk, so if you want to start following the people Dawn’s talked about, I'll make sure I get those links from her.

Dawn, there's something I specifically want to pick up on. We've spoken about information retrieval as this kind of broader topic, of which SEO is a child, and how it shards into many different areas of SEO in different ways, but there are a couple of specific things I did want to talk to you about, because you had a really interesting tweet I saw a few days ago which, if you don't mind, I'll just read out and then we can discuss it. So you said you met an IR/ML - an information retrieval machine learning - researcher from the Max Planck Institute at ECIR, who develops machine learning algorithms to detect author footprints in just three lines of code. His work was for insurance fraud, but apparently these things are widely used. I suspect it will be very easy to see who's who online. And this really piqued my interest, especially because in the field of SEO we had Google authorship come and go, and we've got lots of people now talking about expertise, authority, trust, and getting experts to write authoritative content - those are topics people speak about a lot now in terms of SEO, and it's not always particularly clear, I don't think, exactly what people mean by them. So I'm interested in your thoughts about what might be happening in terms of identifying authors, and is there something going on there that you think Google would be working on?

DA: Okay, so first and foremost, for me, EAT is very much just an acronym that's used as a simple layperson's way for quality raters to understand how to mark things as a trustworthy result. Now, I've read quite a bit about how the human-in-the-loop quality rater enters the equation when it comes to producing search results. A test algorithm is run, or the results are refined over a period of time and then obviously tested etc., and then they're pushed out to humans, because the best measure of whether something is good is humans - obviously Google has the quality raters, and then the results all come back anonymously. I've read this as well in a post from somebody on Quora, an engineer who previously worked at Microsoft on their search engine, but human-in-the-loop is in a lot of books on IR, and that's basically the human test, if you like. Not just that - human-in-the-loop could also mean the user of the search engine providing feedback, reporting that results are not great or whatever - dare I say, clicking on things.

MC: While you're talking about that, I'll just interject, and again I'll put a link to this in the show notes. I think you'll like this one as well, Dawn - one of my favourite talks about how search works, I think it's from SMX 2016, is Paul Haahr talking about how search works from an engineer's point of view, and he's quite specific in how he talks about the difficulties in using click data. So if you're listening and that's triggered your thoughts about how Google uses clicks, go watch that video and come back.

DA: It’s not about that, it's just massive data - it's massive, massive data that all gets combined to adjust things broadly, and then changes are pushed out to samples of humans who go and score things. And then, when all the scores come back in, there's this thing called discounted cumulative gain - normalised discounted cumulative gain - whereby all the scores are adjusted slightly based on these many samples of humans to say, hey, well, that's about it. So it comes back to maths, and that's it. But there is validity in using the word EAT. I'm a great fan of some of the people - Lily Ray, for instance, is super smart, she knows her stuff, she's a good friend of mine and I've had some really great conversations with her, and she's very, very clever, and I know she's a huge advocate of trust and authority and she's getting the results, so that's fair enough.
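For anyone curious what 'normalised discounted cumulative gain' actually computes, here is a minimal sketch. The relevance grades are invented for the example, and the formula is the standard textbook version, not necessarily exactly what any particular search engine runs internally.

```python
# Minimal sketch of discounted cumulative gain (DCG) and its normalised form (NDCG).
# Relevance grades below are invented (e.g. 0 = irrelevant ... 3 = highly relevant).
import math

def dcg(relevances):
    """Sum each result's gain, discounted by log2 of its position."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    """DCG divided by the DCG of the ideal (best possible) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal else 0.0

# Hypothetical rater scores for the top five results of one query.
rater_scores = [3, 2, 3, 0, 1]
print(round(ndcg(rater_scores), 3))  # about 0.97
```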

So I think there is validity in it. I think we all have terms that we use that are simple to explain to clients, and I think that's why Google came to call it EAT - I don't think you'd ever find an IR engineer calling it EAT. I asked Carrie on Twitter when we were having a bit of a joke about it, and the only person who's ever mentioned it is Paul Haahr, and that was only when he was talking at conferences. Some of the things I'm talking about now I probably never get to talk about with clients, because they're too geeky, and sometimes the IR speak is too geeky, so it has to be made simple for quality raters, who need a simple way to say, how can I tell if somebody's an expert, an author, trustworthy, when they go and investigate.

Okay, so that's that point, and then on to the other issue about authorship and trust etc. So the chap I was talking about when I said I met an IR researcher at ECIR - ECIR is the European Conference on Information Retrieval - we went to dinner and I sat next to a chap who's, again, a super smart guy, many years at the Max Planck Institute, and he's just finishing off his PhD. His subject matter is author detection for fraud companies, but he was actually speaking at ECIR, and he developed an algorithm which was tiny, just a few lines of code, using compression algorithms, that is able to look at author footprints. And if you want to look him up - just in case you don't believe me; who would think that somebody wouldn't believe somebody in the world of SEO - he's called Oren Halvani, he was speaking at ECIR, and his work for six years now has been on building algorithms that can be used in search, or in technology, to detect fraud. His work is fascinating actually, and I'm pretty sure there'll be something like that in place with the whole guest posting thing. I mean, it’s really easy for us as humans to spot - today, or yesterday, I got an email from somebody that said, hey, I'm John blah blah - I'm using an example of a common name - and he had a picture as well, so he clearly had a Gmail account, but no other footprint at all.

Yeah, so this doesn't take a rocket scientist, and I think there are a lot of people in SEO who have been, a lot of the time, trying to sweep up after themselves, thinking that machine learning systems can't pick up on the fact that this is a fake person. Seriously, I'm not sure that it's worth the effort. I think they should realise that Google is really, really smart at being able to tell who people are, and algorithms can pick up an outlier just like that. Certainly, if you look at the work of somebody like Oren Halvani, there are lots of papers on that. I imagine they'd be able to cluster together all the papers I'd ever written, all of the blog posts I'd ever written, all of the contributions, because I'll have a style - you know, it's just easy. And when somebody asked John on Twitter - that's how the conversation started - he said it's pretty obvious when something's a guest post, so I'm not sure it's worth the effort for SEOs to be trying to trick Google.
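To give a flavour of how compression can surface author footprints, here is a generic sketch using normalised compression distance. This is not Oren Halvani's method - just an illustration of the general idea that similar writing compresses better together - and the texts are placeholders.

```python
# Generic sketch of normalised compression distance (NCD) for comparing writing.
# NOT any specific author-verification algorithm; texts below are placeholders.
import zlib

def compressed_size(text: str) -> int:
    """Compressed size of a text in bytes."""
    return len(zlib.compress(text.encode("utf-8"), 9))

def ncd(a: str, b: str) -> float:
    """Normalised compression distance: lower means more similar style/content."""
    ca, cb, cab = compressed_size(a), compressed_size(b), compressed_size(a + b)
    return (cab - min(ca, cb)) / max(ca, cb)

known_author = "Example paragraph written by a known author, in their usual style ..."
candidate_one = "Another example paragraph, possibly by the same author, similar phrasing ..."
candidate_two = "Completely different writing, different topic, different style entirely ..."

print(ncd(known_author, candidate_one))  # smaller -> more alike
print(ncd(known_author, candidate_two))  # larger  -> less alike
```

In practice you would run this over much longer samples; on short snippets the signal is weak, which is part of why real author-verification research uses more sophisticated features.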

MC: Yeah, that's really interesting, because obviously they did - and when I say they, I should say John Mueller from Google - he did have that comment, I think it was this week or last week, when they were confirming that any time you do a guest post, that link should be nofollowed. But he also seemed kind of blasé about it, because the comment he made was, we're pretty good at algorithmically detecting when it's a guest post now, which I just thought was quite impressive and goes along with what you're saying. I've seen mixed results myself, but obviously the web is very big and I'm only looking at a tiny part of it. I think the point you're rightly making here is that, look, we're basically training machines to spot patterns, and machines actually do a much better job at spotting patterns, especially with lots of data, than humans do. So while we use human-in-the-loop stuff almost like fitness testing and real-world testing to get that granular view, if we're looking at masses of data then, like you said, it's about spotting outliers, and when you've got literally billions - a very big number - of pages to compare, the outliers do become obvious.

DA: Yeah, they do. Authoritative authors will have built a cluster that will stand out, and then you have all these other randoms all over the place. I'm trying to implement a lot of the things that I learn in IR, for instance to visualise clusters and link graphs, and I think, you know, the guest-posting author will stand out like a sore thumb in the link graph, and all the places they've posted will literally be visible if you look at it in terms of a link graph, because they send these lists of, like, hey, I write for this.

MC: Yeah, right.

DA: But when you look at it from above, you'll have what is effectively a map of potentially all the places where they've posted. So it's not difficult; I think we need to step back. And I mean things like Zipfian distribution - that's a big thing I've learned a lot about in the world of reading information retrieval books, and it literally applies to everything. Zipfian distribution will apply to the most authoritative writers in the world on the topic of SEO, or on any topic.

MC: Can you give us a quick definition of Zipfian distribution, Dawn?

DA: It's about how anything that is ordered is distributed. Literally, the word 'the' is the most used word in the world; the second most used word is used half as much as 'the', and the third is used a third as much, and that distribution of frequency applies to anything in the world - populations of cities, every language in the world follows this Zipfian curve, where it just drops away like that every time, across every single version of Wikipedia, every language - even languages that haven't been translated yet - popularity, everything. Seriously, it's a power law, and that power law applies massively to SEO.

For instance, it's the reason why PageRank drops away the way it does, because in a frequency distribution everything is proportional to one over the rank. So that is a big thing; understanding that has massively helped me in being able to look at these bigger concepts and understand, for instance, that some authors will just stand out because of this whole Zipfian distribution. Then you'll have all the random guest posters, from all over the place, on the very edges of this Zipfian curve.
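If you want to see the Zipfian pattern for yourself, here is a small sketch that counts word frequencies in any longish text and compares them to the one-over-rank prediction; the filename is only a placeholder, not a real dataset.

```python
# Small sketch of the Zipfian pattern: word frequency is roughly proportional
# to 1 / rank. Point it at any longish text you have to hand - "sample_text.txt"
# is just a placeholder filename.
from collections import Counter

with open("sample_text.txt", encoding="utf-8") as f:
    words = f.read().lower().split()

top = Counter(words).most_common(10)
top_freq = top[0][1]

for rank, (word, freq) in enumerate(top, start=1):
    # Under Zipf's law the observed frequency should sit near top_freq / rank.
    print(f"{rank:>2} {word:<15} observed={freq:<6} zipf_prediction={top_freq / rank:.0f}")
```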

MC: So are these algorithms going to be smart enough? Because obviously I got married and changed my name, Dawn, to Mark Williams-Cook, so are these algorithms going to be smart enough to know that?

DA: Well, they will over time. You’re an entity - well, you're not an entity, you are but you're not, because true entities are in the knowledge graph etcetera - but what you'll find is that, for a while, Mark Cook will come up and then Mark Williams-Cook. It's about named entity disambiguation, and that is where things like unlinked mentions come into play, and again Gary Illyes mentioned that - when somebody asked what's the point of unlinked mentions, he said, we use them for entity disambiguation. So for a while, but they'll work it out. It's why, for instance, they struggle so much with locations in the UK, because, you know, Manchester is in Greater Manchester, but then the city of Manchester is a child of Greater Manchester, and it's like that throughout the whole of the UK and throughout the whole of the world, and people get it wrong.

So you have all these databases where people are guessing wrong, and then you've got Wikipedia that's out of date, so you can see disambiguation is a difficult thing, but it's again all part of information retrieval.

MC: So this is fascinating, Dawn, this is really interesting stuff. We're already at 30 minutes and there's one other subject I did want to cover with you, because I saw you briefly mention it and it really interested me - I had to Google it. It was back in 2010, so ten years ago, that I saw Duncan Morris from Distilled do a talk about foraging theory.

And optimal foraging theory, in the context of humans trying to find information. I saw you mention there's a lot of research still going into the search behaviour of humans and how they use scents and patches, and I really just wanted to spend a few minutes talking about foraging theory. So it would be great if maybe you could just explain to our listeners what foraging theory is, how they can go about understanding it, and how it is helpful for SEO and user experience.

DA: Okay, so first and foremost, foraging theory is really old. I think one of the first papers was by authors called Pirolli and Card, and it dates back years - I can’t really remember how long, but it's a long time. Basically it relates to the way that humans forage for information in the same way that animals forage for food: we'll follow a scent as long as we think there is value to be had and that we're moving further towards our end goal. And foraging theory is mentioned a lot in the field of information architecture, which is obviously a side topic to information retrieval.

Information architecture is where things are all built together in an information structure - obviously there's a science to that as well - and then you have information retrieval, which is about the fetching, organising, indexing, like a filing system, and then serving, like a librarian would if you asked for a book. But the point is, research into foraging theory is still ongoing - again, the team from Glasgow are working on it - because humans don't stop. Humans are constantly changing the way that they search, new technology is coming through all the time, but we still have those same informational needs, because humans live off information. People, as we know, never all search in the same way, as we're all different, and each generation obviously changes as well. You've got the likes of people now on a mobile, so a lot of the time they want to use the internal search box on a website rather than actually browsing through, but then other people will choose to browse and click from one section to another. So information foraging is about that: understanding the behaviour, making sure people realise that there's still value beyond that next step as they seek, forage and follow what they call information scent and patches. That's why the likes of 'click here' on a link is a really bad thing - not only for accessibility, but it's bad for information foraging because you're not telling people what the gain is, what the value is behind that link. That's also why, if you're deceptive in your anchor text and send somebody through a bad door, that's bad for information foraging, because you basically reduce the value the user perceives you have for them in finding the information that they need.

MC: Is part of foraging theory also to do with friction and resistance in getting that value? So, for instance, my search behaviour's changed: if I want to know the weather now, I just ask my phone by voice, because it's basically less effort than typing something into Google. So is that something you consider when you're thinking about foraging theory and providing that information and value?

DA: Foraging theory is more specific to people rooting around, not really knowing particularly what they're looking for. But I guess it is all part of the same thing. I mean, for me, the likes of mobile have changed everything, as we know, and for me now the features that you can offer are information - you just used the example of the what's-the-weather thing, and the voice search, asking your phone.

A way of helping with information foraging, in a voice sense for instance, could be clarifying questions, because voice search has been very poor, so obviously people almost stopped bothering with it a lot of the time, other than to listen to Spotify or to check the weather, which is one of the most popular things - because it was pretty rubbish for quite a while and it's still not amazing. And again, another person I know is working specifically in the field of conversational search, Mohammad - he's a great man, I can never pronounce his surname, I think it's Mohammad Aliannejadi - he's a researcher from Switzerland who has now moved to Amsterdam, working in the field of conversational search. He did a great piece of research that found that when the search engine or conversational assistant asks a clarifying question - such as, did you mean blah blah blah, you know, we have five of those, which one is it - it actually makes a massive, massive difference to the value people perceive in the information you're giving them; it helps them meet their need, because you're bridging a gap. And I think that's probably another reason why People Also Ask is in there - it's to help people on that journey.

MC: It is interesting when you compare information foraging to foraging for food, and people not being sure about what they want, because it just reminds me of all the times I've got sucked into that circle of behaviour on Wikipedia, where you go to look up one thing and then you end up clicking on link after link and reading something completely different. I guess that's the information equivalent of binge eating - you just go to find something, you get set on a scent trail, and then that's me done for an hour and a half, ending up reading about Greek history or something when I started off reading about algorithms.

DA: Yeah, exactly, and that's why the likes of contextual internal linking are very, very important, as long as it's useful and helpful, because you want people to be able to continue that journey. Some people like browsing - I get the impression that you're like that, as you just said, you like to go off and follow a path of information. Like me, I like to just keep going down rabbit holes, because I find that those are where all the best things are. So yeah, it's very much akin to how animals behave; it's an interesting topic, and it's very much part of UX as well now. So all of these disciplines are very much combined.

MC: So to round this podcast off, have some fun and speculate with me, Dawn - we won't hold you to any of this, I promise. Have you got any thoughts on how SEO might change, not necessarily just in terms of IR and machine learning, though those topics are welcome - what do you think we're going to see changing over the next five years or so?

DA: I think search is just going to become increasingly about features that you can provide to help users. Obviously words still matter, but, for instance, as you said with the weather, being able to just get the weather. On e-commerce sites, for instance, the other day I think Marie had an interview with John Mueller and he was talking about all those pointless words you see on e-commerce websites, on the category page, that are clearly made for search engines. I asked him that question a few years ago on a Webmaster Hangout, and I think we'll move more and more away from those things into a fewer-words type of scenario, but features matter - so, lots of great pictures on that fashion website. Is there a runway video? Is there a 360-degree image? Is there a facility to be able to try on that outfit virtually? Are there all these really interesting features? Is there a section that answers the questions that all those people have asked?

So, layering on features is the way forward, I think. But obviously that's just a very small part. I mean, for me, search will become increasingly just an assistive part of our lives. The challenge is, I think a lot of people will start to devalue SEO and not realise that SEO is as much, if not more, of a necessity as those features become part of an offering, because they all need optimising, and there'll still be fuzzy edges that search engines won't be able to work out. There's still continuous change, and it's not enough just to be a marketer and to just guess, because SEO is a finely tuned balancing act of watching for change, understanding why or how that change might have happened - was it something on the site technically, was it because of query intent, was it because of just a general change in user information needs on the web or in the territory as a whole? So I think it's important that people realise SEO still needs to be very much at the heart of all of this, otherwise it is literally - and I said I wasn't going to use these words - it is literally just guess SEO.

MC: I knew it would get in there. Dawn, that's a brilliant way, I think, to wrap everything up. Thank you so much for your time - I've really appreciated it, and really appreciate your insights on IR and ML. We'll get those links from you to put in the show notes.

DA: You’re very welcome. Thank you.

MC: Really appreciate you joining us. We're going to be back on Monday the 29th of June, as we always are on Monday mornings. To get the show notes, all the links to everything we've just spoken about, and Dawn's website, you can go to search.withcandour.co.uk. As usual, if you've got any feedback, do drop us a line on social or email; if you have questions - we do Q&A sessions sometimes on this podcast - we'd love to hear from you. And of course, whatever platform you're on, if you do enjoy the podcast, please do subscribe, and I hope to see you next week.
