By Tom Anthony
We are nearing a point where the convergence of several related technologies, combined with their improving accessibility (in both infrastructure and cost), means we are not far from some big disruptions in local search. People will expect search results far more specific to their current context than ever before… and they'll get them. I've put together a simple and fairly typical story to illustrate some of the technologies (see the breakdown after the story).
A Search Story
Imagine someone who needs to pick up a gift for a friend of hers; she is wandering through London and searches for ‘jewellery shop’ via her phone as she walks. She gets a bunch of results for stores nearby, but isn’t happy and so refines her original voice search by simply speaking ‘show those with 4 stars or more’, and gets a subset of her original results a moment later. She is still unsure, so jumps on a rental bike and heads towards Oxford Street. Her phone recognises she is now cycling and updates the results for a wider radius.
She checks her results on her watch at some traffic lights, decides the top result looks great, and taps it. A short while after she starts pedalling again she feels her watch vibrate on the left side, and turns left as she reaches the next intersection. She follows the haptic feedback from her watch for a couple more turns before parking her bike near Regent Street, when it indicates she has reached her destination.
Macy's is already deploying iBeacons in its stores.
She walks into the store and knows she is looking for bracelets, but isn't sure where they might be. She pulls up Indoor Streetview on her phone, gets an instant map of the store, and sees she needs to head upstairs and to the back of the store. As she goes up the elevator she sees an ad for the store's own app, so she downloads it over the store's free in-store wifi and opens it up to see what offers might be on.
As she heads out onto the floor she is now too deep into the building and has lost her GPS signal, but by now the store's app has opened up and is using the in-store beacons to guide her to the bracelets with perfect accuracy. She browses a bit and really likes a couple of the bracelets she sees, but can't decide between them and decides to mull it over.
On the way to catch the train home, her phone buzzes to let her know that the electronics store she is nearing has the watch she was looking to buy as a gift for her boyfriend. She'd searched for the watch several times over the last few days, so her phone had set up a passive search.
Later on that evening, the store’s app (knowing that she’d been in the store) throws up a voucher code for her to get a discount on their website. She decides to go ahead and take another look, so opens up the site and eventually makes up her mind and buys a bracelet using her voucher.
The Future is Now
All the technologies in this story already exist, and almost all are already available to consumers (you'll need to wait until February to get an Apple Watch with haptic feedback), so nothing in this story should be particularly surprising. The real impact will come from two things:
- All the technologies involved reaching widespread coverage.
- Consumers' familiarity with these technologies and their expectations of them.
Once the technologies are widespread and people have acclimatised, there is a lot of synergy between the various elements, and I believe we'll see a sharp uptick in how dramatically they affect searcher behaviour (which will in turn affect how businesses deploy these technologies).
I've discussed previously how we've noticed a trend: people who search on a mobile phone expect that Google/Siri/whatever will use not only their explicit search phrase to give them relevant results, but will also supplement their query with implicit aspects based on their context (see this post for more discussion on that). A simple example is someone searching for a phrase such as 'breakfast' as the sole phrase. Not that long ago such a search would've seemed crazy, but now we know that Google will understand we are on a phone and will infer what we want from our context (see this video for an extension of this).
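The idea of supplementing a bare query with implicit context can be sketched roughly as follows. This is purely illustrative; the function, field names, and the implied-intent heuristic are all assumptions, not any real search engine's API:

```python
# Hypothetical sketch: augmenting a bare query ("breakfast") with implicit
# context signals the device already knows, before the search is run.

def augment_query(query, context):
    """Combine the explicit search phrase with implicit context signals."""
    augmented = {
        "phrase": query,
        "location": context.get("location"),      # e.g. GPS coordinates
        "local_time": context.get("local_time"),  # e.g. "08:15"
        "device": context.get("device"),          # phone vs desktop
    }
    # A bare query like "breakfast" on a phone at 8am implies something
    # like "places serving breakfast near me, open now".
    if augmented["device"] == "phone" and augmented["location"]:
        augmented["implied_intent"] = "find nearby, open now"
    return augmented

result = augment_query("breakfast", {
    "location": (51.5074, -0.1278),  # central London
    "local_time": "08:15",
    "device": "phone",
})
print(result["implied_intent"])  # find nearby, open now
```

The point is simply that the single word the user types is only a fraction of the query the engine actually answers.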
There is no reason to believe this trend won't continue, or that people won't come to rely on further aspects of context and other technologies to augment their searches. There is also no reason to suspect that the proliferation of the technologies in this story won't continue in the same fashion it has been. With that in mind, let's take a look at what happened.
Breakdown of the Technologies
So what happened during this search, and what technologies were involved? Let's break it down.
- A search for 'jewellery shop' without any intent words ('buy', 'find', 'nearest'), location ('cambridge'), or qualifiers ('luxury', 'cheap'). The searcher expected both intent and location to be understood implicitly from her context.
- Search refinement: a second search ('show those with 4 stars or more') which builds on the first, rather than being a completely new search. Google calls this 'conversational search', and with ongoing improvements in machine learning and in the data it has access to, it seems certain to get bigger and bigger (especially as wearable devices take off and users acclimatise to the concept). Likewise, the machine-learning-powered Google Hummingbird update is going to drive further improvements to this, so we're also going to see it become a lot more powerful.
- Mode of transport as context. Android already has an activity recognition API which can recognise whether the user is on foot, in a car, or on a bicycle. When we are doing a local search, it makes far more sense to consider …
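The 'show those with 4 stars or more' refinement from the story can be modelled as a filter applied to the previous result set rather than as a fresh query. A minimal sketch of that idea, where the result structure and the threshold parsing are my own assumptions for illustration:

```python
import re

def refine_results(previous_results, refinement):
    """Apply a conversational refinement to an earlier result set.

    Understands refinements of the form 'show those with N stars or more';
    anything else leaves the results untouched.
    """
    match = re.search(r"(\d+(?:\.\d+)?) stars? or more", refinement)
    if not match:
        return previous_results  # refinement not understood; leave as-is
    threshold = float(match.group(1))
    return [r for r in previous_results if r["rating"] >= threshold]

shops = [
    {"name": "Gem & Co", "rating": 4.5},
    {"name": "Budget Bling", "rating": 3.2},
    {"name": "Oxford Jewels", "rating": 4.0},
]
refined = refine_results(shops, "show those with 4 stars or more")
print([s["name"] for s in refined])  # ['Gem & Co', 'Oxford Jewels']
```

The key design point is that the refinement carries no meaning on its own; it only makes sense against the state of the previous search, which is what makes the interaction 'conversational'.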