
A Glossary of Beacon Interaction Design

Let's slice up beacon interactions

Okay, so we know what beacons are and what they do - great. The next question is, what can we do with them? What kinds of interactions can we craft?

That's the question my collaborator Nick Urban and I decided to pick apart. We sliced beacon interactions up, to show in a clear and non-technical way what they are really made of. In this article we will walk through the slices.


We think this makes it easier to evaluate the technology in terms of its possibilities, rather than anchoring our thinking around the ubiquitous retail/coupon scenarios.

We hope this will be useful for UX or product people, technical designers and developers, hardware entrepreneurs with crazy ideas - basically anyone who wants to think through or generate proximity-based interaction scenarios.

How shall we slice this up?

We're going to step back and look at beacon platforms from a bird's eye point-of-view, and ask what are the parts we can pick and choose from to make up a beacon interaction. We'll cover four slices:

  • Slice #1: Device Communication
    How will our beacon devices communicate with each other?
  • Slice #2: Representation
    Who (or what) will each beacon device represent?
  • Slice #3: User Context
    What will be our user's mental context at the moment each interaction occurs?
  • Slice #4: Proximity Response
    What type of change in proximity will trigger each response?

Because these are smart devices, any beacon interaction can be sliced apart in these four ways. But before we zoom out to that bird's eye view we need to cover a little grounding, so we have a shared understanding of the components in play. We need a slice zero.

Slice Zero: Clarifying the components

It's been said many times before: 'beacon' is an overloaded term. Let's unload it into the different contexts in which it is used:

Beacon

A purpose-built little box with bluetooth capability, sitting there broadcasting the fact of its existence every few hundred milliseconds. Plain old beacons lead a lonely life, never really doing anything other than continually firing out these little chirps.

However, this allows them to be discovered by beacon detectors, which can use the signals to continuously estimate their distance from each beacon.
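As a rough illustration of how that distance estimate works: detectors typically convert received signal strength (RSSI) into metres using a log-distance path-loss model. The sketch below is illustrative, not any vendor's SDK - the function name and default values are assumptions, and real SDKs calibrate per beacon model:

```python
def estimate_distance(rssi, tx_power=-59, path_loss_exponent=2.0):
    """Very rough distance estimate (in metres) from a beacon signal.

    rssi: measured signal strength in dBm (more negative = weaker).
    tx_power: calibrated RSSI expected at exactly 1 metre.
    path_loss_exponent: ~2.0 in free space, higher indoors.
    """
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exponent))

print(estimate_distance(-59))  # at the calibrated power: 1.0 metre
print(estimate_distance(-79))  # 20 dB weaker: about 10 metres
```

In practice RSSI jitters considerably, so real SDKs smooth many readings before reporting a proximity value.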

Beacon detector

Usually a smartphone, tablet, or mini-computer (such as a Raspberry Pi) - any device that can scan for beacons and detect how far away they are.

Detectors are actually very similar to the plain old beacons above - just a physical device with bluetooth capability. The difference is that a detector's bluetooth hardware is attached to a smart device - one with enough horsepower to respond in some meaningful way.

Any beacon detector can also be programmed to broadcast, making it what's called a virtual beacon.

Beacon SDK

To a software developer, SDK means Software Development Kit. Here it refers to the code libraries developers use in their apps to scan, read proximity values, or broadcast as a virtual beacon.

Developers can mix and match the power of different SDKs to create amazing apps. For example, you can imagine using proximity detection from a beacon SDK to trigger a location-specific experience built with an Augmented Reality SDK.

Beacon platform

All of the above together, plus usually a bit more. This is for when you are done experimenting and want to start building something to go into production at scale.

Vendors have emerged with competing platforms. They usually offer physical beacons in bulk, content & media management services, systems for configuring and deploying beacons, and a vendor-specific SDK.

To give an example, Qualcomm's Gimbal platform comes with a range of different types of beacons for different physical environments. The SDK comes with features such as geofencing, analytics, push notifications, and end-user customizable privacy control.

That's it for terminology. Now that we've clarified the components, we're ready to dive in with our first interaction slice!

Slice #1: Device Communication

In any beacon interaction, one device must be acting as broadcaster and the other as scanner. But since some devices can broadcast and detect simultaneously, we have three possible modes of device communication:

Broadcast only
  • This is usually a physical beacon device, such as an Estimote or similar
  • However, a smartphone, tablet, or computer app could be programmed to broadcast only, as discussed above
  • Imagine a beacon next to a museum exhibit, or a smartphone or wearable device broadcasting (secure) medical information for ambulance crews when the owner is unconscious
Scan only
  • Usually thought of as a smartphone or tablet, but it can be any computing device with bluetooth
  • Imagine a smartphone scanning the museum exhibit above, or a mini-computer built into the ambulance which alerts the crew to the medical data
Broadcaster / Scanner
  • A device which is doing both simultaneously
  • Imagine a smartphone that is scanning for special offers in a retail store while simultaneously broadcasting which offers the user has already collected, so that sales clerks are aware

Usually the only thing scanners know of each broadcaster is the distance and a unique identifier. But notice in the examples above we are talking about 'broadcasting (secure) medical data' - what does that mean?

Because scanners are smart devices, simple proximity can act as a trigger for all the rich things smart devices already do: network communication, video, games, GPS etc.

So in the example, the patient's broadcaster will alert the ambulance's scanner to the existence of medical data. The scanner then establishes a network connection, perhaps bluetooth or WiFi, and downloads the secure data.
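That trigger-then-hand-off pattern can be sketched in a few lines. Everything below is illustrative - the beacon identifier, handler names, and the idea of a `fetch_medical_record` function are assumptions standing in for whatever your SDK and network transport actually provide:

```python
# Illustrative sketch: proximity is only the trigger; the payload
# travels over an ordinary network connection afterwards.

HANDLERS = {}

def register(beacon_id, handler):
    """Map a beacon identifier to the action it should trigger."""
    HANDLERS[beacon_id] = handler

def on_beacon_ranged(beacon_id, distance_m):
    """Called by the scanning layer each time a chirp is received."""
    handler = HANDLERS.get(beacon_id)
    if handler and distance_m < 3.0:  # only react when reasonably close
        handler()

def fetch_medical_record():
    # In a real app: open an authenticated bluetooth/WiFi connection
    # and download the encrypted record. Here we just note the trigger.
    print("downloading secure medical record")

register("patient-wearable-uuid", fetch_medical_record)
on_beacon_ranged("patient-wearable-uuid", 1.2)
```

The key design point is that the beacon layer carries almost no data itself; it simply tells the smart device which rich capability to invoke, and when.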

Slice #2: Representation

Broadcasting and detecting doesn't hold much value in itself. Value is attained when either side of the communication represents something.

People typically think of a phone user detecting fixed-location beacons, which is one arrangement. But there's no reason we can't flip that - a user can wear a broadcaster and a fixed-location mini-computer can detect them.

There are all kinds of possibilities. So what can beacon devices at either end of a scan/broadcast communication represent?

A person - mobile or stationary (worn, kept on person, left unattended)
  • A smartphone
  • A smartwatch
  • Smart glasses or eyewear
  • Any other device which the person wears
A thing - mobile or stationary (put down, parked)
  • A basketball
  • A car
  • Stock in a warehouse
  • A blood-glucose meter
An area (small-medium) - immobile
  • The shoe section of a store, or a specific retail display
  • A meeting room
  • A restaurant table
  • A small park
An area (medium-large) - immobile
  • A shopping center
  • An office block or building
  • A sports arena or park
A swarm
  • Each beacon in a swarm in turn represents a smaller area, or a thing

Beacons can be deployed individually or in groups. Imagine a busy, high-volume cafe with a beacon under each table. In this scenario, each beacon would represent a table, and collectively the swarm would represent the entire seating area.

This would allow customers to sit down and order from their phones without waiting. When they are ready to order, the app asks customers to place their phone down on the table. The app detects the nearest beacon (under the table) and sends the order to the cafe's ordering system.
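The 'nearest table' step of that scenario is simple to sketch. Assuming the app has already converted each table beacon's signal into a distance estimate (the table names below are illustrative):

```python
def nearest_beacon(distances):
    """Given a mapping of beacon id -> estimated distance in metres,
    return the id of the closest beacon."""
    return min(distances, key=distances.get)

# Readings while the phone sits on table 2:
readings = {"table-1": 2.4, "table-2": 0.3, "table-3": 1.8}
print(nearest_beacon(readings))  # table-2
```

Because the phone is resting directly on the table, its beacon is dramatically closer than any neighbouring one, which makes this simple minimum reliable despite signal noise.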

Slice #3: User Context

So far we've been thinking mostly about the role beacon devices play in our interactions. But what is our user doing?

Did they initiate an action there and then, by deliberately holding their phone to a 'touch-to-pay' terminal? Or do they expect an interaction they signed up for previously?

In other words, what is our user's context at the moment of interaction?

I intend an outcome and I initiate it
  • Touch to pay, touch to login, touch to order
  • A tour of a museum in which you are carrying your phone from exhibit to exhibit and an audio stream updates with contextual information
I intend an outcome and I do not initiate it
  • Subscriptions to notifications of subject interest, e.g. when browsing an exhibition
  • Notifications to family members when you take out the car
  • Pay on exit
I do not intend an outcome, and I am unaware of the interaction
  • Analytics
I do not intend an outcome and I am aware of the interaction
  • Gamification / rewards
  • Context-aware coupon delivery
  • Notifications of recommended items, e.g. when shopping
  • Customer stealing, free-riding

Remember that when we are talking about intention and awareness here, we are talking about them in the moment of a specific interaction. Generally speaking, users should have expressed their broad consent to allow beacon interactions - using your app's privacy configuration options among other things.

Imagine a user opts in to allowing your app to collect proximity information. Broadly speaking they know about the data exchange, but they will still be surprised at the specific moment they are offered 20% off for visiting a particular aisle.

The same can be said for the passive context of analytics. Users aren't aware of each individual interaction, but it is important they are aware in the broad sense and willingly opt-in to these services.

Slice #4: Proximity Response

As well as discovering local beacons, scanners are able to read a proximity value. This number indicates the rough distance from the broadcaster, and is updated with every chirp as movement occurs.

The question arises - what kind of changes in proximity shall our scanners respond to?

Let's not forget that although the user could be carrying the scanner, it could just as easily be flipped. Since the broadcaster or detector could be on either side of any interaction, I'll use the generic term beacon device in the explanations below:

Touch
  • Imagine a user touching a phone to pay, or touching a phone to an Apple TV to log in to their account
  • By its nature, touch is associated with the 'I intend an outcome and I initiate it' user context
Boundary change
  • Imagine a user moving from an artificial far range over a line into near range, perhaps of a particular sales display in a retail store
  • This could signify a shift from marginal interest into more specialized interest
  • However, what if they cross back out of the boundary quickly afterwards? Boundary changes must always be interpreted by software
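One common way software interprets boundary changes is to require several consecutive readings on the new side of the line before firing an event, so a user who briefly dips across the boundary doesn't trigger anything. A minimal sketch, with made-up threshold values:

```python
class BoundaryMonitor:
    """Fire 'enter'/'exit' only after several consecutive readings
    agree, filtering out users who briefly dip across the line."""

    def __init__(self, threshold_m=2.0, required=3):
        self.threshold = threshold_m  # the artificial 'line to cross'
        self.required = required      # consecutive readings needed
        self.state = "far"
        self.streak = 0

    def update(self, distance_m):
        """Feed one proximity reading; return 'enter', 'exit', or None."""
        candidate = "near" if distance_m < self.threshold else "far"
        if candidate == self.state:
            self.streak = 0           # reading agrees with current state
            return None
        self.streak += 1              # reading disagrees; count it
        if self.streak >= self.required:
            self.state = candidate
            self.streak = 0
            return "enter" if candidate == "near" else "exit"
        return None
```

With `required=3`, a single noisy reading on the wrong side of the line is ignored; only a sustained change of proximity produces an event.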
Proximity gradation
  • Like boundary changes, but without artificial 'lines to cross'
  • Imagine the same retail store example, but instead a measure is taken with each 'chirp', of the user's increasing or declining interest in the subject
  • This interaction can be tricky as signals are not always smooth
  • If a scanner can read proximity gradations from multiple fixed-location broadcasters, then it should be able to calculate its position
  • Imagine an indoor equivalent of GPS
  • However, this is susceptible to the same problems of signal noise, and is at an immature stage for the moment
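The position-calculation idea can be sketched with basic geometry. Given three fixed broadcasters at known positions and a distance estimate to each, subtracting the circle equations pairwise leaves a small linear system. This is a simplified sketch under ideal assumptions - real readings are noisy, so production systems smooth their inputs and fit over many broadcasters rather than solving exactly for three:

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """Estimate (x, y) from three fixed broadcaster positions p1..p3
    and estimated distances d1..d3, by linearising the three circle
    equations (x - xi)^2 + (y - yi)^2 = di^2."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting one circle equation from another cancels the x^2 and
    # y^2 terms, leaving two linear equations in x and y:
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1  # zero if the broadcasters are collinear
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Broadcasters at three corners; scanner actually at (1, 1):
x, y = trilaterate((0, 0), 2**0.5, (4, 0), 10**0.5, (0, 4), 10**0.5)
print(round(x, 2), round(y, 2))  # 1.0 1.0
```

With perfect distances this recovers the exact position; with real, jittery RSSI-derived distances the solution wanders, which is why indoor positioning based on this technique is still maturing.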

Pulling it all together

What I like about this way of slicing things up is that it gives you new ways of exploring interactions. For example, you can take a proposed interaction, slice it up, and ask what happens if you modify one of the slices.

You can also use it as a starting point to build your own interactions, by picking items from each slice and thinking about applications that could work based on a given combination.

For example, how about an expected boundary change between a wearable and a moving thing? That could be a hotel-door activation system, or a ski lodge that rewards 15 runs on a slope with a free lunch.

An addition to an existing landscape

In many cases, proposed beacon interactions could be fully or partially implemented with existing technologies. In those cases, questions should be asked about cost, efficacy, and simplicity - sometimes it will still make sense to build the solution around the central point of a beacon platform.

Beacons represent an addition to a palette of options for crafting interactions. However, the addition they provide is not the ability to do something new. Instead they offer new contextual trigger points for things that devices already do.

Hopefully this article helps clarify our thinking about how and when those trigger points might occur.

Read Part One: What is iBeacon?

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
