Measuring the impact of Beta
August 16, 2017
In digital transformation, more often than not we iterate an existing service; sometimes, however, we have to launch something new. Normally this is when the software changes (a new CMS or major changes in coding practices) or when the service hasn’t been built before.
Following the GDS methodology, a new service progresses through a digital lifecycle (discovery, alpha, beta, live), and as analysts our challenge is measuring the impact throughout that lifecycle.
NB measuring the impact through iteration should be pretty easy to demonstrate if you have an established measurement plan in place. A simple addition or iteration of a feature should reflect positively or negatively on a KPI already in place.
The Lifecycle Begins
Discovery and Alpha phases are naturally where conversations begin about what to track and what to measure. Tracking itself is a different matter, as more often than not nothing has been built yet; but you can start to measure user research findings (e.g. task completion rate, satisfaction…).
Beta is normally where tracking and measurement begin in earnest. Working with stakeholders you can understand the user needs and thus what to measure; working with developers you can deploy new or additional tracking. That leaves you with a goal to measure against and access to the right data.
So you have your goal and you have data; but which KPI or metric should you choose to measure impact? Unlike the aforementioned iteration, where the KPIs and metrics are already defined, genuinely measuring the impact of something new is very hard.
Before I run through what you should or should not report on, I thought I would expand on some common challenges you might also run into.
Stakeholder demands - Can you make a comparison?
Stakeholders logically will (and should) demand to see a positive impact from a Beta site. Naturally, to do this they will try to make a comparison with the existing site or a rival live site. For analysts this is one to avoid at all costs, because the simple answer is that you cannot compare.
The reasons will become clear below, but whilst you can technically report the numbers, the reality is you are comparing two wildly different sites: two sites built differently, two sites that could well be aimed at different audiences, two sites that might have different goals to address. In a nutshell, you have nothing comparable to report on that isn’t biased in favour of one of the two digital offerings.
Time frames - Aren't they too short?
In a Beta phase, it is all about learning and importantly validating the ideas and offerings you have created.
A Beta phase can last from a few weeks to a few months, maybe even a year. But can you monitor anything worthwhile given such short timescales? Well, it depends on what and why you are measuring in the first place. Learning and validating is why you should measure, regardless of the length of the Beta phase. For example, focus on task completion tracking or engagement tracking to validate a feature rather than recording growth metrics of the Beta site in general.
What you should avoid reporting on
Depending on how you launch a Beta site, significantly fewer people will visit it than the current website. You might have targeted a specific audience who know about the site (for example those on a mailing list, or those who visit a certain referral page within a site). This means footfall is down. Only if this were similar to an A/B test, where traffic is split equally between the live site and the Beta site, would aggregated session-based metrics become relevant.
Pageviews are triggered by hit-based activities. As well as the initial page load, other hits include clicks on a carousel, in-page sign-ups, email subscriptions, likes/shares and many more. So depending on the features on a page, recorded activity can differ despite it being one page. A Beta site is often different from the live site: it will have fewer, more or different features, and thus widely fluctuating pageviews (per session or per user). Throw in the purpose of the user journey (live website to buy, investigate, share…; Beta website to nosey, compare, see what’s new…) and you now add differing traffic volume (number of pages viewed) into the equation. In general you cannot compare pageviews, either in total or per page, as the pages in question are not a valid comparison.
Bounce rate is not a KPI. Please don’t treat it as one. It has its place, but bounce rate, like time online, is very subjective. A user could get what they want on a page and leave; a user might not get what they want and leave. We have no idea whether they were happy or not. We can also easily bias bounce rate by adding the hit-based features mentioned earlier: a site with more features means more “events” are tracked and fewer bounces are recorded. Sites with differing features will have different bounce rates. So don’t use this metric as a comparison, and take caution when monitoring it as new features are added.
NB You can stop certain events from affecting the bounce rate (see Non-Interaction Events). This is useful when you want to record events but feel they shouldn’t impact bounce rate.
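As a sketch of how that flag works, here is a non-interaction event hit built for the (then-current) Universal Analytics Measurement Protocol. The property ID and client ID are placeholders, and the helper function is hypothetical, not part of any library.

```python
# Sketch: building a Universal Analytics Measurement Protocol event hit
# with the non-interaction flag ("ni") set, so the event is recorded
# without affecting bounce rate. The tid/cid values are placeholders.
from urllib.parse import urlencode

def build_non_interaction_event(category, action, label=None):
    """Return a Measurement Protocol payload string for a non-interaction event."""
    payload = {
        "v": "1",             # protocol version
        "tid": "UA-XXXXX-Y",  # placeholder property ID
        "cid": "555",         # placeholder anonymous client ID
        "t": "event",         # hit type
        "ec": category,       # event category
        "ea": action,         # event action
        "ni": "1",            # non-interaction: don't affect bounce rate
    }
    if label:
        payload["el"] = label
    return urlencode(payload)

# Example: record a carousel auto-rotation without counting it as engagement
hit = build_non_interaction_event("carousel", "auto-rotate")
print(hit)
```

The same idea applies in the JavaScript trackers: set the non-interaction option on the event and it no longer turns a single-page visit into a non-bounce.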
Similar to bounce rate: is a short visit better or worse than a long visit? We really don’t know, as it depends on what the user is trying to do. For example, are they browsing product pages or are they just after a telephone number? Visit length in these cases will differ significantly, so try to avoid this as an aggregated metric.
What you can report on...
Instead of top-line aggregated metrics such as pageviews, sessions or users, let’s monitor metrics about loyalty.
I would suggest visitor loyalty metrics, and trying to grow these during Beta and beyond. Not necessarily the returning-visitors metric (returning once doesn’t necessarily make a loyal user), but users who engage, who keep returning multiple times (within a certain time period), who advocate and who convert (whatever we determine “convert” to be). Simply define your “loyal” or “engaged” user, create a segment and then report on that (be that a percentage of loyal to non-loyal users, or a segmented view of other metrics).
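To make that concrete, here is a minimal sketch of defining a loyalty segment from per-user visit counts. The threshold of three visits and the sample data are assumptions for illustration, not a recommendation.

```python
# Sketch: defining a "loyal" user segment from per-user visit counts.
# "Loyal" is assumed here to mean 3+ visits in the reporting window;
# the data is illustrative, not from a real analytics export.
visits_per_user = {
    "user_a": 1,
    "user_b": 4,
    "user_c": 7,
    "user_d": 2,
    "user_e": 3,
}

LOYALTY_THRESHOLD = 3  # visits in the period that count as "loyal"

loyal = {u for u, v in visits_per_user.items() if v >= LOYALTY_THRESHOLD}
loyal_share = len(loyal) / len(visits_per_user)

print(f"Loyal users: {sorted(loyal)}")
print(f"Loyal share: {loyal_share:.0%}")  # 3 of 5 users -> 60%
```

In practice you would build the equivalent segment in your analytics tool and report the loyal-to-non-loyal split, or view your other metrics through that segment.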
Instead of time online, let’s look at time to complete a task.
Let’s get specific and focus on tasks users will do on your website. These could be as simple as a subscription sign-up, or more complex things such as account sign-up, purchase or form submission. Measure the time to complete these tasks/goals and direct efforts to reduce it.
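One way to derive time-to-complete is to pair start and finish events per user. The event names and timestamps below are hypothetical, standing in for whatever your tracking emits.

```python
# Sketch: deriving time-to-complete for a task from start/finish events.
# Event names and the sample data are hypothetical.
from statistics import median

# (user, event, unix_timestamp) for a subscription sign-up funnel
events = [
    ("u1", "signup_start", 100), ("u1", "signup_complete", 160),
    ("u2", "signup_start", 200), ("u2", "signup_complete", 290),
    ("u3", "signup_start", 300),  # u3 never completed
]

starts, durations = {}, []
for user, event, ts in events:
    if event == "signup_start":
        starts[user] = ts
    elif event == "signup_complete" and user in starts:
        durations.append(ts - starts.pop(user))

print(f"Median time to complete: {median(durations)}s")  # median of [60, 90]
```

The median is usually a safer summary than the mean here, as a few abandoned-then-resumed sessions can produce extreme durations.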
Instead of pageviews per visit or bounce rates, let’s look at completion rates for key tasks.
Now we know the time to complete a task, let’s back this up with the completion rate per task. Now we have real engagement detail: we will know whether the user completed the task, how long it took them, and variables in between (such as where they stopped in the task or where they spent the longest time). We can also use this metric to derive a “cost per transaction”, showing stakeholders a much-loved monetary figure demonstrating service value, more of which…
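The arithmetic behind both figures is simple; a sketch with invented task counts and a made-up running cost:

```python
# Sketch: completion rate per task plus a simple "cost per transaction".
# The task counts and the running cost are illustrative figures only.
tasks = {
    # task: (users_who_started, users_who_completed)
    "subscription": (400, 320),
    "account_signup": (250, 150),
    "form_submission": (120, 90),
}
monthly_running_cost = 5600.0  # hypothetical service cost

total_completions = 0
for task, (started, completed) in tasks.items():
    rate = completed / started
    total_completions += completed
    print(f"{task}: {rate:.0%} completion")

# cost per transaction = running cost / completed transactions
cost_per_transaction = monthly_running_cost / total_completions
print(f"Cost per transaction: £{cost_per_transaction:.2f}")
```

Tracked over time, a falling cost per transaction is the kind of monetary trend stakeholders can act on, even while the Beta audience is small.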
Instead of total goals/sales, let’s look at value
In revenue terms, yes, we need to know total sales, but we can glean more information by knowing the economic value per user (granted, this might be hard to do if you don’t sell anything on the web). In Google Analytics you can measure value per page, per event or per user/session. Very quickly you will see who is adding the most value. This could be a segment such as search traffic or returning users; it could be people who visited certain landing pages. You can then target these “value” users and take steps to increase the value they generate (or potentially spend time growing that segment).
You can measure the impact of a Beta site. Avoid the “comparison” metrics and focus on the micro-moments that validate a user need and thus add value. Once your site moves from Beta to live, these metrics will often carry on, giving you a much-needed baseline to record against.