"Dear Avinash": Be Awesome At Comparing KPI Trends Over Time

I get a lot of emails with questions, at least 10 to 15 each day. Some are easy, others hard, and some mind-boggling (due to their length, complexity, or audacity!).

"Dear Avinash" is an occasional series where I share some of my answers that might benefit the greater ecosystem. I'll only share the questions that might be universal, and ones where the source would be impossible to identify (to preserve confidentiality).

The first one covered career advice for Stressed "Agency Analysts" and Search Robots messing up your life.

In this post we'll cover a topic near and dear to all our hearts: comparing trends of our Key Performance Indicators, and two specific strategies you can use to drive action. One focuses on presenting data; the second is an approach to analyzing trends.

Let's go. . . the question first. . . . .

We have an ongoing debate and I was wondering if you can shed some light on the issue. We make some reports that track month over month change and others with year over year change.

One argument is that if there is a holiday in a given month or new products come out in that month then month over month will give a distorted picture. (For many companies the same product types usually come out in the same time period each year.) So year over year is what we should look at.

On the other hand there should (and usually is) always an increase in year over year. And year over year is less actionable. If I want to know what product/page types need more attention, last year’s types are long gone.

I have tried to find some metric that mixes Y/Y and M/M, but to no avail.

Do you have any thoughts on this matter?

I have often advised that the cheapest way to give context to your performance is to use comparisons to other time periods.

Here's me comparing performance of this blog over two years for the Visitors metric:

[Image: Visitors trend, year-over-year comparison]

Overall happiness reigns, I think. [The blue line is '07 and red '08.]

And that in some sense is the catch.

The best that a comparison of historical trends can do is give you some initial context (yea or nay), and the next best thing is that it is a good way to raise an initial set of questions. "Hmmm, what happened over there?" "Why don't the peaks line up?" [Digg effect]

But, as is pointed out in the email question, it is the context around changes that makes things more valuable. The graph itself won't answer the questions "why is that up or down" or, perhaps more importantly, "is our current performance better than last year's".

Product sales for you might peak each Thanksgiving (in the US). But if Thanksgiving this year was $15 mil revenue and last year was $10 mil then is that good? More importantly, is that good enough?

This is where tribal knowledge comes into play. What is different about this year and last (or this month and last)? Have you doubled the team? Did you have free shipping this year? Did you spend a lot on AdWords this year? Or did you just hire me to do consulting for you this year at $10 per hour? Etc. etc.

Because of that it would be nearly impossible to come up with a perfect historical trend that will be "clean". So the first thing is to realize that and then not expect too much :), except that comparing trends is a good thing and that its purpose is to just raise questions.
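To make the mechanics of the two comparisons in the question concrete, here is a minimal sketch in plain Python. The 24 monthly visitor counts are invented for illustration; the point is only how the month-over-month and year-over-year deltas are computed.

```python
# Minimal sketch: computing month-over-month (MoM) and year-over-year
# (YoY) percent changes for a monthly KPI series. The visitor counts
# below are made-up illustrative numbers, not real data.

def pct_change(current, prior):
    """Percent change from the prior period to the current period."""
    return (current - prior) / prior * 100

# 24 months of a hypothetical "Visitors" KPI, oldest first.
visitors = [100, 110, 120, 90, 95, 130, 140, 135, 150, 160, 155, 170,
            120, 135, 150, 110, 120, 160, 170, 165, 185, 200, 190, 210]

# Month over month: compare each month to the one right before it.
mom = [pct_change(visitors[i], visitors[i - 1]) for i in range(1, len(visitors))]

# Year over year: compare each month to the same month 12 periods back.
yoy = [pct_change(visitors[i], visitors[i - 12]) for i in range(12, len(visitors))]

print(round(mom[0], 1))   # month 2 vs month 1 of year one -> 10.0
print(round(yoy[0], 1))   # month 1 of year two vs month 1 of year one -> 20.0
```

Note how the same series yields very different stories depending on the comparison window, which is exactly why neither view alone answers "why".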

Next I have two suggestions:

# 1: Collect the tribal knowledge and annotate the graph.

Rather than the graph above, I'll present this one (in this case to myself! :) . . . .

[Image: Visitors trend, year-over-year comparison, annotated]

When I send out the above graph or present it in a meeting, everyone will say "ah ha, we can discount that peak, oh, we did not do as well at that point, and we need to do more of this thing over here." I.e., important, actionable conversation.

So talk to your Marketers, Boss, Cleaning Lady, the dudette you replaced at half her salary! Get the tribal knowledge, paste it in.

One of these days my hope is that Web Analytics vendors will A] Make it easier for us to add the annotations and/or B] Mine other sources and automatically add context / tribal knowledge as Google Trends does today.

[Image: Google Trends comparison of Omniture and WebTrends]

[And one of these days the term "avinash kaushik" will have enough search volume to show up on the top graph! Miraculously it does show up in the 2nd graph above – though it is quite likely that my friends at google are just drawing a "pity line" for me! :)]

But if you are presenting in Excel (or PowerPoint), then consider annotating your data as you go along. That will be fantastic at providing some immediate answers.

# 2: Segmentation to the rescue!

In aggregate, trends can hide insights and hence "dirty" the data. If you want to compare "clean" trends, then your best option is to compare different segments within your data. [In all scenarios segmentation rules!]

For example you could just look at Organic traffic trends. Or performance of email campaigns. Or everyone who comes to your site from Florida. Or number of people who see more than five pages. Or % of Direct (free!) traffic. Or…. You catch my drift.

The benefit of comparing segmented trends is that you are able to go from trying to figure out which of the 1,800 variables is causing an impact to having to investigate just a couple of variables. This means you'll understand cause and effect (what you did and what was the outcome) much faster.
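A toy illustration of that point, with invented segment numbers: the aggregate year-over-year change blends everything together, while the segmented view pinpoints which traffic source moved.

```python
# Minimal sketch: why segmented trends localize cause and effect.
# The monthly visitor numbers per acquisition segment are invented.

last_year = {"organic": 5000, "email": 1200, "direct": 3000}
this_year = {"organic": 8000, "email": 900,  "direct": 3100}

def yoy_pct(segment):
    """Year-over-year percent change for one segment."""
    return (this_year[segment] - last_year[segment]) / last_year[segment] * 100

# The aggregate change blends everything into one number...
agg = (sum(this_year.values()) - sum(last_year.values())) / sum(last_year.values()) * 100

# ...while the segmented view pinpoints the story: organic surged, email dipped.
for seg in last_year:
    print(seg, round(yoy_pct(seg), 1))   # organic 60.0, email -25.0, direct 3.1
print("aggregate", round(agg, 1))         # aggregate 30.4
```

A "+30% overall" headline would hide both the organic win worth repeating and the email problem worth fixing.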

Here is an example. This graph shows, for the same time period as above, the Visitors from Organic Search for the keyword "avinash kaushik". . . .

[Image: Organic search trend for the keyword "avinash kaushik"]

Now the cause and effect can be understood much faster and actionable insights delivered intelligently.

Let's say I did lots of things to drive SEO from Nov 2007 to May 2008. Well clearly it worked, I wrote a lot less content yet my traffic increased very nicely (even better YOY).

So I could summarize that paying Matt Cutts $1.6 million to help me with search engine optimization of my blog's URL stems was a genius investment! [QQ for Matt: Why is my Page Rank still 4? Tears. :]

Do the same for your business. No no no, not hire Matt, segment your trends when you compare them.

One of my other favorites is segmenting out Direct traffic. That is so very cool because Direct traffic (non-campaign, non-search, etc.) is usually less influenced by other things (acquisition-related things you do, places you spend money) and hence serves as a great barometer for the overall health of the site.

[Image: Occam's Razor direct traffic trends]

Are you getting better at getting free traffic? Do people remember your site and just show up? Do you have enough engaging content (in case of pure content non-ecommerce sites) that people return again and again each month? Etc etc.

To me that is a "clean" segment / trend to look at. For your websites there will be others. Dig and poke.

Ok, it's your turn now.

What are your tips when it comes to comparing trends? What things fail spectacularly? What works really well? Do you dread these things or love 'em? Care to share your war stories? What is right about the approach above? What's wrong? We would love to have your thoughts.




  1.

    Great Post Avinash

    I think the premise of your reader's question is very valid. How does one really know what their efforts influenced vs. seasonal trends? And in most circumstances this comes down to: how do we find a control? A concept I've seen very few online marketers think through. Does the M/M or Y/Y trend rise because of that great campaign, or because your closest competitor just spent 10x buying lots of TV ads?

    In general I've seen people use competitive data sets as a control to normalize their data. Some clients have used regional test and control to run in-market experiments. Another approach we recently used was to look at month over month data and use yearly data to normalize trends. All have their pros and cons, but the fact that this question is being brought up shows web analytics is starting to grow up into business intelligence.

  2.

    Everyone that I have encountered uses year on year comparisons. I have used it a lot myself, it is simple to get the data for and managers like it. However I find it of little use in delivering insights. Often the % change against last year stays a consistent number or, as was mentioned in the email, the picture is distorted by product releases, holidays or marketing.

    What I believe should be happening (if people have enough time in the day) is to forecast website traffic and indeed any metrics based on the current situation, the trends from previous years and adjusting for marketing, product releases, known seasonality, etc (this was one of my main tasks in a former position). This forecast is then the best possible comparison to use in understanding the performance of your website in that it takes all known factors into account.

    Any difference can be due to poor forecasting, but it allows you to drill into the reasons why traffic was different to the expected level. Segmentation really helps here, as described, by being able to understand which traffic source led to any difference. Another benefit of forecasting is that it really focuses your attention on what can impact traffic. By the end, I was comparing the number of Mondays or weekends in each month as a traffic impact.

    Basically a good forecasting model would mean being able to say to management that while the number of visits for the month was down 2.5% against the same month last year, this was actually a stronger performance than forecast (+1.2%). While spend on ppc was down 5.0%, the cost per visit dropped 6.2% meaning an overall increase in visits from this source of 1.3%. This partially offset the expected seasonal decline in visits from direct sources due to this month containing 8 days of school holidays this year while there were only 5 days of school holidays during this month last year.
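Peter's forecast-versus-actual arithmetic can be sketched in a few lines. The visit counts below are invented to mirror his example: down 2.5% year over year, yet up about 1.2% against a forecast that already priced in seasonality.

```python
# Minimal sketch: judging performance against a forecast rather than
# raw year-over-year. All figures are invented to mirror the example
# in the comment above.

last_year_visits = 100_000
actual_visits    = 97_500    # down 2.5% year over year
forecast_visits  = 96_343    # forecast already accounts for seasonality

yoy_change  = (actual_visits - last_year_visits) / last_year_visits * 100
vs_forecast = (actual_visits - forecast_visits) / forecast_visits * 100

print(round(yoy_change, 1))   # -2.5: looks bad in isolation
print(round(vs_forecast, 1))  # 1.2: actually beat expectations
```

The same actual number reads as a failure or a win depending on whether the baseline is last year or a forecast that absorbs the known seasonal factors.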

  3.

    Apologies if this is a little bit of a tangent, but this is exactly why testing can be so powerful. It's always hard to say how much a change in traffic, sales, etc. was a result of your action and how much was a result of outside factors. If you suspect an influencing factor, testing that factor simultaneously against a control (or multiple variations) can really help remove those contaminating influences.

    On the other hand, I think your advice is great, especially for the inevitable after-the-fact analyses. It's also a great case for tracking major changes to your websites and being aware of outside influences and when they happen. Too many people leave that to guesswork.

  4.

    Couldn't agree more, re the need to label past events and the need to segment. Without segmenting, it's impossible to even know whether level performance is due to nothing changing or due to one segment ramping up sharply while another does a nosedive. Any attempt at improvement would be ignorant and hopeless!

    I'll try to dig out some war stories later. Thing is, once we "got it" about segmenting the analytics, it became second nature, so the accidents we'd made are in the distant past.

  5.

    Very interesting. I do like to think of this as having different layers. I mainly analyze the PPC for my company, but the same principles can apply to other items. The first layer is a year to year analysis. Or should I say this month last year to this month this year.

    Generally speaking, certain metrics should go up from year to year–conversions and revenue being the major things I look at. Starting at this point you can see if the numbers have gone up or gone down.

    If you have a number of years of comparison, you can see if your % increase is more or less than in the past, and you can move further to compare the month to month increase or decrease in the previous year to the month to month increase or decrease in the current year.

    Anyway, once you have that info you can move down into your data, looking externally for things that may affect you and internally for the same. For me this involves a lot of looking at individual campaigns and seeing how they've changed.

  6.
    Rahul Deshmukh says

    Avinash – On your first question, a useful way to do YoY compares is alignment for dates. For example, Oct 1 2008 is a Wed and should be compared to Oct 3 2007 which was also a Wed. This will take care of the day of week effect and make the compares more actionable as you are teasing out one variable.

    – Rahul
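Rahul's alignment rule works out to stepping back exactly 52 weeks (364 days) instead of one calendar year, so a Wednesday is always compared to a Wednesday. A minimal sketch using his own dates:

```python
# Minimal sketch of the weekday-aligned year-over-year comparison:
# step back 52 weeks (364 days) rather than one calendar year, so the
# prior-year date falls on the same day of the week.
from datetime import date, timedelta

def weekday_aligned_prior(d):
    """Return the date 52 weeks earlier, which is the same weekday."""
    return d - timedelta(weeks=52)

this_year = date(2008, 10, 1)                 # a Wednesday
last_year = weekday_aligned_prior(this_year)  # 2007-10-03, also a Wednesday

print(last_year)
print(this_year.weekday() == last_year.weekday())
```

This teases out the day-of-week variable at the cost of drifting one or two calendar days per year, which matters for date-pinned events like holidays.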

  7.
    Ned Kumar says

    Avinash- interesting post on a prevalent scenario.

    My personal opinion is that the y-o-y (year-over-year) and m-o-m (month-over-month) is not built to conclude causality. If that is the goal (to see if your action had any effect on the bottom line), then as you point out one should do a segmented drill-down with pre/post, target vs control etc.

    IMHO, y-o-y and m-o-m type charts/data should really be used for either a) benchmarking and/or b) flagging or highlighting. One, this is a great way to get some directional trends and have a benchmark understanding of how you performed versus i) last month, ii) last year, iii) business plan, etc. Second, anytime there is an inconsistency in the yoy or mom trends (up last year/month, down this year/month, or vice-versa), we can flag these points (or annotate, to steal your explanation), which can then be drilled down further for a better understanding.

    And lastly, an alternative to a straight month-over-month type trending is to use a moving-average trend (averaged over 'X' number of months depending on your business) that will reduce the seasonal effects.
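Ned's moving-average alternative is easy to sketch. The monthly visit numbers below are invented, with an artificial December spike to show how a 3-month trailing average damps a seasonal jump:

```python
# Minimal sketch of a trailing moving average used to smooth a monthly
# series before trending it. The visit numbers are invented, with a
# deliberate seasonal spike in the final month.

def moving_average(series, window=3):
    """Trailing moving average; the first window-1 points are omitted."""
    return [sum(series[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(series))]

visits = [100, 102, 98, 101, 99, 103, 100, 102, 101, 99, 100, 160]  # Dec spike

smoothed = moving_average(visits, window=3)
print(smoothed[0])    # 100.0: a flat stretch stays flat
print(smoothed[-1])   # ~119.7: the 160 spike is diluted across the window
```

As the comment notes, the smoothing cuts both ways: it reduces seasonal noise, but it will also mask exactly the sharp "diffs" some analyses are hunting for.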

  8.

    Nick: Wonderful suggestion! I should have thought of it (what with me being a proponent of Web Analytics 2.0 and all that!).

    Your stress on tribal knowledge not just about your own company but also from your competitors is a very appropriate one as well. Thanks.

    Peter: I grew up in the traditional BI world and forecasting (and planning) was a key part of an Analyst's job but unfortunately on the web we are pretty bad at it.

    Partly it is our fault, we simply have not had the discipline. Partly it is that doing forecasting on the web is closer to voodoo than a science (too many variables, too many sources of data, too many holes). Time will hopefully change that.

    Nonetheless your point is an excellent one.

    Dr. Pete: I don't think there is a bigger fan of testing than me. :)

    Ellen: Thanks so much for sharing a practical example of doing this kind of trend comparison!

    Rahul: Good point. If one is comparing daily trends then it is mandatory to line up the days of the week, else it looks silly (as it does in some web analytics applications).

    But for month over month or year over year comparisons some of it might be a wash, simply because of the "Rome was not built in one day" angle. :)

    Ned: Great suggestions. I think the idea of indexing against a benchmark or goal is an excellent one. I think Peter had also suggested it. My thought to him was that we online "folks" have not yet developed the discipline related to forecasting and testing. Partly because the web is complex, partly because we just have not done it much.

    I am torn on moving averages. They do make sense in some cases but in many others the point of the analysis is to identify the trends and, as you say in the first part, the "diffs" and moving averages can mask that.


  9.

    Working in retail banking in a developing market (Brazil), comparing trends over time can be difficult due to the strong growth in internet penetration (and as skill/comfort levels improve).

    Sure, perhaps we grew 40%, which sounds great, but was that down to our efforts or market trends? The big issue is determining what was caused by the environment and what is down to you.

    To make comparisons, I've found that looking on a per-customer basis makes life easier. E.g. customers and transaction numbers might grow, but when you look carefully, the number of transactions/customer might drop from 15 to 12 (despite the large increase in transactions due to the growth in customers).

    PS: Avinash – I'll hire you at $10/hour!!

    "Or you just hired me to do consulting for you this year at $10 per hour? Etc etc."

  10.

    Hi Avinash:

    In my own experience the tribal knowledge as well as % variations on current period against previous period (Ellen's comment) work fine.

    I can suggest one more method widely used in econometrics: to smooth seasonal distortions, it is pretty common to calculate the average of the 3 previous months, which allows us to compare against the previous year's smoothed trend.

    It is the same principle behind the weekly and monthly graph feature in Google Analytics: by aggregating data over a wider time period you eliminate peaks and distortions, making comparisons easier and clarifying trends.

    I hope my explanation was clear enough.

  11.

    Seb, how do you track on a per customer basis? That is an interesting idea to track customers, or perhaps a sampling of customers that stays the same for every analysis?

    Justo, so you take a 3 month average as a basis for comparison? Why 3 months? Is that taking the quarter numbers and comparing?

  12.

    I have found it's imperative to dig in and segment out channels, products, or customers to really see whether an upward trend is as good as it sounds at a high level. Another tip is to compare two KPIs against one another. Say your revenue is up 15%: that is normally good, unless your site traffic is up 40%. Context, context, context!

  13.

    I too think your ideas are very interesting. Congratulations on your post. Thank you!

  14.

    Ellen (11):

    Yes, we can take a 3 month average as a basis for comparison against the same period of the previous year. Taking a wider period for averages has 2 disadvantages: more information is required, and the lagging effect on the data increases.

    Why 3 months? Just because in that way we can see only strong trends over time. As an example: a downturn may be produced by circumstantial effects, but if during the last 3 months the trend is still going down, you need to be concerned about it.

    It is not the same as taking quarter numbers, but it is pretty close.
    Why is it not the same? Because by using a 3 month average you are able to make forecasts based on more time points than if you try to do it with only 4 each year.

  15.

    Great post.

    Nick: I got a good chuckle reading this part of your post:

    "Does the M/M or Y/Y trend rise because of that great campaign, or because your closest competitor just spend 10x buying lots of TV ads?"

    I remember being in a web analytics meeting years ago. The company had seen a big increase in a dozen or so metrics and we were all trying to determine what caused what. The web guys said it was their work re-organizing / optimizing the site; the marketing guys claimed it had nothing to do with the web guys, but was the result of their new strategies; the boss concluded that it was almost all due to seasonal factors (warmer temps, more sunny days) and the demise of a competitor, and that neither the web guys nor the marketers could claim too much responsibility for it!

    I am curious about one thing – how are people handling the variations in # of business days in YOY and MOM comparisons? I've seen it addressed a number of different ways in a number of different formats…


  16.
    Sarah Madison says

    Our own personal way is probably looking at far shorter trends than 3 months. I think that would be far too long in our own particular business. I think one of our core strengths is the fact that we can adjust to things so quickly compared to the larger, often cumbersome corporates and large enterprises.

    I think it can depend on how close you are to the website as to how you see trends and patterns in your mind's eye. We perhaps only have about 300 pages on our website and I know them so personally, how they interact with each other etc., that I can usually relate a change in one place to something happening somewhere else, be it on-site or off-site. These are often the things that show as the peaks and troughs.

    Overall almost every website with ongoing work to it has seen a consistent upward trend in general traffic over the last few years I would highly suspect, that is natural saturation.

    For larger companies with bureaucracy etc thrown in and with perhaps thousands of pages and multiple websites then I can definitely see the need for far more 'break down' and 'further segment' approach and due to the amount of data a longer term trend analysis would make sense.

  17.

    Google trends for our website and (for lack of a better term) competitors sites has gaps in its data, especially in the last year. Any idea why this happens? See smith.edu and mtholyoke.edu for example.

  18.

    Jeff: Sometimes if a strong enough signal is not available then there will be gaps for that time period. You could try alternative tools, say like Compete:



  19.

    Dear Avinash,

    Great work! In regards to your advice to segment out direct traffic as a "clean trend." You mention that trends in direct traffic are normally related to people remembering your site and showing up, or having enough engaging content (we are a content site) that they come back over and over. I have always thought the same thing. But, I am confused by something I have been tracking about my site. We have a MUCH higher percentage of new visitors from direct traffic than we do from almost any other segment (almost 88%). I thought that direct traffic was normally returning visitors? Could you please offer some insight into why we might be getting this type of trend?



    Clark Feusier


  1. […]
    Keep in mind that data is more than just numbers. Your data must tell a story, and that always requires context. Following Avinash Kaushik’s advice, “the cheapest way to give context to your performance is to use comparisons to other time periods.” So, before you draw any conclusions from the numbers, he suggests asking a couple of questions like:
