Off-Shelf Alerting tools are simple

They monitor sales at the product-store level to spot when the difference between “what you did sell” and “what you should have sold” is unreasonably large. 

The crudest tools simply ask the user how many days of zero sales counts as a problem, but these “guess the number” systems are practically unworkable.  Even a simplistic tool should do better than this.

Without giving away any secret recipes, let’s go through a relatively easy approach based on real statistics.  It will work better than “guess the number” methods and start to give you a feel for what a real solution can do.

This is what we want to get to, a rule like: “If you should have sold more than X (and you’ve sold nothing), call an off-shelf alert.”

Beware, there is math beyond this point

It's not too bad though, and I promise: no equations, no math notation.  Stick with it.

First, let’s assume that retail sales follow a Poisson distribution.  A Poisson distribution is often used for modeling retail sales because it’s:

  • discrete, meaning it only models unit sales in whole numbers;
  • never less than 0, so we are not implying negative sales;
  • simple, needing only one parameter: average demand;
  • often a reasonable approximation to reality.

Furthermore, let’s assume we have now sold nothing for product X at the target store for 3 days, and that prior to this point our average sales over a 3-day period were 2.3 units.  This is what a Poisson distribution looks like for an average demand of 2.3 units.

It shows the probability of actually getting sales of 0, 1, 2, and so on up to 10 units in the 3-day period.  As you can see, there is a little over a 25% chance of selling exactly 2 units and about a 20% chance of selling 3.  It’s also possible (but very unlikely) that you could sell 8, 9, 10 or even more than 10 units.
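If you’d like to reproduce the chart’s numbers yourself, here is a minimal Python sketch (standard library only; the helper name is my own):

```python
from math import exp, factorial

def poisson_pmf(k, mean):
    """Probability of selling exactly k units when average demand is `mean`."""
    return exp(-mean) * mean ** k / factorial(k)

mean = 2.3  # average sales over the 3-day window, as in the example
for k in range(11):  # probabilities for selling exactly 0 through 10 units
    print(f"P(sell exactly {k:2d} units) = {poisson_pmf(k, mean):6.1%}")
```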

Now, if this distribution really is a good representation of reality, how odd is it that we actually sold nothing at all in the most recent 3-day period?  The chart tells us that seeing no sales will happen about 10% of the time just through random chance.   We probably don’t want to call an off-shelf alert when there is such a high chance that nothing is wrong, so we wait…

When, a few days later, we reach the point at which we should have sold 4.6 units, and have still sold none, the probability of selling nothing through pure random chance is down to just 1%.

That’s the sort of risk you might be willing to take to call out that something seems to be wrong.  Ignoring errors in the data, in your estimate of average sales, or in your assumptions (perhaps it’s not a Poisson after all), you will be wrong about 1 time in 100.

[Figure: Poisson distribution with mean 4.61]

(FYI: you can model this easily in Excel using the POISSON.DIST function; for example, =POISSON.DIST(0, 4.6, FALSE) returns the probability of exactly zero sales when average demand is 4.6 units.)

If you really want to be sure, wait a little longer.  At the point that you should have sold 6.91 units, there is only a 0.1% chance that the zero sales you are seeing are due to random chance: it is far more likely, in fact, that there really is some issue inhibiting sales at the shelf.
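If you’d rather verify those percentages than take them on trust, the zero-sales case is especially quick to compute; a small sketch:

```python
from math import exp

# For a Poisson distribution, the chance of selling exactly zero units
# simplifies to exp(-average demand).
for mean in (2.3, 4.6, 6.91):
    print(f"expected {mean:4.2f} units, sold none: "
          f"{exp(-mean):.2%} chance it's just random luck")
```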

Of course, had you called it correctly after lost sales of just 2.3 units, you might have saved 4.6 units of incremental sales (6.9 – 2.3 = 4.6).  Waiting helped you gain accuracy, but it also cost you sales while you waited.

Think of this probability of getting zero sales as a “sensitivity level”.  You can set it to whatever you feel most comfortable with (and find the associated average-sales trigger point in Excel).
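Because the zero-sales case is so simple, you can also work backwards from your chosen sensitivity level to the trigger point it implies, rather than hunting for it by trial and error. A sketch (the function name is mine):

```python
from math import log

def trigger_point(sensitivity):
    """Average sales at which the chance of zero sales drops to `sensitivity`."""
    return -log(sensitivity)

for s in (0.10, 0.01, 0.001):
    print(f"sensitivity {s:.1%}: alert once you should have sold "
          f"{trigger_point(s):.2f} units and sold none")
```

The three trigger points from the worked example (2.3, 4.61 and 6.91 units) drop straight out.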

Setting the right sensitivity level, then, is a balancing act:

  • choose a high sensitivity level and you will
    • generate more alerts
    • catch problems earlier
    • but a higher proportion of your alerts will be wrong;
  • choose a lower sensitivity level and you will
    • generate fewer alerts
    • more of which will be right
    • but you will have lost more sales while waiting for the alert to be raised.

So, to get a decent balance between accuracy and the number of alerts, your rule might look like: “If you should have sold more than 5 units (and you’ve sold nothing), call an off-shelf alert.”
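Put together, the whole rule fits in a few lines. Here is an illustrative sketch; the function name and the 1% default sensitivity are my choices for the example, not a prescription:

```python
from math import exp

def off_shelf_alert(expected_units, observed_units, sensitivity=0.01):
    """Alert when nothing has sold despite expected sales high enough that
    a run of zeros is implausible at the chosen sensitivity level."""
    if observed_units > 0:
        return False  # something sold, so the product is presumably on the shelf
    return exp(-expected_units) < sensitivity  # is zero-by-chance too unlikely?

# Expected 5 units over the window, sold none: raise the alert.
print(off_shelf_alert(expected_units=5.0, observed_units=0))  # True
```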

This will give you dramatically better results than just “guessing the number”, but it’s really only a starting point.

The difference between a great OSA tool and a simpler one is the level of accuracy it brings to determining both “what you should have sold” and just how unusual “what you did sell” really is.  Better algorithms will yield both more alerts (capturing a larger proportion of the real problem) AND more accurate alerts (more of them are right).   

Here are some of the approaches a better tool might employ.

  • Using more appropriate distributions of sales. 
  • Getting more accurate estimates of “what you should have sold” by leveraging similar stores, similar products and similar time periods.
  • Using predictive models to account for day of week, day of month, seasonality and promotional activity.
  • Building teams of different models that collectively perform better than each individual member.
  • Reporting on sales lost while waiting for alerts and while waiting for intervention.
  • Estimating the lost sales that could be recovered by your immediate intervention (to prioritize your work).
  • Incorporating feedback from field operations as to what was found at the store to further improve accuracy.