Ignore SNAP and your product may not be on the shelf when it's most needed - and that means lost sales.

SNAP is the "Supplemental Nutrition Assistance Program" (formerly known as "Food Stamps") in the United States, which puts food on the table for 46 million people every month.

SNAP can drive big spikes in sales at the store. These spikes are large but short-lived and often pass undetected by reporting and forecasting systems.    

Our whitepaper covers the causes of SNAP spikes, why they vary so much across regions and products, how to identify sales spikes and what you should be doing to maximize sales.

Visit our website for more information.

 

 

Better Business Analytics - 2013 New Year's Resolutions

10 resolutions for Better Business Analytics

 

Firstly - thank you, Santa, for reading my Christmas list. I love the T-shirt: "Statistics means never having to say you're certain".  With the holiday season coming to a close, my thoughts are turning to the New Year and even a certain excitement about getting back to work.  Time for some New Year's resolutions!

 

1. Remember that reporting is NOT analytics

One of the key misunderstandings of analytics in the business world (and perhaps part of the reason good analytics is not well recognized for the value it creates) is that managers think reporting IS analytics.

Per my previous post "Reporting is about what happened; Analytics is about answering your questions ... well-built Analytics or Predictive Models can find insights and opportunities that you will never find by any other means."

Do good analytics, shout about your success, spread the word.

 

 

2. Never, ever build to a manager's spec.

If, when you visit your doctor, he simply writes the prescriptions you ask for, I suggest you find a new one, quickly. Unless you possess this expertise yourself, you must rely on your physician to diagnose your symptoms and prescribe action. To do otherwise would be very foolish.

When you (the analytic expert) are asked to build reports or models without explanation as to what it is for - don't do it. Business managers can help you enormously to understand the business context and the killer-questions that need to be answered, but unless they are also skilled in analytics you must bring that expertise to decide how to provide an effective solution.

 

 

3. Make sure your project is worth doing

This is really easy and yet so often overlooked. If you are looking for a million-dollar savings opportunity, you are unlikely to find it in a part of your operation that totals $2 million in cost. However analytically interesting a project may be, business analysts are paid because they generate a good return on investment. Do some simple estimation to see how big an opportunity could be before unpacking the big guns. See some examples here.

 

4. Use the right tools for the job (or everything looks like a nail)

To recount the old adage: if all you have is a hammer, everything looks like a nail. You must have met people like this: experts in one particular technological or analytic approach, for whom every single project they undertake is somehow ideally suited to that approach. I've known a few: Excel-experts, database-divas, simulation-specialists, statistics-sages, optimization-??,...

Do you build everything in Excel, or in SQL, or your favorite BI tool? Are you writing database code in your favorite (non-database) programming language? Has it been a while since you acquired the skill to wield a new tool effectively? Perhaps it's time to extend your skill-set.

 

 

5. Learn your craft (or know when to call for an expert)

The difference in skill-level between an analytic-expert and an amateur is huge. In my experience that skill-gap does not result in +10% incremental return on investment, it's the difference between a successful project and a relative failure.

Know when you are getting beyond your skill-level and either set yourself to learning quickly or call in the cavalry.

 

 

6. Remember that you are first and foremost in sales/change-management

In all honesty, the "analytics" may be the easy part of business analytics. Even the most technically adept analytics fail from a business standpoint unless action is enabled. Knowing that implementation of your project could save 20% in cost may give you a nice warm-fuzzy, but unless it is implemented you have wasted your time and the company's money. Do that repeatedly and you should brush up your resume.

To be implemented, a result must be repeatedly and effectively sold-in to an organization. Take the time to present your results as a simple compelling argument for change and deliver that message consistently and often.

 

 

Analytics is a lot of fun, but leave a little space for other things in life

7. Lose that 20 lbs
8. Exercise more
9. Eat healthy
10. Have fun!!


What do you think should be on the list?


Wishing you an Analytic New Year
 

Better Point of Sale Reports with Variance Analysis (update)

I've just revised and updated one of the most popular posts on this blog, adding more detailed descriptions, a graphical view of the output, and a clearer path to action based on these reports.  Follow the link below to the updated post.

Better Point of Sale Reports with "Variance Analysis": Velocity, Distribution and Pricing... oh my!

Routine, weekly point-of-sale reports tend to look very similar.  For various time buckets (last week, last 4 weeks, year to date) we total sales in both currency and units, then compare to prior year.  Add in a few more measures to look at retail pricing, inventory, or service-level metrics and you may struggle to make it fit on a page.  And from a CPG standpoint, POS reporting is only half of the story: a CPG's sales targets are not based on POS, they are based on shipments to the retailer.  How can you get a good overview of POS and reconcile that with Shipments all in one report?

Better Business Analytics - Christmas list

It's that time of year again: my kids have written, re-written and re-re-written their Christmas lists now so we all hope Santa will read them carefully and take notice.

With just a few days left before the holiday season hits I wanted to do something a little more fun, so I've pulled together a list of things that I think every Business Analyst should want.  Some are free to acquire, just costing your time to learn, others you may wish to ask Santa (or your CFO) to provide.

64 bit Operating system, applications and more memory

32-bit operating systems cannot address more than 4 GB of computer memory (RAM), regardless of how much you load onto your hardware.

Forget "Big Data" for a moment - you can fill 4GB relatively easily with desktop tools - if you want to do any serious data processing in your laptop/desktop environment you will need more than that.  RAM is cheap, easy to upgrade, and most modern laptops will take up to 8GB without any problem.
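To see how quickly ordinary desktop-scale data hits that ceiling, here is a back-of-envelope sketch; the row and column counts are illustrative assumptions, not anyone's real dataset:

```python
# Back-of-envelope: how fast everyday analytic data fills 4 GB of RAM.
# Assumes 8-byte (64-bit) floats, a common in-memory representation.
BYTES_PER_VALUE = 8
rows = 50_000_000          # e.g. store/item/day records for a year (assumed)
cols = 10                  # a handful of measures per record (assumed)

gb = rows * cols * BYTES_PER_VALUE / 1024**3
print(f"{gb:.1f} GB")      # ~3.7 GB before you've computed anything
```

One modest table and a 32-bit machine is already out of headroom.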

Max out your RAM

Note: 8GB of RAM for the laptop I am writing this on is currently $41.99 at www.crucial.com

Hard-drive upgrade

Solid State Disks (SSD) provide huge speed improvements over their spinning, hard-drive counterparts.  If you are crunching numbers from a file or a local database, the hard-drive may well be slowing you down.  A modest investment (the 480 GB Sandisk SSD is currently available on Amazon for $360) can save you a lot of time.

Serious computing power

If you have access to a high powered server this may be of less use, but I was surprised recently to find out just how much computing power you can now get in a desktop workstation.  This beast with 2 CPUs (32 cores), 128 GB of RAM and 2TB of super-fast SSD hard-drive costs a little over $6000.  That's an awful lot of power (more than most servers I've worked with)  for the price of 2-3 good laptops.

Excel 2010 with PowerPivot.

Excel is a superb tool for interacting with your data for prototyping and occasionally for delivering models and results.  It can't do everything and for many problems you will need to turn to more specialized tools but any analyst that tells you they don't use it at all is probably not a very effective analyst.  With the ability to write VBA, to customize and embed your own calculations, it can be a very powerful modeling tool indeed.

With Excel 2010 and the free PowerPivot add-in, Excel can now embed powerful BI capabilities too: integrate data from different sources; load enormous amounts of data, vastly more than the worksheet limit of about 1 million records; and define more complex measures and functions to apply to this data with DAX (the Data Analysis eXpressions language).

If you are not already there, upgrade now to 2010, and make sure it's the 64-bit version to blow past the 4GB memory limitation.

Note: Office 2013 may be just around the corner, but with Microsoft's latest offer if you buy now (Oct'12 thru April '13), you get Office 2013 for free when it's released.  Microsoft's Office Pre-Launch Offer

Time to learn R for statistical computing

OK - I know - those of you who are heavily invested in another package (SAS, SPSS, Statistica, ...) do not get this.  Those are great tools and why should you change?  Well, I'm not suggesting you swap one package for another - I'm suggesting that there is room in your head for more than one tool and R has a lot going for it.

Pros

  • According to Revolution Analytics, R is now available as an embedded, in-database analytics engine for Oracle, SAP HANA, Netezza, Teradata and others.  This is a very exciting development, allowing advanced statistical capabilities to be called from within the database from SQL: handle routine analytics in-line and on-demand.
  • It's free - really - free.  R is an open-source tool you can download from http://www.r-project.org/
  • It has packages for an enormous variety of statistical analyses; you will probably never use more than a small fraction of them.
  • It has great graphical capabilities for visualization.
  • It's callable from .NET languages too via RDotNet.  The interface is not well documented, but it does work.

Cons

  • The user interface is "ugly" unless you love command line applications - I don't.  RStudio does help with this.
  • If you are not used to vector based programming languages it may take you a while to grasp.  (try "The Art of R Programming" for help.)
  • There are a lot of commands to remember, I use my cheat sheet a lot.

Time to learn SQL

In reality, the time to learn SQL was probably about 20 years ago, but if you have not yet done so, catch up quickly.  SQL (Structured Query Language) is the programming language for relational databases.  Whenever you interact with a database, it is some dialect of SQL that selects, updates or inserts your data efficiently.

Most of your data will now come to you from a database of some form.  It is a common requirement to integrate data from multiple sources (without losing any), filter, aggregate and sort as a precursor to more advanced analytic routines.  This is what the database does superbly well - if you're doing this in Excel, or indeed any non-SQL environment, you are working too hard.

While you are at it, get a good grasp of database design principles and normalization too.
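As a minimal sketch of letting the database do the filtering, aggregation and sorting, here is SQL run through Python's built-in sqlite3 module; the table, columns and figures are invented for illustration:

```python
# Sketch: aggregate in SQL rather than in a spreadsheet.
# The "sales" table and its numbers are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (store TEXT, item TEXT, units INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("S1", "A", 10), ("S1", "B", 5), ("S2", "A", 7), ("S2", "B", 13)],
)

# The database does the heavy lifting: total units by item, sorted.
rows = conn.execute(
    "SELECT item, SUM(units) AS total FROM sales "
    "GROUP BY item ORDER BY total DESC"
).fetchall()
print(rows)  # [('B', 18), ('A', 17)]
```

The same GROUP BY / ORDER BY pattern scales from four rows to four hundred million; the spreadsheet equivalent does not.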

2013 - the year for Column Oriented Databases ?

Column storage is not a new idea in relational databases, but perhaps it's an idea that is about to mature.  Regular, relational databases physically store rows of data together which is very efficient for transactional databases that update and retrieve a few rows at a time.

A column-oriented database stores entire columns of data together.  This is very efficient for systems where aggregates are computed over large numbers of similar rows, such as in data warehouse reporting and analytics.  Notably, SQL Server 2012 released its first direct support for column storage this year.  It's not perfect, particularly as the implementation limits how you can write to the database, but it is fast.  My own testing on a large database showed a 10-fold increase in speed on large aggregation queries and almost linear scaling as the number of fields chosen (and hence the number of columns that must be accessed) changed.

I've also had excellent performance from InfiniDB, available as a free "community edition", though I suspect I have not fully tested its capability with tables that only have a few hundred million records :-)

Column storage is exceptionally fast and could allow for significant changes in system architecture for reporting and analytic tools.  How about POS analytics straight from database to reports, without the need for any custom architecture, pre-aggregation, cubes or waiting on batch processing?  Want to update a few item attributes and re-run your reports now?  No problem!  (Embed in-database analytic capability so you can do more than add/subtract/multiply/divide and you have a serious number-crunching platform.)
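A toy illustration of why column storage helps aggregation (not a benchmark, and the data is invented): summing one measure only touches that column's values, while a row store must walk every full record:

```python
# Row storage: each record is a tuple holding every field.
row_store = [
    ("S1", "A", 10, 1.99),
    ("S1", "B", 5, 2.49),
    ("S2", "A", 7, 1.99),
]
units_from_rows = sum(r[2] for r in row_store)   # touches every whole row

# Column storage: each field is stored contiguously on its own.
col_store = {
    "store": ["S1", "S1", "S2"],
    "item":  ["A", "B", "A"],
    "units": [10, 5, 7],
    "price": [1.99, 2.49, 1.99],
}
units_from_cols = sum(col_store["units"])        # reads only the "units" column

assert units_from_rows == units_from_cols == 22
```

On disk the effect is the same but much larger: an aggregate over 2 of 50 columns reads roughly 4% of the data instead of all of it.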

The #1 item for your list - a (much) bigger dry erase board.

At times, it seems I can't think clearly without a dry-erase board and pen.  My own workhorse is an 8'x4' board that sees heavy daily use, but bigger ideas need more space.  Amazon has a dry-erase paint on offer that I may have to try - enough for about 30 linear feet of wall for $500.

Jumbo Whiteboard Dry Erase Paint Clear 240 Sq Ft

What do you think?

So what do you think should be on every good Business Analyst's Christmas list?   Am I missing this year's top gift?  What should a "naughty" analyst receive - more bad data?  Let me know in the comments section.

P.S.  Santa - if you're reading this, I rather like this T-shirt from CafePress.com too.

SNAP Analytics (2) - Purchase Patterns

Roughly 15% of the United States population receives SNAP funding to help pay for food and beverage items.  We know that when SNAP (food stamp) funding is released in each state (see SNAP Analytics (1) - Funding and spikes), it is accompanied by significant sales spikes on some products.

If 15% of all shoppers visit your store within a 2-3 day period, you should see a sales spike on everything they buy, SNAP funded or not.  So, why do we not see a spike on everything?  Why are some spikes so much bigger than others?

The chart below shows (simulated) daily point of sales data for a SNAP-responsive product in one state.  The horizontal axis shows days of the month, the vertical shows a sales 'Index' relative to the average day.  You can see that sales around the 1st and 10th of the month are roughly double what they are on other days.

If you also know that this state releases SNAP funding on the 1st and 10th of each month, you might assume that the SNAP shopper takes their newly-charged EBT card and, within 2-3 days, spends the lot.

A relatively high proportion of SNAP funding is spent quickly and this ties well with the idea of a "stock-up" trip.  (If you have the capability to see basket size by date, you should be able to confirm that baskets around SNAP release dates are substantially larger than otherwise.)
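As a sketch of the 'Index' view described above, here is one way to index daily sales against the average day and flag candidate spike days.  The sales figures are simulated and the 1.3x threshold is an illustrative assumption, not a recommended cutoff:

```python
# Simulated daily unit sales for one product in one state (invented data);
# spikes sit around the 1st and 10th, matching the funding dates above.
daily_units = {1: 200, 2: 190, 3: 110, 10: 205, 11: 185, 12: 105, 20: 95, 25: 100}

avg = sum(daily_units.values()) / len(daily_units)
index = {day: units / avg for day, units in daily_units.items()}

# Flag days selling at least 1.3x the average as candidate spike days
# (the threshold is an assumption; spike days also inflate the average).
spike_days = sorted(day for day, ix in index.items() if ix >= 1.3)
print(spike_days)  # [1, 10]
```

With daily data by store and state, the same indexing lines the spikes up against each state's published funding calendar.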

So why do I think that this does not represent all SNAP spending?  

  • Some products are just not good candidates for a once-a-month stock-up trip. Milk for a month? I don't think so. Bananas seem to go soft in my house if I forget them for 1-2 days. A month's supply of a product may take up more room than I have available in the cart, car, refrigerator, freezer, or store cupboard. Some of this will have to wait.
  • Some products are more attractive for stock-up trips: larger sizes of frequently consumed products that are stable (on shelf, in fridge or freezer), perhaps along with "treats" that can be purchased while there is a little extra money available.
  • According to the USDA, in 2011 the average monthly SNAP benefit per household was $284. Remember that this is $284 spent on SNAP-eligible products only: leave out non-food/beverage items, hot foods, ready-to-eat items, alcohol and tobacco. Can it be done? Yes, but it's going to be tough to fit into one shopping cart, or in your car, or in your kitchen. $284 is the monthly benefit for the average household of 2.1 people. Could a family of 4 realistically buy even most of their food once a month? Even if the SNAP shopper could buy all their food and beverage items in one trip, they still need other grocery items, paper goods, cleaning products etc. that take up additional space.
  • Finally, the countrywide adoption of EBT cards, rather than paper vouchers, means the SNAP shopper can spend as little as they need right now without losing any of their benefits. (Something the similar WIC program is still working on in most states.)

Despite the big spikes in sales we see for some products around SNAP funding dates, the SNAP shopper is not buying all their monthly supplies in one trip.

  Some products will be much more responsive to SNAP funding than others because they fit well with the SNAP shopper's trip-type and taste preferences.

So, how do you know if your products are responsive to SNAP funding dates?

If you have access to payment details by basket, it's a relatively simple process of querying your data and correlating against SNAP release dates.  If you have daily point of sale data, you need to build predictive models against total sales rather than SNAP-specific sales (Do you need daily Point of Sale data?).  In either case, you are dealing with very large quantities of data and need the right tools, and the knowledge to wield them effectively (Bringing your analytical guns to bear on Big Data; Data handling - the right tool for the job).

If you do not know which products, stores and dates will see spikes in demand how can you ensure product is on-shelf?  Ignoring SNAP may be costing you sales.

If you're ready to get started - call me.

SNAP Analytics (1) - Funding and spikes.

Back in August I took a quick look at SNAP, the US government's "Supplemental Nutrition Assistance Program", formerly known as "Food Stamps" (see What's driving your Sales? SNAP?).

In 2011, approximately 15% of the US population received SNAP benefits that they can spend on most food and beverage items in store.  SNAP funding has doubled in the last 3 years.

SNAP can create large spikes in demand at the store and yet, because of the way these funds are distributed, this is typically hidden from analysts looking at aggregate data. (see Do you need daily Point of Sale data?... )

If you do not know which products, stores and dates will see spikes in demand how can you ensure product is on-shelf?  Ignoring SNAP may be costing you sales.

This is the first in a series of posts covering Analytics around SNAP and opportunities for driving incremental sales.

The table below shows the days of the month (highlighted in red) when each state distributes SNAP funding (click on it to enlarge):

US SNAP funding patterns by state and day of the month.

At the top of the table, we have the States that distribute all their funding on just 1 day of the month. Of the 54 States, Districts and Territories shown, just 10 distribute on one day, and (thankfully) they are not the ones with the biggest sales. But if you are selling a SNAP-responsive product, you will want to ensure you have plenty of stock in-store and on-shelf on the first in these states.

The States are ranked by the impact SNAP distribution is likely to have within each state: the size of the sales "spike". The fewer the SNAP distribution days, the higher the spike. Perhaps less obviously, the closer the distribution days are to each other, the more shoppers overlap in store and the higher the sales spike. Consequently, Utah with 3 dispersed distribution days may have slightly lower sales "spikes" than New Jersey with 5 distribution days in a single block.

Somewhere along the line between Nevada (ranked #1) and Missouri (ranked #54), SNAP stops mattering to you because the distribution of funds is so dispersed through the month that you see no sales spikes at all.

74% of funding is distributed on 10 days or fewer, and a 10-day distribution can still generate, on average, a 20%-40% increase in sales $$.

BUT, some products are more responsive to SNAP distribution than others, and some stores will have many more than the average 15% of their shoppers eligible for SNAP. So, within the same State, expect huge variations in the size of demand spikes. It may not be the average that's causing a problem.

Do you know which products, stores and dates are at risk? If not, how do you know how much demand went unfulfilled?  

next> 

SNAP Analytics (2) - Purchase Patterns

What's the biggest supply chain issue for CPG/Retail?

This morning I picked up a post for this blog from Visicom.  In summary
"We asked dozens of retail store managers this week: what’s the biggest issue you are having with product delivery by vendors? Know what they said? The biggest problem for most retailers is out of stock products."
Despite the low, probably unrepresentative sample size (dozens?), I think there is a ring of truth to this. But is product delivery the biggest supply chain issue for CPG/Retail?  Not even close.


Of course it's a problem if a vendor cannot deliver the goods, particularly if this is to support promotional activity.  A more flexible and responsive supply chain can reduce this problem and it's a worthy goal to improve that capability.

But the biggest issue with the supply chain is still the replenishment of the shelf from inventory the store already has.  The one moment of truth that matters is when a shopper is looking for a product: is it there on the shelf?  With store systems typically reporting that they have sufficient inventory to support sales ~99% of the time, why do one-off reports repeatedly show that on-shelf presence is closer to 90%?

Part of the problem is that measuring on-shelf presence is relatively difficult and expensive, because you can't rely on the store's systems to tell you the answer:

  • A large part of the problem is so-called "phantom inventory": inventory that appears to be at the store but in reality has been lost to theft, unrecorded damage, or sales recorded as another product. Or perhaps it is literally "lost": it's in the store, but if neither you nor the customer can find it, it's as good as useless.
  • Whether or not product is on-shelf can also change (easily) within the day. Measure it late at night, after much re-stocking has been done, and it may look a lot better than it does at 6:30 pm on a Friday night.
  • Even if product is on a shelf, it may not be on the right one, or it may be effectively removed from sale by having other product placed in front of it or being out of reach (like individual cans of cat-food at the back of a 4'-deep "warehouse" shelf).
So, it's difficult to get good numbers on how bad the problem really is, but ~90% on-shelf presence seems to be a reasonable estimate.  This ties with my own experience and is borne out repeatedly in ad-hoc studies.

A number of companies (RSi, TR3, TrueDemand - now owned by Acosta) have built businesses on systems that monitor point of sale data and flag to the Retailer/CPG which products appear to be off-shelf, so someone can be rushed in to fix the problem: find lost inventory, identify phantom inventory, replace lost shelf-tags or just re-organize the shelf so there is room to replenish a product where it is supposed to be on the planogram.  This too is a good idea, but it will never pick up all off-shelf issues: these systems need multiple days, if not weeks, of zero sales to pick up a problem effectively.

A CPG may feel that this issue is entirely within the store's control, but I'm not so sure.  It seems to me that there are any number of things a CPG could do to make it easier for a retailer to stock shelves effectively.  Here are some hypotheses that I think would be worth testing:

  • It's easier to stock products that fit easily on the shelf: pack-size vs. shelf-space could play a big role.
  • 5lb cases get re-stocked more effectively than 45lb cases
  • products that are easy to identify in back-room storage are re-stocked more effectively
  • whole cases are re-stocked more effectively than cases that must be broken open.
  • some shelving fixtures see more off-shelf issues than others
  • some package-forms see more off-shelf issues
  • some departments routinely have more off-shelf issues
  • planograms are set based on "average" volume but off-shelf issues are driven by peaks in demand (see Do you need daily Point of Sale data?)
Is this the right list?  Probably not: it's certainly incomplete, and we would probably find that not all of these matter much to the outcome. But I do think it's the right approach.  With an appropriate sampling scheme to measure on-shelf presence and some predictive analytics, we could find (and quantify) what really drives this problem, then work to eradicate the root causes and get substantial improvements.

How does 2%-3% more revenue sound to you?  It may not seem like a lot, but with typical logistics costs representing 5%-10% of CPG sales, a 3% increase in revenue may well be the biggest thing supply-chain can do to improve the bottom line.

What do you think drives or prevents excessive off-shelf issues?   Or do you think I'm missing the point and there is a bigger issue to be found?  Let me know in the comments section below.

Better Business Reporting in Excel - XLReportGrids 1.0 released

XLReportGrids 1.0 released


XLReportGrids is a FREE Excel add-in that builds grids of visual reports, from a template, sized to fit the printed page.

Templates are just a range of cells in a worksheet that are driven by a pivot-table. Build templates with: charts, formulas, images, pivot-tables, text boxes, anything that can be added to a worksheet.


Truckload Transportation - are you paying to ship air?

How full is a "full" truck?   Not sure?  That's a shame, because when you contract for truckload freight, you pay for the whole vehicle, whether you fill it or not.  

 As I'll show you, the regulations around what constitutes "full" for weight are very complex.  In addition, the 3D jigsaw puzzle to pack product into the trailer space, distributing weight correctly and minimizing damage is exceptionally challenging.  

Get it wrong and you are paying to ship air.  

It is much easier to plan to approximate rules or guidelines than to figure out what's really going on, and it is common in the CPG industry to plan transportation loads based on such rules.  Unfortunately, these approximate rules are very dependent on what you are trying to load (as well as who made up the "rule"), so you will rarely encounter the same "rule" twice.  Typically, what you will see is based on weight, pallet positions or cube: rules that restrict what's loaded to maximum limits like:

  • 40,000 lbs of product
  • 2,500 cubic ft of product
  • 48 pallets of product.

None of these are right and they routinely result in shipping air.

Depending on the product and the vehicle, I can safely, and legally, load much more than 40,000 lbs of product, (much) more than 2,500 cubic feet, and many more than 48 pallets.

One particularly bad example I have encountered said "a truck is full when there is 38,000 lbs of product in it".  If I can find a way to legally and safely load 45,000 lbs in the trailer, it's as though 7,000 lbs of product just shipped free, effectively saving over 15% (7,000/45,000 = 15.5%) in freight cost.
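Worked through in code, using only the example's own figures: the same volume of product moves in proportionally fewer trucks.

```python
# The example's figures: a planning rule caps loads at 38,000 lbs,
# but this load could legally and safely take 45,000 lbs.
rule_limit = 38_000      # lbs per load under the (bad) rule
legal_limit = 45_000     # lbs per load actually achievable

# Same total volume shipped, so loads needed scale as rule/legal.
trucks_saved_fraction = 1 - rule_limit / legal_limit
print(f"{trucks_saved_fraction:.1%}")  # 15.6% fewer loads for the same volume
```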

So, why is this so difficult to get right?  Let's look at the regulations around weight.

Weight Limits

If your product is "heavy", you will probably hit a weight limit in loading.  "Heavy" in this instance is roughly 20 lbs per cubic foot or more.  Much less than this and you will probably hit a space limit first (see below).

The government sensibly places restrictions on how heavy a loaded vehicle may be and how that weight must be distributed to be carried safely.   

To get a feel for the complexity involved, here is a link to the relevant page from the US Department of Transportation.  Stay just long enough to get confused then head back here :-)

Bridge Formula Weights

To summarize (and simplify):

  • The total weight of the vehicle (including product, packaging, pallets, fuel, driver, etc...) must not exceed 80,000 lbs.
  • There are limits for the weight on individual axles (shown below in this graphic from the US Department of Transport)
Note for the mathematically inclined:
If anyone is really interested in calculating what weight should be on each axle, given a particular layout of product in the trailer, you need a little physics/engineering math.  Here is a great example of how a beam transfers load, with an interactive calculator.
With the right math, you can calculate the center of gravity of the product and how that weight is distributed to the axles.  Move the center of gravity forwards (e.g. by moving heavier product to the front) and you take weight off the rear wheels and transfer it to the tractor unit.  Quite how that weight then gets distributed to axles 1 through 3 depends on where the "kingpin" connection (between the trailer and the tractor unit) sits relative to the tractor unit's axles.  You can build such a model in Excel.
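The beam idea above can be sketched in a few lines: treat the trailer as a beam supported at the kingpin and the rear axle group, and take moments about the kingpin.  All weights and distances below are illustrative assumptions, not regulation values:

```python
# Minimal beam sketch: trailer supported at kingpin and rear axle group.
def support_loads(placements, span_ft):
    """placements: list of (weight_lbs, distance_ft from the kingpin).
    span_ft: kingpin-to-rear-axle-group distance.
    Returns (kingpin_load, rear_axle_load); moments taken about the kingpin."""
    total = sum(w for w, _ in placements)
    rear = sum(w * d for w, d in placements) / span_ft
    return total - rear, rear

# Same 20,000 lbs, biased to the front vs the rear of an assumed 40 ft span:
front_biased = [(12_000, 10), (8_000, 25)]
rear_biased = [(8_000, 10), (12_000, 25)]
print(support_loads(front_biased, 40))  # (12000.0, 8000.0): weight moves forward
print(support_loads(rear_biased, 40))   # (10500.0, 9500.0): weight moves aft
```

How the kingpin share then splits across tractor axles 1-3 needs the same calculation again over the tractor's own geometry.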

So, can you plan to a total rig weight of 80,000 lbs?  No, sorry: there are only so many options for how to lay out product in the vehicle, and there may be no layout that balances weight well enough across all axles to max out the 80,000 lb overall limit.  You need to find the layout that gets closest to that limit and live with the loss.

Alternatively, you may run out of space in the vehicle before you get close to a weight limit.

Space Limits

If product is reasonably light (low density) we will probably be constrained by space before weight becomes a problem.

A reasonably standard trailer's internal dimensions are approximately

  • 52' long
  • 8' wide
  • 8' (usable) height

That's a little over 3,300 cubic ft of space available to you.  However, you are typically loading with palletized product and you will not get to use most of this space.   Let's look at how well you can use the floor space first.

A standard pallet for US grocery is 40" wide, 48" long. The ability to fill the trailer floor-space depends a lot on how well these palletized units fit.  

If pallets will fit in the trailer "wide:wide" it's possible to put 30 pallet footprints in a 53' trailer.

If  the trailer is too narrow or product overhangs the pallet, you have to go to other configurations.   "Chimney stacking" maxes out at 28 footprints.  

If you can't turn pallets at all, "narrow:narrow" loads max out at 26 footprints (although with any overhang on the pallets at all you will likely get 24/25)

In each case there is floor space you cannot use.  Load "narrow:narrow" and you have already lost almost 20% of the available space.
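The floor-space arithmetic above can be sketched directly for a 40" x 48" grocery pallet; the usable interior (roughly 52', i.e. 624", by 96") is an assumption based on the dimensions given in this post:

```python
# Assumed usable trailer floor, in inches (from the post's ~52' x 8' interior).
FLOOR_LEN, FLOOR_WID = 624, 96

def footprints(pallet_across, pallet_along):
    """Pallet footprints if every pallet sits with `pallet_across` inches
    facing the trailer's width and `pallet_along` facing its length."""
    per_row = FLOOR_WID // pallet_across
    rows = FLOOR_LEN // pallet_along
    return per_row * rows

wide_wide = footprints(48, 40)      # turned pallets: 2 x 48" = 96" across
narrow_narrow = footprints(40, 48)  # unturned: 2 x 40" = 80" across
print(wide_wide, narrow_narrow)     # 30 vs 26 footprints
```

The sketch ignores chimney stacking and pallet overhang, which is why real "narrow:narrow" loads often come in at 24/25.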

Now let's look at vertical space.  There is much more variation in the height of trailers and in the height of their doors.  If the door height is restrictive, perhaps because the door rolls up inside the trailer, that will limit the product you can get into that trailer.  Let's assume for now that we can safely get to 8' high.

How much of the vertical space we can use depends on what we are loading.  Some product cannot be stacked, pallet on pallet, without causing damage.  Some pallets are too tall to allow anything (except an unusually short pallet) to fit stacked on top of it.  Very short pallets may be able to stack 3 or even 4 high.

If I assume 40" high palletized product, double stacked, we can use 80" of the 96" vertical space (83%).

Combine that with "narrow:narrow" loading and we max out a truck at 83% * 83% = 69% space utilization.  From a "cube" standpoint the trailer is only about 70% full, yet it's at capacity.  Hence rules like "the trailer is full at 2,300 cubic feet of product" when the trailer has over 3,300 cubic feet of space.

Depending on product dimensions and the ability to stack product we may be able to load much more than 2,300 cubic feet or much less.
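The arithmetic above can be sketched as a small calculator.  The trailer cube, pallet dimensions and footprint counts are the figures from this post; the function name and structure are mine.

```python
# Sketch of the cube-utilization arithmetic above.
PALLET_W_IN = 40          # standard US grocery pallet width (inches)
PALLET_L_IN = 48          # standard US grocery pallet length (inches)
FOOTPRINTS = {"wide:wide": 30, "chimney": 28, "narrow:narrow": 26}

def cube_utilization(footprints: int, pallet_height_in: float, stack: int,
                     trailer_cuft: float = 3300.0,
                     usable_height_in: float = 96.0) -> float:
    """Fraction of the trailer cube actually filled with product."""
    if pallet_height_in * stack > usable_height_in:
        raise ValueError("stack exceeds usable height")
    pallet_cuft = PALLET_W_IN * PALLET_L_IN * pallet_height_in / 12**3
    loaded_cuft = footprints * stack * pallet_cuft
    return loaded_cuft / trailer_cuft

# 40" high pallets, double stacked, loaded narrow:narrow:
u = cube_utilization(FOOTPRINTS["narrow:narrow"], 40, 2)
print(f"{u:.0%} of the trailer cube")   # ~70%: about 2,300 of 3,300 cu ft
```

Playing with the footprint count and stack height shows how quickly the "full at 2,300 cubic feet" rule breaks down for different product.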

Reducing Damage

We could spend a lot of time on this, but for now suffice it to say that we would like to load the trailer so that product is less likely to be crushed, to shift as the vehicle corners, or to land in a heap at the front when the driver must, necessarily, brake.  This puts additional constraints on where product can go in the vehicle, again reducing the weight you can carry and/or the volume you can fit into the available space.

Equipment  limitations

If the pallet handling equipment (for either shipper or receiver) can't stack pallets or can't handle them turned, you will necessarily lose a lot of payload capacity.  Unless these are very short trips where the handling-labor savings cover the extra freight cost, you need to invest in warehouse equipment.

Not all trucks/trailers are the same.  By design they can be very different: tractor units weigh anywhere between 11,000 and 20,000 lbs, and trailers can vary by a few thousand lbs too.  Even equipment of the same make, model and year can differ depending on its setup (kingpin position) and the addition of aftermarket parts.  (Mount a new fuel tank too near the front and watch it use up the limited capacity you have on the steering axle.)  If weight is an issue for you, work with your carriers to understand what equipment they are bringing in.  Lightweight equipment is worth more to you; heavyweight equipment should be avoided or contracted at a rate low enough to offset the loss in carrying capacity.

Pulling it all together

Each of these sets of restrictions (weight, space, damage and equipment) is complex: accurately modeling any of them is a challenge.  Depending on what you load: heavy or light product, slightly oversized, stackable or not, the factors that constrain you continually move.  And beyond modeling it, you need to optimize: to find the selection and layout of product that maximizes your vehicle loading.

My Take

If you're lucky enough to have consistently sized, palletized product of consistent density that is consistently stacked and loaded on consistent carrier equipment, you may be able to build a simple rule that really does max out your truck loading.  Good for you!  For the rest of us, such "rules" are poor guesses at best and can leave a lot of money on the table.

You are NOT going to build this in Excel unless you have deep functional knowledge, very advanced skills in mathematical optimization and the ability to program your own optimization code.  I have built a load optimization tool (it's a weakness; I like to know how things work).  I have also built my own tools for inventory optimization, neural-network modeling, genetic optimization, forecasting and many other needs.  Some of these tools are still in production use today; others were essentially learning opportunities.  In this instance, I chose to buy the software because it has richer functionality than what I built myself.

(On a technical note: application-specific, heuristic optimization routines solve these problems well and quickly.  Mixed-Integer Programming can get you most of the way, but personally I can't see how to embed some of the damage-reduction ideas into a linear objective function, and as the heuristic works so well you don't need the overhead or complexity of integrating a math-programming tool.)

Any load building “optimizer” that asks you to specify a maximum weight, cube or number of pallets is really just automating these approximate rules rather than helping you truly max out the load.  An optimizer implemented as a stand-alone package is interesting but not really useful: to be effective you need this capability integrated into order processing, deployment and transportation planning.

You need the right tool for the job.  These tools exist, they work, and it's not worth your time or effort to write them again.

There are a number of tools in the market that work in this space.  Google "load building optimizer" to see some.  I have not reviewed them all, far from it, but I can tell you that not all "optimization" is the same.  Having "optimize" in the sales literature does not mean the tool will do a great job for you or that any actual optimization is really taking place.

If you want a quick recommendation I suggest you talk to Transportation | Warehouse Optimization.  They have a great load building tool and can extend it further into optimizing case-pick routing, pallet builds and shipping locations.

Let's assume you are doing a reasonably good job today without a load optimizer.  What would an extra 5% off your freight spend be worth to you?  Enough to invest in the right tool for the job?

A final thought

These rules-of-thumb can be very persistent.  Having implemented a system to drive increased payload coming out of one manufacturing site, I was perplexed some months later to see that the load factor had shrunk back to where it started.  It turns out that the warehouse supervisor at the plant was adjusting each and every load plan manually to fit with his interpretation of the “rules”.

Do you need daily Point of Sale data? Do you like selling more product?

Most people report on their Point of Sale data in weekly or perhaps even monthly buckets.  If you are interested in a long-term trend or annual seasonality, that's OK, but if you really want to know what's going on, to ensure you have product on shelf and promotions running when your target shoppers are in store, you need daily POS data.  Don't believe me?  Let's look at an example...


I've created the data for this example so that there are no issues with confidentiality, but it closely represents reality, and I have seen far more extreme versions of this in real data.

First let's look at 2 years of weekly Point of Sale (POS) data.  Anything interesting happening here?


I think we can say that there is no obvious seasonality, and there seems to be no trend: longer-term sales are neither growing nor declining.  There appears to be no pattern to it: sales oscillate up and down in a fairly random-looking manner, sometimes up or down for 2 weeks at a time, sometimes 1, sometimes 3.  It looks like fairly random noise, right?

Now let's look at the same data but in daily buckets (click on the graphic to see the larger version).



There is a lot going on here!  Before you dismiss it as noise, look a little closer.  Can you see a repeating monthly pattern?  Can you see a spike in sales near the beginning of the month and another a few days later?  That's worth looking into.  How about a repeating weekly pattern: much harder to see but retail outlets generally sell more on Friday/Saturday/Sunday than on other days of the week - chances are that is causing some of the oscillation too.

Let's look at this visually for day of the week.  This chart shows sales by day of the week indexed to the "average day".



On Sundays, we see sales of almost 20% more than average, on Friday and Saturday slightly more and for the rest of the week about 80% of average.

Now, let's look at it by day of the month.



This is still showing indexes relative to the "average day", but in comparison to the day-of-the-week visual above these spikes are big!  There are clear spikes on day 1 and day 10, increased sales for 2-3 days after each spike, and perhaps a small lift just before each spike.

What could be causing such spikes?  How about SNAP (see What drives your sales? SNAP?)?  Or perhaps WIC, a similar US government program that subsidizes foodstuffs targeted at children?  Any event distributing funds on a routine monthly calendar could cause such a pattern.

Looking at these charts separately by day of the week and then by day of the month, we are trying to ignore the impact of one factor while reviewing the other.  Doing so introduces a bias into the output: by pure luck, some days of the week will coincide a little more often with the day-of-the-month spikes and their average will increase.  (In this case, Thursday gets an extra spike day; Sunday and Tuesday get one less than average.)  What we really need is a tool that can examine the impact of both variables at the same time and tease them apart cleanly.  A regression model does this quite effectively.  I'll not get into how it is constructed today; let's just review a few results.

The regression model predicting sales using just the day of the week and day of the month as inputs explains 83% of the variation in this data.  That is a truly staggering result for something that at first glance looked like random noise.
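A model of this kind can be sketched in a few lines of Python with ordinary least squares on day-of-week and day-of-month dummy variables.  The data below is synthetic (my own stand-in for the confidential series in the charts), and the effect sizes are invented, but the mechanics are the same.

```python
# Fit daily sales on day-of-week and day-of-month dummies with OLS.
import numpy as np

rng = np.random.default_rng(42)
n_days = 730
dow = np.arange(n_days) % 7               # day of week, 0..6
dom = (np.arange(n_days) % 30) + 1        # simplified day of month, 1..30

# Synthetic "truth": a weekend lift plus SNAP-like spikes on days 1 and 10.
true = 100.0 * (1 + 0.2 * np.isin(dow, [0, 5, 6]))
true = true * (1 + 1.0 * np.isin(dom, [1, 10]))
sales = true + rng.normal(0, 10, n_days)

# Design matrix: intercept + 6 day-of-week dummies + 29 day-of-month dummies.
X = np.column_stack(
    [np.ones(n_days)]
    + [(dow == d).astype(float) for d in range(1, 7)]
    + [(dom == d).astype(float) for d in range(2, 31)]
)
beta, *_ = np.linalg.lstsq(X, sales, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((sales - pred) ** 2) / np.sum((sales - sales.mean()) ** 2)
print(f"R^2 = {r2:.2f}")   # high: most of the "noise" is calendar pattern
```

Exactly as in the example, a series that looks like noise in weekly buckets is largely explained by two calendar variables once you work at daily grain.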

The first chart below shows the daily sales data (blue) with the model prediction (yellow) on top.  The model prediction is very close to actual sales, with the exception of a few troughs.  To help mark these outliers, the second chart, plotted on the same time-scale, shows where we have the biggest gaps between predicted and actual sales.  There are 6 occasions with relatively large errors.  Any guesses as to what these are?



This data reflects US retail stores, which are closed, or largely empty of shoppers, at Christmas (end of December), Thanksgiving (end of November) and Easter (April).  Thanksgiving and Easter are not on fixed dates, so they do not fall at exactly the same point on the timeline, but if you look up the actual dates for 2010/2011 you'll find they coincide perfectly.  These are clear outliers and we can safely remove them from our data (or build a model that flags them correctly).  Removing them further improves the fit: we can now explain 87% of the variation in sales from just the day of week and day of month.

What does this mean to you?   If your products are heavily impacted by hidden events like this, you may be losing substantial sales opportunities.  

In this case the spike days run about twice the sales volume of other days, and I have seen examples far more extreme than this.  Even at 2 times normal sales, it is likely that you did not have product on-shelf throughout these spikes.  It's very possible you do not really know how high the spike could be.

If the retailer's forecasting system is based on weekly buckets (and I'm betting it is), it does not know this problem exists; it sees only random noise.  But these spikes can be predicted with surprising accuracy, as in the model above.  If the retailer's forecasting/replenishment system cannot be fixed, you will need over-rides every month to place inventory at the store in advance of the peak: specially created seasonal profiles, forecast adjustments, script orders, promotional orders, stock-policy exceptions, etc.

If shelf planograms are built on average sales expectations, there may be a need to increase facings for more volatile products or to add temporary positions for them.  (I love this analogy for the pain that can be involved in working with averages, so I just had to include it again.)



Finally, consider your promotional plans.  You now know more clearly when shoppers are in the store spending money.  Promotional calendars based on a weekly plan may have your promotion kicking off just after the peak shoppers have left for the month.

My take

Aggregating point of sale data to weekly buckets, in the presence of patterns that repeat on a non-weekly basis, is a superb way of hiding sales patterns from you, failing to keep product on the shelf, and losing sales.

Handling this properly means working with more data than you may be used to: at least 7 times the volume of weekly data, and 8 times if you must also capture the weekly snapshot.

If you are dealing with programs like SNAP and WIC, which vary in their implementation across the country, you will also need store-level detail and, as not all products are equally sensitive to any program, item-level data too.  This can be a lot of data: 1,000 products at 1,000 stores for 3 years of daily data is just over 1 billion data points (1,000 * 1,000 * 1,095).  You will not manage this in Excel or in Access other than for very small samples of data.
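As a quick sanity check on the scale involved, here is that arithmetic in Python; the 1,048,576 figure is the worksheet row limit in current versions of Excel.

```python
# Rough sizing of the store/item/day dataset described above.
EXCEL_MAX_ROWS = 1_048_576   # worksheet row limit in modern Excel

stores, items, days = 1_000, 1_000, 3 * 365
data_points = stores * items * days

print(f"{data_points:,} data points")                     # 1,095,000,000
print(f"about {data_points // EXCEL_MAX_ROWS:,}x Excel's row limit")
```

Even one row per store/item/day combination is roughly a thousand full worksheets, before you add any measures or model output.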

You need the right tools for the job (Data handling - the right tool for the job): a Demand Signal Repository (DSR) that can handle daily data, or a very well optimized database, AND enough of an analytic engine to run correlation/regression analysis against that mass of data (Bringing your analytical guns to bear on Big Data ...).




If you're ready to get started - call me.














Better Point of Sale Reports with "Variance Analysis": Velocity, Distribution and Pricing... oh my!


Routine, weekly point-of-sale reports tend to look very similar.  For various time buckets (last week, last 4 weeks, year to date) we total sales in both currency and units, then compare to prior year.  Add in a few more measures for retail pricing, inventory or service-level metrics and you may struggle to make it fit on a page.  And from a CPG standpoint, POS reporting is only half of the story: a CPG's sales targets are based not on POS but on shipments to the retailer.  How can you get a good overview of POS and reconcile it with shipments, all in one report?


Point of Sale revenue reporting

Your basic report probably looks something like this (click to view):


It's not pretty, is it?  It's quite hard to pull useful summary data from, and we haven't even tried to include CPG shipment data yet.

To improve on this we're going to use an accounting approach called "Variance Analysis" and tweak it to fit our needs.  Variance analysis takes a revenue or cost difference and splits it into components driven by volume and pricing.  Typically this is used to compare budget or plan to actual, but we'll use it to compare year-on-year sales.

The calculations are fairly simple, but if you would rather avoid 7th grade algebra, just trust me and skip ahead.
Volume Variance: CurrentPrice * (CurrentVolume - PreviousVolume)
Price Variance: (CurrentPrice - PreviousPrice) * PreviousVolume
Add these together to get a Total Variance.  Now let's prove this really does explain the difference between Current and Previous sales.
Expand the terms (using introductory algebra) and we get: 
TotalVariance =
                  CurrentPrice*CurrentVolume
                - CurrentPrice*PreviousVolume
                + CurrentPrice*PreviousVolume
                - PreviousPrice*PreviousVolume 
As you can see, the 2nd and 3rd terms "cancel out" leaving us with: 
TotalVariance =
                  CurrentPrice*CurrentVolume
                - PreviousPrice*PreviousVolume 
Which is exactly the difference in Sales Revenue we wanted to explain.  Finally, express the Total, Price and Volume Variances as a % of the same denominator (last year's POS revenue) to get percentage values that are additive, like this:
              7% increase due to incremental unit volume
           - 2% due to a decrease in retail pricing
              5% net change
Easy, right?

Let's make it a little more useful and split the volume variance into 2 parts: one for variance driven by changing distribution (the number of stores we sell through) and one for the rate of sale in each store.
VolumeVariance_Distribution = CurrentPrice * CurrentSalesRate * (CurrentDistribution - PreviousDistribution)

VolumeVariance_SalesRate = CurrentPrice * PreviousDistribution * (CurrentSalesRate - PreviousSalesRate)

Add these two variances together, multiply out the terms, then simplify the result and you get back to the formula we specified for the volume variance previously.  (Note that both parts are valued at CurrentPrice, matching the volume variance above; value them at PreviousPrice and the pieces will not reconcile.)
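The calculations above can be sketched in a few lines of Python.  The function and variable names are mine, and the worked numbers are invented; the point is that the pieces reconcile exactly to the total revenue change.

```python
# Minimal sketch of the variance-analysis calculations described above.

def price_variance(cur_price, prev_price, prev_volume):
    # revenue change attributable to the price move, at previous volume
    return (cur_price - prev_price) * prev_volume

def volume_variance(cur_price, cur_volume, prev_volume):
    # revenue change attributable to the volume move, at current price
    return cur_price * (cur_volume - prev_volume)

def volume_variance_split(cur_price, cur_rate, prev_rate, cur_dist, prev_dist):
    """Split the volume variance into distribution and sales-rate parts."""
    distribution = cur_price * cur_rate * (cur_dist - prev_dist)
    sales_rate = cur_price * prev_dist * (cur_rate - prev_rate)
    return distribution, sales_rate

# Worked check that the pieces reconcile to the total revenue change.
prev_price, cur_price = 2.00, 2.20     # retail price
prev_dist, cur_dist = 1000, 950        # stores in distribution
prev_rate, cur_rate = 50.0, 49.0       # units per store (velocity)
prev_vol, cur_vol = prev_dist * prev_rate, cur_dist * cur_rate

total = cur_price * cur_vol - prev_price * prev_vol
pv = price_variance(cur_price, prev_price, prev_vol)
vv = volume_variance(cur_price, cur_vol, prev_vol)
assert abs(pv + vv - total) < 1e-9

dist_var, rate_var = volume_variance_split(
    cur_price, cur_rate, prev_rate, cur_dist, prev_dist)
assert abs(dist_var + rate_var - vv) < 1e-9
print("price + volume variances reconcile, and so does the volume split")
```

Divide each variance by last year's revenue (prev_price * prev_vol) to get the additive percentages used in the waterfall charts below.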

We can represent these results graphically as a waterfall chart.

  • A waterfall chart explains the gap between 2 values, in this case last year's POS sales and this year's POS sales.  
  • Typically these are built as column charts, but I've laid this out as a bar chart so that it's easier to read the bar labels.  
  • To make it easier to see the changes, I have adjusted the horizontal scale so that it no longer starts at 0.  
  • The absolute impacts are shown as labels in each bar.  
  • The percentage change (relative to last year's sales) is shown in each bar label.  
  • Allowing for rounding errors, these % changes are additive (16.6 - 9.7 - 0.6 = 6.3)


Starting at the top, this chart shows Last years POS Sales and then each additional bar shows incremental changes that explain the gap between last year's and this year's POS Sales.  Red bars are for negative values, green bars positive.

So, this says:

  • POS Sales were hit hard by a significant loss of distribution (-9.7%)
  • Overall, this was more than offset by the retail price increase (16.6%)
  • Despite the retail price increase, store-level velocity was effectively unchanged
  • Action: If the decision to reduce distribution was made in anticipation of higher prices and slower sales, there is a good argument to have it restored
  • Action: This product appears to be relatively insensitive to price; a more detailed price-elasticity study may confirm this, allowing a change in pricing strategy to further increase revenue

Tying CPG Shipment revenue and POS sales together

An almost identical approach lets us calculate price and volume variances for shipment information.  In this instance the price is the cost to the retailer and volume comes from shipments rather than POS but other than that it's still just basic price and volume variances.

Now comes the challenge of connecting these two sets of variances:

To do this, we are going to split the Volume driver for shipments into 2 parts: 
  • Shipment volume that is a direct result of POS unit sales (volume)
  • Shipment volume that did not support consumption and just resulted in changes to the Retailer's inventory level.  (Ship too much and Retailer inventory increases: ship too little and Retailer inventory falls.)
Calculation is simple: the first term is just the POS volume; the second is the difference between shipment volume and POS volume for the period.
Note: In reality, we may need to make some allowances for other sources of consumption (theft, damage, loss) or procurement (returns, diverting) but we'll ignore these for now.
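That split can be sketched the same way as the POS variances; the function name and numbers are mine, and both parts are valued at the current cost to the retailer, matching the volume-variance convention above.

```python
# Shipment volume variance split into the part explained by POS
# consumption and the part that only changed retailer inventory.

def shipment_volume_variance_split(cost, cur_ship, prev_ship,
                                   cur_pos, prev_pos):
    """Both parts are valued at the current cost to the retailer."""
    pos_driven = cost * (cur_pos - prev_pos)
    inventory_driven = cost * ((cur_ship - cur_pos) - (prev_ship - prev_pos))
    return pos_driven, inventory_driven

# Example: shipments grew 10 units year on year but POS grew only 5,
# so the other 5 units went into retailer inventory.
pos_part, inv_part = shipment_volume_variance_split(
    cost=1.50, cur_ship=110, prev_ship=100, cur_pos=95, prev_pos=90)
total = 1.50 * (110 - 100)
assert abs(pos_part + inv_part - total) < 1e-9
print(pos_part, inv_part)   # 7.5 7.5
```

A positive inventory-driven part flags shipments that did not support consumption, which is exactly the warning sign called out in the chart commentary below.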
Here is the result graphically.  I have laid it out in exactly the same way as the POS Sales Variance Analysis.

At the top level it does not look too bad: a year-on-year loss of 1.7%.  In reality this is the result of:
  • a significant loss of POS volume (9.2%)
  • a significant increase in retailer inventory (6.4%)
  • a small retailer cost increase (1.4%)
  • Action: retailer inventory cannot continue to increase.  Unless POS volume is restored, prepare for a continuing 9% loss in volume and a 7.6% (9 - 1.4) loss in revenue.
We could take this further, of course, and split the POS volume component into two parts: one driven by velocity changes and one by distribution.  My preference would be to display these 2 charts side by side for each division, category or brand using XLReportGrids.

I think this is a big improvement but it's still Analytics-Lite.  We have a much clearer idea of WHAT happened but leave it to the reader to add WHY it happened.   Predictive Analytics can help us get much further down that path.

This should make preparing for those weekly or monthly sales meetings go a little easier :-)  Of course if your numbers aren't meeting plan, you're on your own !  

If you would like the Excel file that shows these calculations you can download it here or just drop me an email








How to save real money in truckload freight (Part II)

In the first post in this series (Part I) I looked at the opportunities to reduce freight cost from traditional transportation management, but the really big opportunities may lie outside of your transportation team's control.  In this post, we'll look at some additional (and very possibly larger) opportunities.

By the time a request hits the Transportation Team the damage has been done.  It’s already been decided that something needs to move, how it needs to move and when it must depart/arrive.  This is where you can really save.

Don’t ship things you don’t need to.

This may be obvious, but one of the best ways to save money on transportation is to do less of it.  How much of your transportation is driven by real need?  How much could you avoid by tightening up your forecasting process and inventory policies (see [Inventory modeling is not "Normal"] and [Inventory modeling in action])?  What about making better decisions about re-deployment ("balancing" safety stocks across DCs)?

Don’t expedite when you don’t need to.

Air freight is very expensive; expedited (team) truckload freight is better, but even truckload freight that must move NOW will probably not give you time to find the best rate.  With a little lead-time you can save a lot of money.  Does it really need to be there by 9:00 am tomorrow morning?  Even if the answer right now is ‘Yes’, what can we do to avoid getting into that situation tomorrow?

Bypass steps in the chain

Does your freight shoot like an arrow from production to shelf?  No?  I thought not.  If you were to track a case from production through a manufacturer’s DC to a retailer’s DC to store, it has probably doubled back at least once.  If you have sufficient volume (and lead-time) to skip a step, you can save on both freight and handling expenses.  Optimally sourcing each order requires considering what it will cost to source (and replenish) that order from ALL viable shipping locations, not just the default location.  This is a great analytic/optimization problem, but to be successful you need it embedded in your order-processing system.

Managing peaks in demand

Your transportation team will typically use a number of carriers on each lane they manage, and the wide variation in freight rates for these carriers may surprise you.  Cheaper rates are associated with carriers that really want that volume, perhaps because it naturally fits with their network, filling trucks that would otherwise travel empty.  Once that capacity is used up, they won’t want to cover any more freight on that lane today; it would cost too much to position the equipment.  The more erratic your demand, the more likely that you have to tender loads to relatively expensive carriers or abandon your plan altogether and buy freight on the open (“spot”) market.

Can you have any control over these peaks in demand?  You bet!  You can handle this within your own network relatively easily.  When shipping to customers, retailers typically have shipping windows within which their orders must be received: ship a few loads a day earlier, a few loads a day later, smooth out the demand within a lane, and stop having to beg for capacity as often.

Fill those trucks – really fill them

How full is a full truck?  One particularly bad guideline I encountered said a truck was full when there was 38,000 lbs of product in it.  If I can find a way to legally load 46,500 lbs in the trailer, it’s as though 8,500 lbs of product just shipped free, effectively saving 18% (8,500/46,500) in freight cost.
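The saving arithmetic is worth making explicit; a minimal sketch using the numbers quoted above:

```python
# Extra payload that ships at no marginal freight cost lowers the
# effective freight cost per pound.
rule_of_thumb_lbs = 38_000     # "full" under the bad guideline
max_legal_lbs = 46_500         # what can actually be loaded legally

extra_lbs = max_legal_lbs - rule_of_thumb_lbs        # 8,500 lbs
saving = extra_lbs / max_legal_lbs                   # fraction of cost/lb saved
print(f"{saving:.0%} lower freight cost per pound")  # 18%
```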

OK, this is a very extreme case to make a point, but why would anyone plan to 38,000 lbs?  Because the actual constraints around load building are complex, relating not just to product weight but to the distribution of weight in the rig and to the 3D jigsaw puzzle of physically fitting product in the available space while avoiding damage in transit.

In the case of the 38,000 lb rule of thumb, some of the product was low density and hit space limits before it hit weight restrictions.  Not all the product had that problem, but the rule was applied across the board.

This needs a good analytic/optimization tool to get right, but the savings can be substantial.  There will be more on this in a subsequent post.  What's this worth?  The range varies a lot, but perhaps up to 5%.

Optimize your network

Every time there is a significant change to your network, an acquisition, a divestiture or just significant growth/decline it's worth running the analytics again to make sure your distribution network is in tune with your needs.  Do you need to add, remove or expand storage locations?  Is it time to change production policies on which products are made where?  For smaller changes in your supply chain, routinely fine-tuning the product flow to avoid unnecessary storage and handling can yield great results. An optimization model can include manufacturing, warehousing and transportation costs to find the lowest cost option overall.

Savings here can be huge (if changes have made your network seriously inappropriate for the supply chain it supports), but even ongoing fine-tuning is worth a few percentage points.

My take

There can be substantially more money to be saved in transportation from changes made outside of the transportation team than within it.  Look to better  forecasting, inventory-optimization, deployment, order-processing, maximizing truckloads and network optimization to save real money. 

What do you think?  Have I missed something?  Does this fit with your experience?

How to save real money in truckload freight (Part I)

How can you save real money in truckload transportation?   In this post, let’s look at the areas that your transportation team manages directly.

Transportation procurement

I’ve seen a number of supply-chain consulting projects conclude (wrongly) that concentrating purchasing power for transportation into fewer hands would drive significant savings, of the order of 10%.  Are there economies of scale in the truckload market? Yes, but primarily at a lane (origin-destination) level: if you are buying freight for Portland to Los Angeles you do NOT get a better rate because you also want to move freight from Cleveland to New York.

You can save some money in administering freight by concentrating it into one team, and you may be able to drive more rapid change in management processes or new systems, but economies of scale in purchasing?  I don’t think so.

On the other hand, in a recent post [How much money can you save from a Transportation Procurement Rate-Bid?] I looked at a study from C.H. Robinson and Iowa State researchers which concluded that regular freight bids can reduce your freight cost, to the tune of about 3%, relative to a company that does not conduct regular bids.  That number looks right on the money to me: absolutely worth doing, especially if your freight is a large proportion of supply chain cost, but it's not really BIG.

I also posted on the challenge of getting good benchmarks for truckload freight [...the challenge of transportation rates] and highlighted the CHAINalytics consortium that provides both excellent benchmarks AND quantifies various strategies for driving rates lower.  For the totality of your freight bill, there may be another 1-2 percentage points to be had there too.

Dedicated routes / dedicated equipment

If you have enough volume it may be possible to set up dedicated equipment and routes and if you can keep these assets busy it will cost substantially less than if you contract separately for 1 way loads.  I have seen very few opportunities to do this cost effectively: it saves money in the few lanes where it makes sense but it’s probably not going to move the needle in terms of overall costs.

“Carrier friendly” freight / locations

During the last freight capacity crunch, I heard a lot about “carrier friendly freight”.  Think in terms of

  • Loads that are quick to load/unload
  • Shipping locations that are quick to get through, with no waiting
  • Quick payment cycles

This probably helps, but I have not yet met anyone who can put a quantifiable savings number to any of these “carrier friendly” initiatives.  I suspect that in total they may have a small impact; I’m unsure whether it offsets the cost of creating it.

TMS optimization

Transportation management systems now include a range of algorithms to help with optimization.  Pulling together smaller orders into single shipments that deliver at multiple stops (“stop trucks”) is a great example of where such systems can drive value.   Or, perhaps it can automatically switch modes for you from truck to inter-modal or rail when you have sufficient lead-time. 

There is real money to be had here, but it does take a lot of setup.  All the constraints around carriers and shipping/receiving locations need to be embedded in the system and maintained on an ongoing basis or it will generate loads that can’t be shipped, moved or received.

Also, understand that the optimizer reviews the options available to you to find the “best” or at least a “good” solution automatically.   If you have given the system relatively few options to consider (by locking down when and how loads must ship), it will have relatively little opportunity to find savings.

My take

A superb transportation purchasing team may be able to save 5% on cost in comparison to a relatively weak team.  To save 10% requires a comparison to almost complete incompetence or a congruence of market and macro-economic forces that will unravel within 12 months.  (You got “lucky”, but it won’t last)

What do you think?   Have you seen opportunities I've missed?

Check out Part II for additional opportunities that may lie outside your Transportation Team's control

How much money can you save from a Transportation Procurement Rate-Bid?

How much money can you really save from a transportation procurement bid? Probably not as much as you might like, but enough to pay for the bid with a good return.

I recently returned from the CSCMP conference in Atlanta where I attended a great session, jointly presented by folks from C.H. Robinson and researchers from Iowa State University.  They have taken a very similar statistical modeling approach to the one I covered in a recent post [..the challenge of transportation rates] to answer questions around the impact of transportation bids and this result is in the public domain. 

You can download a copy of their white paper "Stale Rates Research: Benefits of Frequent Transportation Bids” here.   This study uses relatively little data (~$1 billion in spend) but it all comes from one Transportation Management System (TMS) which should allow for cleaner and richer data up front.

To skip to the chase, the team found that there is an immediate impact to transportation rates from conducting a bid, but:

  • The immediate rate reduction is relatively modest: $15 per shipment on an average lane cost of about $900 (~1.7%).
  • The value of the cost reduction decays quickly and is completely gone in about 12 months as conditions change, individual lane rates are adjusted or the lowest-rate carriers take proportionally fewer loads.
  • Averaged across a year, the immediate impact of a freight-rate bid is only about $5 per shipment.

That does not seem like much on an average $900 spend per shipment.

However, they also found that there is a consistent discount associated with frequent (at least yearly) transportation bids that does not decay.  Presumably this reflects a level of comfort from carriers that any rates they agree to now can be revised in a reasonable time-frame.  Shippers that hold at least annual freight bids:

  • save, on average, an additional $25 per shipment on top of the immediate impact
  • save about $30 per shipment in total (about 3.3% on the average $900 shipment)

My take

If you have a billion dollars in freight spend, 3.3% is about $33 million.  As freight bids seem to cost tens of thousands of dollars, and certainly not millions, that looks like a good return, and I imagine C.H. Robinson will see some of that business to help repay their investment in this study.

Then again, if anyone is telling you that their spiffy new procurement or reverse-auction system is going to help you save 10%-15% on your freight bills, you may not want to commit to that saving with your boss.

I think 3% freight savings is right on the money: definitely worthwhile, but perhaps not the biggest thing you should be working on in the supply chain.  What do you think?

Supply Chain Network Optimization: the challenge of transportation rates

Supply Chain Network Optimization can yield major savings but getting clean data to model with (particularly transportation rates) is a major challenge.

About 10 years ago, I started work on a set of supply chain network optimization projects: finding the optimal placement of factories and warehouses to minimize the cost of production, warehousing and transportation.  The optimization work is analytically hard, but much of the challenge is in getting clean, complete data prior to modeling.  For transportation rates in particular, how do you get reasonable rates for lanes you have no history on?

Let’s say we’re exploring the possibility of putting a new facility in Indiana. What does it cost to get from Indianapolis to Los Angeles or to Portland? Well, we don’t know, we’ve never done it. The transportation department could “guess” a rate but just how wrong could they be? They could ask carriers for rates but as this is not (at least yet) for real demand how accurate would that be?

(As an aside, optimization models are superb at finding data errors that make costs too low. Tell the model that you can move goods for free on a lane and you’ll find that lane gets used - a lot.)

One approach to this problem simply estimates freight cost from mileage.  We do at least know the mileage between 2 points.  That should be a reasonable basis for optimization… right?  Well no, it's not that easy.

Transportation rates are driven by mileage but by many other things too. As you might expect, much of this is related to supply and demand:
  • There are areas of the country that are net importers (lots of freight going in, very little coming out).  Carriers charge more for freight going into these locations, knowing they will have difficulty finding a paid load coming back out, and they offer big discounts on freight leaving them.
  • Similarly there are areas that are net exporters.
  • Some products require specialist equipment (like refrigerated trailers) that is essentially a separate market with its own structure of supply and demand. 
  • There are some (though surprisingly limited) economies of scale at a lane level 
Transportation rating is complex, but to do network optimization well you need rates for thousands of lanes - how can you get them, and get them right?

At the time I was working for a large manufacturer with a fairly large fraction of $1 billion in freight spend.  They did not source product everywhere or ship it everywhere, but they had reasonably broad coverage.  To enable quick turnaround of rates for modeling, I used the historical data we did have (freight rates by lane), a few simplifying assumptions and an econometric model (multivariate regression) to quantify the impact of these factors and to predict freight rates for all the lanes we had no history on.
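To make the mechanism concrete, here is a minimal sketch in Python.  The lane history, the single "net-importer" flag and the resulting coefficients are all invented for illustration - the real model used many more factors and far more data:

```python
import numpy as np

# Hypothetical lane history: [miles, 1 if destination is a net-importer region, observed rate ($)]
lanes = np.array([
    [ 500, 0,  780],
    [1200, 0, 1460],
    [ 800, 1, 1310],
    [1500, 1, 2150],
    [ 300, 0,  590],
    [1000, 1, 1560],
])

# Design matrix: intercept, miles, net-importer flag
X = np.column_stack([np.ones(len(lanes)), lanes[:, 0], lanes[:, 1]])
y = lanes[:, 2]

# Ordinary least squares: rate ~ base + ($/mile) * miles + net-importer premium
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict a rate for a lane we have no history on: 900 miles into a net-importer region
predicted = coef @ [1, 900, 1]
print(round(predicted))
```

A production model would add equipment type, origin-region effects, fuel and so on, but the mechanism is the same: fit on the lanes you know, then predict the lanes you don't.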

This model was very successful, explaining over 90% of the variation in the historical data, and we used the results for some time in optimization modeling work.  But (and it's quite a big 'but'), as history did not have as broad a coverage as we really needed, I had to make some assumptions to make this work.  We really needed more data, much more data, to do this well.

Some time after, the folks at CHAINalytics introduced me to their "Model Based Benchmarking Consortium".  The idea was to pull together freight data from a group of companies, pool it in one database and build econometric models to explain what drives freight costs.  I had to like the approach :-) - they collected much more freight volume than I could and, in doing so, built better models.  The consortium has continued to grow since then, the modeling approach is continually refined, and they can routinely test new ideas around what drives freight costs, like:
  • How does the structure of your fuel surcharge program impact your non-fuel costs? 
  • Is it better to have a “core-carrier” program with fewer carriers hauling the majority of your freight or manage a wider diversity of carriers? 
If you are part of the CHAINalytics consortium, you get access to their results and an opportunity both to benchmark your own freight AND to quantify and test which strategies may help you drive costs lower.  If you are not part of the consortium, or something like it, perhaps you should be?


My take

If you're in the market for Supply Chain Network Optimization, talk to your analytic providers about how they source and validate the cost data that feeds the model.  Once you are looking at model results, it's all too easy to forget how issues with data inputs or model structure could mean the models are lying to you.  Getting freight rates wrong can cost you a lot of money.

I have no financial interest in whether you join the CHAINalytics consortium, or anything similar, but I do really like this model based approach to benchmarking.   If you can't join such a program you can still do a lot better than guessing by "sucking on your own fumes" and building similar models using your own data - I did.

How do you handle this problem?   Have you found a better solution?

Order Optimization: Smaller (standard) order quantities, more full-pallet orders AND reduced retailer inventory

If you work in the Supply Chain between CPG and Retail this probably sounds too good to be true, but stick with me and I'll show you a win:win opportunity.


Background

When CPG product leaves a manufacturing facility, much of it is packed into corrugated cardboard cases.  Multiple cases of the same product are placed tightly together into layers on top of (typically) wooden pallets.  (To hold them together in transit, they are then completely wrapped in a stretch film not unlike the plastic film you may use to cover leftovers in your refrigerator.)


Ideally, this is the last time the CPG company wants to touch these cases, as manual handling is expensive.  If customers order whole pallets, the product can be quickly and cheaply shipped by operators using lift trucks and just as cheaply unloaded at the customer's facility.

When a customer orders in quantities other than full pallets some additional handling is required.  There are varying levels of automation available though much of this is manual: a warehouse operator will physically "pick" individual cases and "pack" them onto new pallets.  When the customer receives this product they have a more difficult, expensive and error-prone job to figure out what they have received and put it away.

So, why do customers not just order full pallets?  Imagine that the average weekly demand for a product at a customer warehouse is 4 cases.  A typical pallet holds 48 cases, so a full pallet carries enough inventory to last about 12 weeks.  That's a lot of inventory, and the amount of money a Retailer has tied up in inventory is a big driver of their profitability and stock price.

So, the standard order quantity agreed between a Retailer and a CPG is a big driver of handling costs and Retailer inventory cost.
  • The CPG wants to ship more orders as full pallets (and reduce their cost of handling).  
  • The Retailer might appreciate a reduction in handling costs when receiving full pallets, but they are heavily focused on reducing their own inventory, a key driver of their financials and stock price.  They can't do this by ordering more than they need.
This lack of alignment means that discussions around "standard order quantity" (I'll call it "SOQ" from here onwards) often end in a win:lose situation.


Finding a win:win 

So, how do you find a win:win?  There are 2 keys to this:




  • Demand variability: simply put, the Retailer sometimes needs less product than average and sometimes they need more.  This variability can be large - individual orders can easily be double or even triple "average" demand.  Ignoring this variability causes big problems.
  • Ordering in multiples: let's look at how the order-processing system handles orders when demand is greater than the SOQ.  Many systems order in multiples: if the SOQ is set to 3 cases, the only orders you will ever see are for 3, 6, 9, 12, 15 and so on.  (FYI - even if you don't know the SOQ for certain, you can make a very good guess by looking at order history and calculating the "Highest Common Factor".)
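That highest-common-factor trick is easy to automate.  A minimal sketch, using an invented order history:

```python
from functools import reduce
from math import gcd

def infer_soq(order_quantities):
    """Guess the standard order quantity as the highest common factor
    (greatest common divisor) of the order sizes seen in history."""
    return reduce(gcd, order_quantities)

# Orders observed for one product - all multiples of a hidden SOQ of 3
print(infer_soq([6, 9, 3, 12, 6, 15]))  # -> 3
```

In practice you would run this per product:location and sanity-check the result against case and pallet sizes.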
Now with these 2 keys in mind, consider a product with not quite enough average demand to justify full pallet ordering:

Typically, CPGs will push to round up to full pallets (a "win" for the CPG but a "lose" for the Retailer).  The Retailer may well stand their ground and insist on an SOQ of no more than X days of average demand (a "lose" for the CPG).

Consider this option: what happens if instead you reduce the SOQ to half a pallet?
  • Sometimes demand is low, and orders actually are for half a pallet.
  • Often demand will be higher, triggering the order-processing system to order multiples of the SOQ - probably a full pallet, or possibly 1.5 or even 2 pallets.
  • As the ordered quantities more closely match demand, Retailer inventory goes down.
The difference between setting an SOQ that will easily (and often) round up to a full-pallet quantity and routinely ordering "almost a full pallet" can be significant, both in handling costs and in Retailer inventory.
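You can see the effect in a toy simulation.  The pallet size, the demand numbers and the "order in SOQ multiples, rounding up" logic below are all illustrative assumptions, not data from a real system:

```python
import math

PALLET = 48  # cases per pallet (illustrative)

def simulate(weekly_demand, soq):
    """Order in SOQ multiples, rounding up to cover demand.
    Returns (share of orders that are whole pallets, avg cases ordered above demand)."""
    orders = [math.ceil(d / soq) * soq for d in weekly_demand]
    full_pallet_share = sum(o % PALLET == 0 for o in orders) / len(orders)
    avg_excess = sum(o - d for o, d in zip(orders, weekly_demand)) / len(orders)
    return full_pallet_share, avg_excess

demand = [18, 52, 31, 44, 25, 60, 37, 29]  # weekly demand swinging around ~37 cases

print(simulate(demand, 12))           # small SOQ: low excess, few full pallets
print(simulate(demand, PALLET // 2))  # half-pallet SOQ: mostly full pallets, modest excess
print(simulate(demand, PALLET))       # full-pallet SOQ: all full pallets, lots of excess
```

In this toy run, the half-pallet SOQ turns most orders into whole-pallet quantities while carrying far less excess inventory than forcing full-pallet ordering - the win:win in a nutshell.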

Smaller (standard) order quantities, more full-pallet orders AND reduced retailer inventory.  Sounds like a win:win, doesn't it?






Inventory modeling is not "Normal"

We can build models that tell us how much inventory we need to hold of each product in each location.  Do this well and you improve service levels AND reduce inventory.  I've posted on this topic before, including an online calculator from a relatively simple Excel model to help you visualize the relationship between uncertainty, lead time and case-fill rate (check out How much Inventory do you really need?).

I wrapped up that post with a warning/disclaimer that the spreadsheet model was really too simple for real life use, but I didn't tell you why.  Now here's the kicker:  many packages appear to have the same problem and can cause you to severely underestimate your inventory needs and lose sales.


Just so you know, we are going to dip a toe into statistics here, but if you are a supply chain manager and you need to optimize your inventory, you need to know this - stick with it.  I'll be gentle, it won't hurt, I promise.

The key problem is that these models assume uncertainty in demand follows a Normal distribution: the familiar "bell-shaped curve".


Let's take a simple example to see why this is a problem.  Let's say your forecast for the next month is 100 units.  The Normal distribution for your sales would then be centered at 100 units.




You could sell more than 100, or less - but just how much more?  What if this is a hard-to-forecast product that could sell much more or much less?  Let's add in the rest of the horizontal axis.




Sticking with visual analysis for now, it looks as though you expect to sell around 100 units (your forecast), you could sell as much as 300 (yeah!!), and as little as -100... excuse me?  How exactly are you going to sell -100 units?  Despite the widespread practice of representing returns as negative sales (not a good idea), that is not what this means.  This result is a physical impossibility that can never happen in reality.  We really need a distribution that understands that negative sales are not possible.  Something like this:




The green line tells us that worst-case sales are 0 (phew), but sales could go up to... about 400?  What I know, and you can't tell visually, is that both of these distributions have exactly the same average and the same variability (standard deviation); either one could be used to model the same level of forecast and demand uncertainty... but the green one shows a realistic possibility of much higher sales - sales you may want to protect with extra safety stock.  Set your safety stocks off the Normal distribution and you will miss that peak demand when it does happen.  How do you feel about cutting 100 units from total demand of 400?  Using the Normal distribution here can seriously damage your wealth.  (FYI - the green line in this case is a LogNormal distribution.  It's not the only option available to us, but I'll hold that detail for another post.)
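To put rough numbers on this, here is a small sketch comparing high-service-level demand quantiles from the two distributions at the same mean and standard deviation.  The demand figures are illustrative, and the LogNormal parameterization is the standard moment-matching one:

```python
import math
from statistics import NormalDist

def lognormal_params(mean, sd):
    """Find the LogNormal's underlying mu/sigma that reproduce a target mean and sd."""
    sigma_sq = math.log(1 + (sd / mean) ** 2)
    return math.log(mean) - sigma_sq / 2, math.sqrt(sigma_sq)

def demand_quantile(mean, sd, service_level, dist="normal"):
    """Demand level you must cover to hit the target service level."""
    z = NormalDist().inv_cdf(service_level)
    if dist == "normal":
        return mean + z * sd
    mu, sigma = lognormal_params(mean, sd)
    return math.exp(mu + sigma * z)  # LogNormal quantile via the log-scale Normal

# Hard-to-forecast product: mean demand 100, standard deviation 75
print(round(demand_quantile(100, 75, 0.99)))               # Normal: ~274
print(round(demand_quantile(100, 75, 0.99, "lognormal")))  # LogNormal: ~378
```

Same mean, same standard deviation, but the LogNormal quantile is about 100 units higher - exactly the peak demand the Normal model quietly throws away.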

BTW - If you want to get into Inventory optimization, modeling this correctly will be much more effective in helping you balance inventory and fill-rates.

If it's dangerous, then why is the Normal distribution so heavily used in practice?  Well, if the uncertainty around what you are going to sell is much smaller, it does a good job.  The chart below shows the same Normal and LogNormal distributions for a product with much less demand uncertainty.




The 2 distributions are practically the same, though the LogNormal still predicts slightly higher sales at the upper end - the end we are trying to protect with safety stock.

(It's also true that it's just easier to program the math to use the Normal distribution.)

In my experience, demand uncertainty is often, even normally (pun intended), big enough that using the Normal distribution will cause you to severely underestimate safety stock and lead to more cut orders (and lost sales) than you planned for.

Does your inventory model have this problem?



Inventory modeling in action

Inventory modeling and inventory optimization attempt to drive unnecessary inventory out of your systems while improving service levels to your customers.  This does work and can drive very significant reductions in inventory but, if you lack discipline around execution, you will not get as much value as you should.

The list of 10 watch-outs that follows is based on my experience.  Some of these are mistakes I've made and learned from; others are mistakes I have observed.
  1. You do need a good model.  I've seen a lot of inventory models; some are more "unique" than useful.  This is an area with a solid, real-world-tested analytic/statistical framework available.  Here's a link to my online calculator [How much inventory do you really need] or check out this Wikipedia entry for a basic introduction: http://en.wikipedia.org/wiki/Safety_stock.  You are not going to build something with common sense or street smarts in Excel that comes close.  Do the necessary learning, hire someone who already has it, or buy one of the commercially available packages.
  2. Better models yield better results.  If inventory really matters to you, you may want to invest in a more rigorous level of modeling.  A basic, statistical model will generate results if used well.  A model that better fits your reality will let you cut deeper/faster.  For a simple example check out: [Inventory modeling is not "Normal"].  Some other areas that may warrant extra work/investment: 
    • fitting the most appropriate distributions of uncertainty
    • capturing demand uncertainty effectively
    • handling multi-level distribution networks
  3. You need people who know how the model works.  This does not have to be everybody that ever touches it, but someone either in your organization or that you have easy access to must understand this.  I've seen system-implementers (consultants in this case) hamstring a perfectly good commercial package because they did not understand how the models worked and set it to provide bad recommendations. ("Garbage in - Garbage out")
  4. Integrate inventory recommendations into your planning system.  With good models you will generate unique targets by product and location.  To make this part of the planning process this data must be integrated in to the planning systems: it's completely unreasonable to ask planners to keep this amount of information "in their heads" and to act on it appropriately.
  5. Get supply planners involved. Don't ignore one of your best assets - the people who work with product supply every day.  Their intuitive sense of what works is probably not far wrong, whereas computers have no intuitive sense.  The model should help them challenge and refine their ideas; it does not replace sense.  Get them involved!  Teach them to work the model inputs, to understand why the model does what it does, and create ownership of the answers.  If you implemented months ago and you're still hearing "Your targets aren't right...", you need to work on this.
  6. Measure compliance.  You will, I'm sure, be measuring what happens to your aggregate inventory and service levels.  These should be moving steadily in the right direction.  You need to look down in the weeds, though, to know whether you are getting full value.  Excess inventory mounts up with a few cases here and a week's extra supply there, and it all adds up.  Exception-based reporting to find item:locations that are not under control drives deeper/faster results.
  7. Use the right target for each decision.  Your system probably has just one place to embed a safety stock or replenishment point value for each [product:location] combination.  If you have done your modeling work effectively, this value will represent your primary replenishment process.  But what happens if you have alternatives?  Perhaps sourcing from another (expedited) supplier or re-deploying inventory from another facility?  The assumptions you made to generate your target are wrong for this alternative option.  Ignore this and you can spend a lot of money very quickly.  Check out [Balancing safety stocks across DCs].
  8. Continually update/revise.  Let's assume that your system will automatically record and update statistics around demand uncertainty, so you do not have to touch every model every week.  Even so, things change: manufacturing constraints change over time, your understanding of your own supply chain improves, your willingness to take risks shifts... things happen.  If you lock in your inventory targets once a year and forget them, your supply chain will under-perform.  Continually tweak and revise the models.  These targets drive your supply systems; the better they are, the better you will look.
  9. Resist the temptation to react to one-off events.  It is the nature of uncertainty in your supply chain that most of the time, most orders for most products are filled completely.  When there is a failure, orders get cut, service level for that product drops and it becomes the center of  attention for a period of time.  Understandably, this can become uncomfortable.  However, if your aggregate service-level metrics are in-line and this  appears to be a one-off event, reacting to it by pushing up inventory achieves only that - it pushes up inventory.  
  10. Be tough on inventory AND tough on the causes of inventory.  Your inventory models are not just a way of setting accurate inventory targets.  They are key to understanding what changes to your supply chain would drive lower inventory.  What is the value of:
    • 5 points of improvement in forecast accuracy?
    • a 25% reduction in lead-time from production?
    • replenishing once a week rather than once a month?
    • a delayed deployment strategy for hard-to-forecast products?
    • reducing service levels by 0.5 points?
    • stratifying service targets for different groups of products (typically based on volume and/or demand uncertainty)?
    • allowing service levels for individual product:locations to float in a wider range and using optimization to minimize overall inventory while maintaining the aggregate service-level target?
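As a taste of the "solid analytic framework" from point 1, the textbook safety-stock calculation is only a few lines.  The service level, demand variability and lead time below are illustrative inputs, and this is the simple Normal-based form, with all the caveats discussed in [Inventory modeling is not "Normal"]:

```python
from math import sqrt
from statistics import NormalDist

def safety_stock(service_level, demand_sd_per_period, lead_time_periods):
    """Textbook formula: z-score for the target service level, scaled by
    demand uncertainty accumulated over the replenishment lead time."""
    z = NormalDist().inv_cdf(service_level)
    return z * demand_sd_per_period * sqrt(lead_time_periods)

# 98% cycle service level, weekly demand sd of 20 cases, 4-week lead time
print(round(safety_stock(0.98, 20, 4)))  # -> 82
```

The formula makes the levers in the list above explicit: halve the lead time and safety stock drops by about 30%; tighten forecast error and it falls in direct proportion.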

A successful inventory modeling/optimization project does need a good model, but it also needs great execution.

What's driving your Sales? SNAP?

SNAP is the “Supplemental Nutrition Assistance Program” (formerly known as “Food Stamps”) in the United States: “We put healthy food on the table for more than 46 million people each month.” http://www.fns.usda.gov/snap/

Could this be driving spikes in your sales?  If you are involved with food or beverage sales in the United States, it very possibly could.

SNAP sales spikes can be large and short-lived.  Ignore SNAP and your product may not be available on the shelf when it's needed, losing sales opportunities.
A few key facts:
  • Most food and beverage items are eligible for purchase with SNAP, excluding hot foods, foods consumed in the store, alcoholic beverages and tobacco products. 
  • 2011 redemption exceeded $71 billion. 
  • Redemption has doubled in the last 3 years.
  • Approx. 15% of the U.S. population now receives SNAP benefits.  Participation rates vary substantially across states, counties and neighborhoods.
  • The average monthly benefit is $134 per person enrolled.
  • Eligible non-participants left $29 billion unclaimed in 2011.
  • Funds are made available on an EBT card, which can be used much like a pre-paid debit card to purchase goods.  Unused funds can roll over to the next month.
OK, we've established that it's big, actually very big, that it's relevant to a broad range of food and beverage products and that it impacts purchases for a large section of the population.

What makes this challenging for a retailer is that each state independently handles the distribution of funds to its population, and states are a long way from consistent.  (In some cases, individual counties control how funding is distributed.)  Each state distributes funds on specific days of the month of its own choosing and typically decides who gets funds on which day based on digits from a case-file or Social Security number, or the first letter of the last name.

For example, as of 8/12, Virginia distributes all funds on the first of the month, while Maine distributes funds roughly equally between the 10th and the 14th.

SNAP baskets are large, and it appears that the bulk of these funds are spent very quickly after being made available.  So, if you're trying to sell food in Virginia, you need to be very well stocked with products that resonate with the SNAP shopper on the 1st of every month.

Moreover, not all stores (even within the same state) will see the same spikes in demand, because of differences in their shopper base.  If, for your product, the spike in sales at state level is +50% over a "regular" day, there will still be stores that see almost no impact at all and some that see much more.

The fact that funds are distributed based on day of the month causes additional issues for most forecasting and reporting systems, which work in weekly rather than daily buckets.  These sales spike (and subside again) quickly, at a rate that is largely hidden when you look at sales by week.
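A toy example shows how weekly buckets flatten the spike.  The daily numbers below are invented, with a roughly 3x SNAP spike on the 1st of the month:

```python
# Two weeks of daily sales: a SNAP-driven spike on day 1, none in week 2
daily = [150, 60, 55, 50, 52, 48, 45,   # week 1
         50, 47, 52, 49, 51, 48, 46]    # week 2
weekly = [sum(daily[i:i + 7]) for i in range(0, len(daily), 7)]

peak_daily = max(daily) / (sum(daily) / len(daily))      # spike vs. average day
peak_weekly = max(weekly) / (sum(weekly) / len(weekly))  # spike vs. average week

print(f"{peak_daily:.1f}x at daily grain, {peak_weekly:.1f}x at weekly grain")
```

In this made-up series, a spike of more than 2.5x the average day shrinks to well under 1.2x in weekly buckets - easy to miss entirely in standard reports.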
  • Do you know which of your products are most heavily impacted?
  • Do you know which stores see the biggest spikes in sales?
  • How can you persuade your replenishment system to plan appropriately for such short term spikes?
  • Can your supply chain ensure delivery in advance of peak demand?
  • Are your promotions planned appropriately around SNAP distribution dates?
  • Are your planograms designed to handle these spikes in demand?
  • Can your planograms flex to handle spike and non-spike days or have you 'built a church for Easter Sunday'?
  • Are you losing sales?