Next Gen. DSRs - data blending

Over the last few months I've written a series of posts on Demand Signal Repositories.  These are the specialized database and reporting tools primarily used by CPGs for reporting against retail Point of Sale data.  

There are a number of good tools in the marketplace and you can derive substantial value from them today, but the competitive landscape is changing...fast. Existing tools found a market because they are capable of sourcing, loading and reporting against vast amounts of data quickly.  To do so they employed a variety of complicated architectures that recent advances in technology have rendered largely obsolete: new solutions can be faster, cheaper and more flexible.

Cheaper alone may be a win in the market today, but if all we do with this new power is report on "what I sold last week" more quickly and at a lower price-point I think we are missing the point.  

The promise of a DSR has always been to explain not just what happened but, much more importantly, why. Existing tools struggle with this:

  • they do not hold a rich enough repository of data to test out hypotheses.
  • their primary analytic tools are report-writers and pivot-tables (by which I mean that they really don't have any)

We'll come to analytics in a later post, but for now let's think data because without that there isn't very much to analyze.

Imagine that I've spent a few hundred thousand acquiring point of sale data into my own DSR and now I want to really figure out what it is that drives my sales.  

How about weather?  Ignore for the moment whether or not a future forecast is useful; how about using weather data to explain some of the strange sales in history so that I don't trend them forward into the coming year?  I can get very detailed weather data from a number of sources, but can I, a system user, get that data into my DSR to start reporting against it and, better yet, modeling?  Probably not.

How about SNAP, the US government's benefit program that funds grocery purchases for roughly 1 in 6 US households?  SNAP can drive huge spikes in demand for key products and I can easily go to usda.gov and find out exactly when SNAP dollars are dropped into the marketplace, by day of the month and by state.  With a little time on Google I can even see when this schedule has changed in the past few years.  Can I, a system user, get this data into the DSR for reporting/modeling?  Nope.

The same is true for many additional data sources you might wish to work with (promotional records, Twitter feeds, sentiment analysis, Google Trends, shipment history, master data, geographic features, proximity to competitor stores, demographic profiles, economic time series, exchange rates, etc.).

These are all relatively easy datasets to source, but if the DSR vendor has not set them up as part of the standard product, you are out of luck: the technical sophistication necessary to source, load and, especially, match key fields is beyond what a super-user, and in many cases a system administrator, can handle.  Can it be done?  Maybe, depending on your system, skill-level and security-access, but it's going to cost you in time and money.

Matching data in particular can be a real bear - it will be rare that the two sources share the same level of granularity (item, location, date) or the exact same key fields.  Far more common to be matching weekly or monthly data to daily, state or county data to zip-codes and product groups to shoppable items.  And you need to do it without losing any data, while sensibly handling missing data and flagging suspect data for manual follow-up.
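To make that concrete, here is a minimal sketch of the kind of join involved, using SNAP as the example.  All of the table and column names (pos_fact, store_master, snap_daily, calendar and their fields) are hypothetical stand-ins, not any particular DSR's schema.  The idea is to roll the daily, state-level SNAP releases up to the same week buckets and geography as the POS data before joining, and to use an outer join so no POS rows are lost where there is no matching SNAP record.

  -- Roll daily, state-level SNAP data up to week/state, then join to store-week POS.
  -- All object names here are hypothetical.
  SELECT s.state,
         f.weekid,
         SUM(f.pos_units)     AS pos_units,
         MAX(sn.snap_dollars) AS snap_dollars        -- already at state-week grain
  FROM   pos_fact f
  JOIN   store_master s
         ON s.storeid = f.storeid
  LEFT JOIN (                                        -- outer join: keep POS rows with no SNAP release
         SELECT cal.weekid, sd.state, SUM(sd.snap_dollars) AS snap_dollars
         FROM   snap_daily sd
         JOIN   calendar cal ON cal.caldate = sd.release_date
         GROUP  BY cal.weekid, sd.state ) sn
         ON sn.weekid = f.weekid
        AND sn.state  = s.state
  GROUP  BY s.state, f.weekid;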

So if you really want to do some analysis against e.g. SNAP what must you do?  Download a small ocean of detailed POS data so you can (carefully) join it to your few hundred records of SNAP release data in a custom database or analytic app, build the models and then (because you can't write the results back out to the DSR) build a custom reporting engine against these results.  This makes no sense to me.

The solution is something called data-blending, which tries to reduce the pain of integrating multiple data sources to the point where you can contemplate doing it in near real-time.  While I have not yet seen a solution I would call perfect, the contrast with the standard, locked-down DSR scenario is impressive.

Much of what I have seen so far happens at the individual level: the match is done in-memory, without impacting the underlying database or fellow users in any way.  In many cases, particularly for exploratory work, this is preferable, but it's far from an ideal solution if you need to process against the detail of the entire database or have multiple needs for the same data.

The future, I think, will include such ad-hoc capability, but I suspect it also includes a more flexible data model that lets an administrator rapidly integrate new data sources into the standard offering.

Averages work! (At least for ensemble methods)

After an early start, I was sitting at breakfast downtown enjoying a burrito and an excellent book on "ensemble methods".  (Yes, I do that sometimes... don't judge)

Ensemble Methods in Data Mining: Improving Accuracy Through Combining Predictions (Synthesis Lectures on Data...), by Giovanni Seni, John Elder and Robert Grossman (Feb 24, 2010)

For those who have built a few predictive models (regression, neural nets, decision trees, ...) I think this is an excellent read, outlining an approach that can deliver big improvements on hard-to-predict problems.  The introduction provides a very good overview:

Ensemble methods have been called the most influential development in Data Mining and Machine Learning in the past decade. They combine multiple models into one usually more accurate than the best of its components. Ensembles can provide a critical boost to industrial challenges...

Ensemble models use teams of models.  Each model uses a different modeling approach or different samples of the available data or emphasizes different features of your data-set and each is built to be as good as it can be.  Then we combine ("average") the prediction results and,  typically,  get a better prediction than any of the component team members.
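In its simplest, equally weighted form the combination really is just an average of the M component models' predictions (weighted and stacked variants exist, but the idea is the same):

\hat{y}_{\mbox{ensemble}}(x) = \frac{1}{M} \sum_{m=1}^{M} \hat{y}_m(x)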

When I was first learning predictive modeling as an undergraduate, the emphasis was on finding the best model from a group of potential candidates.  Embracing ensemble methods, initially, just felt wrong, but the proof is in the performance.

It sounds easy, but, clearly, this is more complex than building a single model and if you can get a good-enough result using simple approaches you should.  You'll know when it's worth trying something more high powered.

With thanks to my friend Matt for this simplification, this may be one of the few contexts where we can say

"Averages work!!"

As a reminder that working with averages (or aggregations of any kind) is generally dangerous to your insight, take another look at this post on why you should be using daily point-of-sale data.

Or, consider this...

The right tools for (structured) BIG DATA handling - more Redshift

In my recent post on The right tools for (structured) BIG DATA handling, I looked at using AWS Redshift to generate summaries from a large fact table and compared it to previous benchmark results using a columnar database on a fast, SSD drive.

Redshift performed very well indeed, especially so as the number of facts returned by the queries increased.  In that initial testing I was aggregating the entire fact table to get comparable tests to the previous benchmark, but that's typically not how a reporting (or analytic) system would access the data.  In this follow-up post, then, let's look at how Redshift performs when we only want to aggregate particular subsets of records.

Test setup

For this test, I am using the same database as before (simulated Point of Sale data at item-store-week level with item, store and calendar master tables) on 4 'dw1.xlarge' AWS nodes. For each query I am summarizing 5 facts from the main fact table, joining to each of the master tables and using a variety of filters to restrict the records I want to aggregate over.
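The actual SQL isn't reproduced in this post, but the rough shape of these test queries is sketched below.  The fact and column names are hypothetical; the two commented filters correspond to the category and year tests described next.

  -- Five facts summarized from the item-store-week fact table, joined to the
  -- master tables; un-comment a WHERE clause to reproduce the filtered rows.
  SELECT SUM(f.pos_units), SUM(f.pos_dollars), SUM(f.on_hand_units),
         SUM(f.store_receipts), SUM(f.forecast_units)
  FROM   pos_fact f
  JOIN   item_master  i ON i.itemid   = f.itemid
  JOIN   store_master s ON s.storeid  = f.storeid
  JOIN   calendar     c ON c.periodid = f.periodid
  -- WHERE i.category = 'Type 2'   -- row 2: filter on an item attribute
  -- WHERE c.year     = 2011       -- row 3: filter on a calendar attribute
  ;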

The first row shows performance when we have no filters at all, summarizing all data in the fact table.  That's 416 million records in just over 30 seconds, at an average speed of 13.4 million records per second.  Very respectable!

The second row uses a filter - WHERE category = 'Type 2' - based on a field in the item master table, which is associated with roughly 20% of the  fact table records.  Aggregating 83 million records in 26 seconds is almost as slow as aggregating across all records.  Not good.

The third row filters on a field from the Calendar master table to return only those weeks in the year 2011: 50 million records in 2.9 seconds.  This is quick and, in throughput terms, at 18.4 million records/second, actually faster than the original query.

What's going on?

This apparently odd behavior is driven by my choices for the distkey and sortkey when defining the table (see the SQL below).
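The original table DDL didn't survive into this post, so here is a minimal reconstruction of what it might have looked like.  The column names and types are assumptions; the distkey and sortkey choices are the ones described below.

  -- Hypothetical fact table: distribute rows across nodes by store,
  -- sort within each node by week (periodid).
  CREATE TABLE pos_fact (
      itemid       integer       NOT NULL,
      storeid      integer       NOT NULL,
      periodid     integer       NOT NULL,   -- week identifier
      pos_units    decimal(18,2),
      pos_dollars  decimal(18,2)
      -- ...remaining fact columns...
  )
  DISTKEY (storeid)
  SORTKEY (periodid);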

Note that Redshift doesn't use indexes or partitions as I am used to seeing them in relational databases so, in many ways, table definition is a lot simpler.  Remember that Redshift is running on a cluster of processing nodes, not just one machine.

distkey

defines how the data in this fact table should be spread across the multiple nodes in the cluster.  In this instance I chose to spread it out based on the store identifier (storeid).  Redshift will try to put records with the same storeid on the same node.  (More details on selecting a distkey here.)  Note that this primarily helps with faster joins.  I did not add the same distkey to the store master table, but as that is small, just a few hundred records, copying it between nodes to make a join should not be especially impactful.

sortkey

defines how records will be sorted on each node.  Redshift uses this information to optimize query plans and will (hopefully) skip past entire sections of data that are not within the filter.  I could have used multiple fields in the sortkey but chose to start with just one, the week identifier in the fact table and associated calendar master table, periodid.  (More details on selecting a sortkey here.)

So with this in mind let's look at the results table again.

I don't think I'm benefiting from the distkey at all in this test set as I set the distkey to be storeid and none of these filters are store-based.  The filters are either based on time (the sortkey) or category, an item attribute which is not part of either sortkey or distkey.  And yet, the speed difference between row 2 (which presumably sees no benefit from either setting) and row 3 (enhanced just by the sortkey) is dramatic: almost a 6-fold speed increase!

That the speed drops for the 4th and 5th rows is, I think, more to do with some latency in query execution, rather like we saw in the previous tests.  These queries hit significantly less data, and as the data quantity falls any latency becomes an increasingly large proportion of the whole.

I did not put a lot of thought into choosing distkey and sortkey values for this test but it certainly seems as though choosing them correctly can have a dramatic impact on query speed.

 Truthfully, there isn't very much to tweak here, so optimizing within these boundaries should not take too long.  I could really grow to like simple.

More testing to follow.

Next Gen. DSRs - it's all about speed!

Recently, I have been working with a new-to-me BI tool that has reminded me just how much speed matters.  I'm not mentioning any names here, and it's not a truly bad tool, it's just too slow and that's an insight killer!

Continuing my series on Next Generation DSRs, let's look at how speed impacts the exploratory process and the ability to generate insight and, more importantly, value.

Many existing DSRs do little more than spit out standard reports on a schedule and if that's all you want, it doesn't matter too much if it takes a while to build the 8 standard reports you need.  Pass off the build to the cheapest resource capable of building them and let them suffer.  Once built, if it takes 30 minutes to run when the scheduler kicks it off, nobody is going to notice.

Exploratory, ad-hoc, work is a different animal and one that can generate much more value than standard reports.  It's a very iterative/interactive process.  Define a query, see what results you get back and kick off 2-3 more queries to explain the anomalies you've discovered: filter it, order it, plot it, slice it, summarize it, mash it up with data from other sources, correlate, .., model.  This needs speed.

For a recent project, I was pulling data to support analytics: descriptive, inventory-modeling and predictive models.  Define a query based on the features I am searching for, submit it to run, then wait... 20 minutes to an hour to get a result.  When the results come through (or fail to do so with an error message that defies understanding) I have long since moved on to some other task so as not to completely destroy my productivity.  It takes time to get my head back in the game and to remember what I was trying to achieve and productivity takes a dive.  I didn't need just one query of course, more like 10, so I would have 3-4 running simultaneously and extensive notes scribbled on a scratch pad to try and keep track.

Admittedly, what I am doing here is complex and the tasks I was using to fill in the gaps were also relatively complex (e.g. simulating a large-scale retail supply-chain replenishment and forecasting system in R), but still, it took 2 days of fighting with the beast to get what I needed. Progress was painfully slow on everything I attempted in this time period and my frustration levels were off the scale.

This system is forcing me to multitask.  According to one study, this can reduce your productivity by 40%. A 40% decline in productivity is a bad thing, but, frankly, it felt worse: I did not measure it and I'm not about to create a study to prove it, but switching between highly complex tasks, prompted by a BI tool that kept interrupting me, felt much worse than a 40% drop.

Whether my perception is right or not, it's perception that drives behavior.  If using the system in this manner is painful it will inevitably be used less often by fewer people and more of the insights buried in the data will stay there.

Not that things haven't improved.  One of my first jobs after college was to build computer simulations of factory production lines to test out changes in new equipment or layouts before incurring any significant capital expense.  Some of these studies were very successful, but very complex to build and, running on the hardware of the time, I would start a simulation run when I went home and check the results when I got in the following morning.    Some mornings could be very depressing;  realizing that I had an error in a part of the model, had no useful results to build on and no chance to run again until that evening.  Consequently,  studies that took 1-2 weeks of work time, could take elapsed months to execute.

If you've been following this series you'll know that I am a strong proponent of using newer database technologies (MPP, in-memory, columnar, ...) to both simplify the data architecture AND provide substantial speed increases over existing systems.

If you still just want your standard reports, don't worry about it, just hope your competition is doing the same.

Visualizing Forecast Accuracy. When not to use the "start at zero" rule?

I recently joined a discussion on Kaiser Fung's blog Junk Charts, When to use the start-at-zero rule, concerning when charts should force a 0 into the Y-axis.  BTW - if you have not done so, add his blog to your RSS feed; it's superb and I have become a frequent visitor.

On this particular post, I would completely agree with his thoughts were it not for one metric I have problems visualizing: Forecast Accuracy.  Forecast Accuracy is a very, very widely used sales-forecasting metric that is based on a statistical one, so let's start there.

The statistical metric (Mean Absolute Percentage Error) looks at the average absolute forecast error as a percentage of actual sales.    Some of the errors will be positive and some negative but by taking the absolute value we lose the sign and just look at the magnitude of error.  (We handle optimism or pessimism in the forecast with a different "bias" metric).  

There is occasionally heated discussion in the sales forecasting community about exactly how this should be calculated but let's save that for another day as all forms I am familiar with have the same properties with regard to plotting results.
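For reference, one common formulation is shown below, where A_t is actual sales and F_t is the forecast for period t; the variants mostly argue about what belongs in the denominator.

\mbox{MAPE} = \frac{1}{n} \sum_{t=1}^{n} \left | \frac{A_t - F_t}{A_t} \right |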

  • perfect forecasts would have no error and return 0% MAPE, this is our base.
  • there is no effective upper bound on the metric (you can have 400% MAPE or more)

If we were to look at this across a range of product groups (A through K) it might look something like this.  The Y-axis is forced to start at 0 and the length of the bars has meaning: Product D really does have almost twice the error rate of Product A.  This plots out very nicely, it's hard to misunderstand, and the start-at-zero rule certainly does apply.

Now convert MAPE into a Forecast Accuracy with this simple calculation.

Forecast Accuracy = 1 - MAPE

I can only assume this metric was created in the sense of "bigger numbers are better".  It's in widespread use, it's part of the business forecasting language, and no, I can't change it.  Perfect forecasts are now at 100% and there is no lower bound on the metric, it can easily be negative.

This causes me a problem.  Check out the chart below: this is the same data as before but now expressed as Forecast Accuracy rather than MAPE in a standard Excel chart.  Excel is trying to be helpful (bless it) and puts the 0 value in without being asked.  Work in supply-chain and you will see a lot of these.

The zero value has no special meaning on this metric, so starting at 0 is very misleading:  80% accuracy (20% MAPE) is not twice as good as 40% accuracy  (60% MAPE).

Allowing the minimum of the y-axis to float does not solve this either (below).

I really don't know what this is trying to tell me... some product groups are better than others, perhaps?  Certainly, relative size is meaningless.

"Abandon it" you say  "go to a line chart".  Line charts often have floating axes and they do not emphasize relative size nearly as much as a bar-chart does (below).

Perhaps it's less confusing/misleading than the previous charts but I still don't like it, because there is data I want to compare relative sizes for (the MAPE) and line-charts seem most useful when trying to show patterns.  I have no reason to expect a useful pattern to form from product categories: I just sorted them alphabetically.

My thanks to the contributors on Junk Charts for helping me clarify my thinking on this.  I don't know that there is a great answer, but as it's a problem I run into all the time I do want to find a better solution.  (FYI - it's just hit me that there is another set of supply-chain metrics, for order fill-rates, that has the exact same problem.)

How about forcing the upper limit on the Y-axis to 100% and letting the lower limit float? I am trying to emphasize the negative space between the top of the bar and 100%, essentially the error rate.

I'm not entirely happy though; those heavy bars do draw the eye, and you would have to educate the user to read the negative space.  How about a dot-plot instead?

You would still have to learn how to read it properly ...

Or how about this?  Inspiration or desperation?  I'm now plotting the bars down from the 100% mark, emphasizing MAPE while still using the Forecast Accuracy scale.  I'm not entirely sure yet, but I think I like it and if I generalize the "start at 0" idea to "start at base" it may even fit the rule.

What do you think?  Which version best handles the compromise between a user's desire to see the metric they know and my desire to show them relative error rates?  Have you a better idea?  I would love to hear it - this one really bugs me!  Can you think of any other examples of metrics where 0 is meaningless?

Recommended Reading: The Definitive Guide To Inventory Management

A little over 15 years ago now, I was set the task of modeling how much inventory was needed for all of our 3,000 or so products at every distribution center.  Prior to that point, inventory targets had been set at an aggregate level based on experience, and my management felt it was likely we had too much inventory in total and that what we did have was probably not where it was most needed. (BTW - they were absolutely right and we were ultimately able to make substantial cuts in inventory while raising service levels.)

I came to the project with a math degree, some programming expertise, practical experience simulating production lines, optimizing distribution networks and analyzing investments, and no real idea of how to get the job done.  The books I managed to get my hands on gave you some idea of how to use such a system but no real idea of how to build it.  They left out all the hard/useful bits, I think.  So, I set about working it out for myself, with a lot of simulation models to validate that the outputs made sense.

I still work occasionally in inventory modeling and I'll be teaching some components this fall, so I have been eagerly awaiting this new book: The Definitive Guide to Inventory Management: Principles and Strategies for the Efficient Flow of Inventory Across... by CSCMP, Matthew A. Waller and Terry L. Esper (Mar 19, 2014)

Full disclosure here: one of the authors, Dr Matt Waller is a friend and colleague of mine.  He brings an astonishing level of expertise to many areas of supply-chain management and inventory modeling is clearly no exception.

Together, Matt and Terry Esper have produced a book that (had I possessed it 15 years before it was published) could have short-cut my inventory modeling project by approximately 6 months.

This is not a long book, not quite 200 pages in fact, but it is no lightweight.  If you just want an overview of the topic you could skip the math, but my guess is that if you do that, you will never really understand.  The math is not particularly hard and it's presented in a sort of hybrid math/Excel fashion that I find easy to follow.  I'll also say that I hit my first "aha!" moment before I got to page 20.  I won't embarrass myself by telling you what it was, but something that had bothered me for years suddenly clicked into place.

Unlike most discussion on this topic, this book looks at inventory modeling from the manufacturer's or supplier's point of view right through to the retail shelf.  They also provide a number of ways to estimate components of inventory from historical data so you can assess how well your planning and execution systems are tracking to plan: something I was aware of but had never really thought through how useful it could be.  Details on how to conduct your own simulation studies in Excel and an overview of the most commonly used forecasting approaches that feed the inventory models round it out.

It's all here: what you need to understand (and, if you so wish, build) a system to optimize your inventory holding.  I highly recommend it.

Next-Generation DSRs (multi-retailer)

This post continues my look at the Next Generation DSR.  Demand Signal Repositories collect, clean, report on and analyze Point of Sale data to help CPGs drive increased revenues and reduce costs.

Most CPG implementations of a DSR support just one retailer's POS data.

OK, before someone gets back to me with "but we have multiple retailers' POS data in our system", I'll clarify:

  • Having Walmart and Sam's Club data in the same DSR does not count (as the data comes from the same single source, RetailLink) and I bet you are still limited as to what you can report on across them.
  • If you have multiple retailers' POS data set up in isolated databases using the same front-end... it does not count.
  • If you have the data in the same database but without common data standards ... it does not count.
  • If you have the data in the same database but with no way to run analysis or reports across multiple retailers at once... it does not count.

So, yes, a number of CPGs have DSRs that support multi-retailer POS data sources, but very, very few (if any?) have integrated that data into a single database with common data standards so they can report and analyze across multiple POS sources at the same time.

Does it matter?  I think so: multi-retailer capability opens up big opportunities around promotional effectiveness, assortment planning, supply-chain forecasting (demand sensing) and ease of use.

So, why are we not doing this already?

From a historical perspective, you can track most DSRs back to starting out with a particular retailer's data and supporting CPG sales-teams for that retailer.  The sales-team were the folks with the checkbook and they were not very interested in what the system could do with any other retailer's data.  DSR solutions are still often sold to individual sales-teams, which is why CPGs support numerous DSR implementations.

Can these solutions support multiple retailers - yes - sort of - maybe - probably not.  The key issues to resolve are data-volume, data-standardization, localization and security.

Data Volume

In my previous post (Next-Generation DSRs - data handling) I stressed how circa-2010 technology was struggling to handle the volume and velocity of data involved in a DSR.  And that was with single-retailer solutions.  Newer database applications give us the capability to maintain or improve performance while handling substantially more data through columnar, massively parallel and in-memory technology.  I fully acknowledge I may be missing a few ideas on that list; it doesn't matter - the point being that a 10-fold increase in data volume is no longer something to be worried about.  Trade up to new technology and you can handle it.

Data Standardization

This is dull, really dull, it's right up there with Data Cleansing (boring, painful, tedious and very, very important).  There is no standard for what data a retailer chooses to share with their CPG suppliers.   There is overlap, yes of course, but no actual standards.  They will:

  • call the same facts (e.g. point of sale units) by different names.
  • report facts in different time buckets (weekly, daily)
  • report facts that are 100% unique to a particular retailer (some of which may be useful)
  • have similar but subtly different meanings for what appears to be the same fact
  • not provide key facts that seem essential (like on-hand inventory at stores)

And through all this you are trying to find enough common ground to generate reports and analytics that work across retailers.  I can hear the cries now of "but Retailer-X is completely unique, that won't work for us".  Ignoring for the moment the impossibility of degrees of "uniqueness", they are wrong, this really can be done.  All retailers sell, order, hold inventory and promote (to list but a few things).  What is common between data sources is huge, but it takes real discipline to find the commonality wherever it exists and map it to a single data-structure for reporting/analytic purposes.  And when you do find something unique, that's OK: map it to a new fact, store it and wait.  Perhaps it's only unique because you haven't seen it in another retailer's data feed... yet.

Bottom line - It's dull (I did warn you about that right?) but it can be done.

Localization

When I'm generating a report for retailer X, they call the Point of Sale revenue fact 'POS Sales', retailer Y calls it 'Point of Sale $', retailer Z calls it 'POS Revenue'.  Internally, and when reporting across multiple retailers,  we call it just 'POS'.  How can we support this?

I've coded custom solutions for this before; it's not that hard, but it strikes me that this is just another example of "language", and if we can have the same application work in English, German, Italian, Spanish and Russian, how hard can it be to translate between variations on the same language?
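One simple way to handle it (a sketch only; the table and column names are mine, not any particular product's) is a small translation table keyed by retailer.  Reports and analytics always use the internal fact name and only look up the retailer-specific label when rendering.

  -- Hypothetical label-translation table for localization.
  CREATE TABLE fact_labels (
      retailer       varchar(32),
      internal_name  varchar(32),
      display_name   varchar(64)
  );

  INSERT INTO fact_labels VALUES
      ('Retailer-X', 'POS', 'POS Sales'),
      ('Retailer-Y', 'POS', 'Point of Sale $'),
      ('Retailer-Z', 'POS', 'POS Revenue');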

Security

Is Retailer-X allowed to see data from Retailer-Y?  No way!  Is the Retailer-X sales-team allowed to see Retailer-Y point of sale data?  Very probably not.  Are my sales folks allowed to see any competitor sales data provided to category managers?  Nope.  Do I want the sales-team to see the profit margin on the products they sell?  (This sounds sensible, but actually some CPGs do not want this.  I guess, if they don't know, they can't tell the customer.)

These are all issues with DSRs as they stand today and are all resolved already with solid user account management.  If this process is done well, security is not a problem.  If the processes around security management are sloppy, it's already a problem.  Adding more data into the system really doesn't make a difference one way or another.

Bottom line

If a DSR was designed from scratch to support multiple retailers, it would have one single data model and all new data sources get mapped to this single model.

Localization means that the same report for Retailer-X and Retailer-Y is shown with their own naming preferences.

Security controls who is allowed to see what.

And what's in it for you?

  • You now have the ability to rapidly leverage learnings (in the form of new analytics and reports) across all retailers and sales teams.
  • As team members move from one sales-team to another they do not need to learn a new system or even, necessarily, a new "language".
  • You get to maintain, develop, learn and train against  just one system
  • And the really big pay-off is that you can now start to run value-added analytics that require access to multiple retailers' POS data.  Think about significantly enhanced promotional effectiveness, assortment planning and supply-chain forecasting (demand sensing).  More on this very soon.

The right tools for (structured) BIG DATA handling - columnar, mpp and cloud - AWS Redshift

Today, I'm coming back a little closer to the series of promised posts on the Next Generation DSR  to look at some benchmark results for the Amazon Redshift database.   Some time ago I wrote a couple of quite popular posts on using columnar databases and faster (solid state) storage to dramatically (4100%) improve the speed of aggregation queries against large data sets.  As data volumes even for ad-hoc analyses continue to grow though, I'm looking at other options.

Here's the scenario I've been working with: you are a business analyst charged with providing reporting and basic analytics on more data than you know how to handle - and you need to do it without the combined resources of your IT department being placed at your disposal.

Previously (here) I looked at the value of upgrading hard-drives (to make sure the CPU is actually busy) and the benefit of using columnar storage, which lets the database pull back data in larger chunks and with fewer trips to the hard-drive. The results were... staggering. A combined 4100% increase in processing speed so that I could read and aggregate 10 facts from a base table with over 40 million records on a laptop in just 37 seconds.  (I'm using simulated Point of Sale data at item-store-week level just because it's an environment I'm used to and it's normal to have hundreds of millions or even billions of records to work with.)

I then increased the data volume by a factor of 10 (here), repeated the tests and got very similar results without further changing the hardware.   The column-storage databases were much faster, scaling well to both extra records (the SQL 2012 column-store aggregating 10x the data volume in less than 6x the elapsed time) and to more facts (see below).

400 million records (the test set I used) is not enormous but it's certainly big enough to cause 99.2% of business analysts to come to a screeching halt and to beg for help.    It's also enough to tax the limits of local storage on my test equipment when I have the same data replicated across multiple databases.

I've been considering Amazon Redshift for some time - it's cloud-based, columnar, simple to set up, uses standard SQL and it enables parallel execution and storage across multiple computers (nodes) in the cloud.

First let's look at a simple test - the same data as before but now on Redshift.  I tested 2 configurations using their smallest available "dw1.xlarge" nodes currently costing $0.85 per hour per node.  These nodes each have 2 processor cores, 2TB of (non SSD) storage and 15GB of RAM.    I'm going to drop the "SQL 2012 Base" setup that I used previously from the ongoing comparison - it's just not in the race.

SQL Server 2012 (with the ColumnStore Index) was the clear winner in the previous test and for a single fact query it still does very well indeed.  The 2-node Redshift setup takes almost twice as long for a single fact, but, remember that these AWS nodes are not using fast SSD storage (and together cost just $1.70 per hour) so 41 seconds is a respectable result.  Note, also, that it scales to summarizing 10 facts very well indeed, taking about 50% of the time that SQL Server did on my local machine.

How performance scales to more records and more facts is key and, ideally, I want something that scales linearly (or better): 10x the data volume should result in no more than 10x the time.  Redshift here is doing substantially better than that - is that suggesting better-than-linear scaling?  Let's take a closer look.

For this test I extended the base table to include 40 fact fields against the same 3 key fields (item, store and week).  I then ran test aggregation queries against the full database for 1, 5, 10, 20 and 30 facts.
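The queries themselves were nothing exotic; in rough outline (with hypothetical column names) each one looked something like the sketch below, with the list of summed facts growing from 1 up to 30.

  -- Aggregate N facts over the full table, grouping on the same item and
  -- calendar attributes each time.
  SELECT i.category,
         c.year,
         SUM(f.fact01) AS fact01,
         SUM(f.fact02) AS fact02
         -- ...up to SUM(f.fact30) for the largest query...
  FROM   pos_fact f
  JOIN   item_master i ON i.itemid   = f.itemid
  JOIN   calendar    c ON c.periodid = f.periodid
  GROUP  BY i.category, c.year;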

The blue dots show elapsed time (on the vertical axis) against the number of facts summarized in each query for the 2 node setup.

The red dots show the same data but for the 4 node setup.

For both series, I have included a linear model fit and they are very definitely linear.  (R-squared values of 0.99 normally tell you that you did something wrong, it's just too good, but this data is real.)  However, there appears to be a substantial "setup" time for query processing: 31.943 seconds in the case of the 2-node system and 10.391 seconds for the 4-node system.  These constants are the same whether you pull 1 fact, 5 facts or 30 on this basic aggregation query.  Now, as all these queries join to the same item and period master tables and aggregate on the same category and year attributes from those tables, that should not be a big surprise.  Change that scope and this setup time will change too.  (More on that later.)
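Put another way, the fitted lines have the simple form below, with the intercept capturing that fixed setup cost and the slope the incremental cost per additional fact:

\mbox{elapsed seconds} \approx c + m \times \mbox{facts}, \qquad c \approx 31.9 \mbox{ (2 nodes)}, \quad c \approx 10.4 \mbox{ (4 nodes)}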

Note also that as the number of nodes was doubled,  processing speed (roughly) doubled too.

Redshift is a definite contender for large-scale ad-hoc work.  It's easy to set up, scales well to additional data and, when you need extra speed, you can add extra nodes directly from the AWS web console.  (It took about 30 minutes to resize my 2-node cluster to 4 nodes.)

When the work is done, shut down the cluster, stop paying the hourly rate and take a snapshot of the system to cheap AWS S3 storage.   You can then restore that snapshot to a new cluster whenever you need it.

Is it the only option?  Certainly not, but it is fast, easy  to use and to scale out.  That may be hard to beat for my needs, but I will also be looking at some SQL on Hadoop options soon.

Data Visualization - are pie-charts evil?

I'll be speaking next week at the Supply Chain Management Conference at the University of Arkansas on how data-visualization enables action.   

Good visualization is easy. Unfortunately, building bad visualizations that are hard to use, easy to misunderstand and that obscure and distort the data is even easier - many analysts can do it without trying.

In honor of the event, I'm resurrecting a post I created a couple of years ago "Are pie charts evil or just misunderstood".  I wrote this around the time I was moving away from a trial and error approach  (and 20 years of trial and error effort does get you cleaner visuals) to attempting to understand why some visuals so clearly work better than others.  

It turns out that there are some great frameworks to help in building better visuals.  Join me next week and we'll talk about human graphical perception, chart junk and non-data ink.

Enjoy!

Data Visualization - enabling action

I'll be speaking next week at the Supply Chain Management Research Center Conference at the University of Arkansas on how data-visualization enables action.

The basic premise (and one I firmly believe) is that the hardest part of any analytic project is not defining the problem, doing the analytics or finding the "solution", it's enabling action.

Far too many otherwise excellent analytic projects, tools and reports go unused because the results are presented in a way that is somewhere between difficult-to-understand and incomprehensible.

Managers typically do not have the time to just figure it out or double check their understanding, or re-work the results to something they can work with.

By making your analytics easy to consume (through good visualization practice) you make it possible for decision-makers to find what is important, understand it correctly and make good decisions, quickly.

Frankly many analytics providers don't try very hard to make their results easy to consume and their outputs are confusing, hard to use, easy to misunderstand and a long, long way from enabling decisions.

For those that do try, there is a tension between making things look "cool" or "interesting" and having them function well. Ideally we want both, but very few examples deliver well on both fronts. Indeed, a lot of the attempts to provide interest seem to be designed to obfuscate or distort meaning.

Here are some examples I plucked from a leading visualization vendor's web site. Each and every one of these charts is difficult to read because of limitations in our visual perception. We'll talk more about that in the conference next week.

And trying to make charts more interesting/attractive/eye-catching typically makes things worse. This "Funnel Chart" (below) is hilarious! It's being terribly misused and gets almost everything wrong. I defy you to use it and make sensible decisions.

  • Color serves no purpose
  • It's very unclear whether values are represented by length, area or volume (thank goodness they included numbers)
  • The top value is (visually) about 100 times bigger than the bottom one but actually less than 5 times bigger in value.
  • I need another legend to tell me where all these regions are
  • Why, exactly, is it a funnel? What does that imply? The NorthEast feeds the South which feeds into Central...
  • It has no contextual information. Perhaps Northwest is the smallest because that is our smallest market?

Here's an example we will be working with in the conference. It's very hard to read, slow to use, easy to make mistakes with and distinctly over-dressed.

And exactly the same data once it's been stripped bare (below). It's now easy/quick to read, practically error-proof, has no distracting "chart junk" and has contextual data (budget) to understand what "good" is.

My interest in visualization is in enabling action from my analytic work. As a consultant, you may think that I get paid whether a client implements my work or not. That may be true, but I like to get paid more than once by the same client.

If you're going to be at the conference next week, drop by and see me: Supply Chain, Analytics and Visualization are among my favorite discussion topics.

I'll be posting more on this over the next few months but if you're looking for more right now, here are some excellent resources:

Stephen Few's blog, Visual Business Intelligence

Kaiser Fung's blog, Junk Charts

Nathan Yau's blog, Flowing Data

Next Gen. DSRs - Scale Out!

Last week, I posted my thoughts on how new technology enables a simpler and faster database to support your DSR applications: Next Generation DSRs (data handling).  Here's a highlight from a related post by another author, Thoughts on AWS Redshift:

if you can add nodes and scale out to improve query response then why not throw hardware at performance problems rather than build a fragile infrastructure of aggregate tables, cubes, pre-joined/de-normalized marts, materialized views, indexes, etc. Each of these performance workarounds are both expensive to build and expensive to operate.

He goes on to talk about why scale-out has not been generally adopted and how Amazon Redshift changes the game by making it easy to acquire and release processing power on demand.

The answer does not have to be Redshift; perhaps it's Impala or Hekaton or... whatever.  The bottom line for me is that new technology enables DSRs that are simpler and faster, and that creates a fundamental shift in system capability.

FYI - I have done some DSR-scale testing with Redshift and the results were very impressive.  More on that soon.

Next-Gen. DSRs - data handling

This post continues my look at the Next Generation DSR.  A DSR (Demand Signal Repository) holds data, typically Point of Sale data,  and that data volume is big,  not Google-search-engine big, but compared to a CPG's transaction systems, it's huge.  Furthermore, the system is required to rapidly load large quantities of new data, clean it, tie it into known data dimensions and report against it in very limited time-frames.

But scale and performance needs aside, why have most (though not all) CPGs chosen to buy rather than build the capability?  After all, it is primarily a business-intelligence/database application and most businesses run a number of them. One key reason is that it's challenging to get business reporting performance at this data scale from existing technology.

This post looks at how this problem gets solved today and how newer database technology can change that landscape.


Handling the data volume (1) Cube it

One existing approach to the problem is to use 2 databases, one to store the detailed granular data in relational form and another with data "cubes" containing pre-calculated summaries (aggregations) of the relational data.  Most uses of the data will involve working with summaries so you can save users a lot of time by pre-calculating them.

Once built, analyzing data within a cube is fast but you do have to decide a number of things up-front to populate the cube.
  • what aggregation levels do you need e.g.:
    • county, state, region for store locations
    • brand, pack-type, category for product
  • what facts do you want included e.g.
    • pos $ and units for the sales cube
    • on hand and in-transit inventory and forecasts for the supply chain cube.
The more data (and aggregation levels) you add to the cube the longer it will take to build; to take hours is normal, days is not unknown.  Additionally, once a cube is built, it is essentially disconnected from any changes in the underlying database until it is next rebuilt.  If your master data is assigning the wrong category to a product, fixing it won't help your reports until you rebuild that cube.
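A real cube engine has its own definition language, but the underlying idea of a pre-calculated summary can be sketched in plain SQL (again, the table and column names below are hypothetical):

  -- Build the summary once, up front, at the levels users will ask for;
  -- reports then hit this small table instead of the detailed fact table.
  CREATE TABLE sales_summary AS
  SELECT i.brand,
         s.region,
         c.weekid,
         SUM(f.pos_units)   AS pos_units,
         SUM(f.pos_dollars) AS pos_dollars
  FROM   pos_fact f
  JOIN   item_master  i ON i.itemid   = f.itemid
  JOIN   store_master s ON s.storeid  = f.storeid
  JOIN   calendar     c ON c.periodid = f.periodid
  GROUP  BY i.brand, s.region, c.weekid;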

Handling the data volume (2) Hyper-complex data models

Logically we can do everything we want in a standard relational database like SQL Server or Oracle.  The data structures are not actually that complex: we need master/lookup tables for product, location and time, and one fact table to store all the information collected for each product, location and time bucket (POS sales, inventory, store receipts etc.).  That's just 4 tables.  Yes, we could get more complex by adding other data sources with additional dimensions, but it would still be a simple structure.  Build this in your favorite relational SQL database and it will work, but it is most definitely not fast.

To get speed in these systems, developers have created some very complex, novel but nonetheless effective data-models.  (Complex enough that an unwary developer taking their first look inside could be forgiven for a lot of bad language.)

These data structures enable rapid reporting with no intermediary steps, no aggregations, no cubes.  Once the data is loaded it is ready to go.  Re-load some POS data or change a product category and it is immediately reflected in the next report.  Now that is very cool, and for analytic or reporting projects where you need ad-hoc aggregation against product groups that did not exist this morning, and were not 100% correct until the 5th iteration sometime this afternoon, a very important feature.

The complexity of the data model comes at a price though.  
  • You will probably only ever use the Business Intelligence tool supplied with the DSR.  This tool has been extensively configured, customized, or even written, to handle the complex nature of the data structure it sits upon.  Putting another tool on top is a huge investment and would most likely need an additional, simpler database, either (slow) relational or (slow to build) cubes, that would be populated from the main DSR occasionally but otherwise disconnected from the data source and subsequent changes.  That rather defeats the point, doesn't it?
  • These models spread data across a multitude of tables in the database.  That's not a big problem for most reporting, which aggregates each fact table to the desired level (e.g. brand by country) then stitches together the relatively small result sets for a human-readable report.  For predictive analytics, however, we want the lowest level of data and need all of the facts in the same table before we can start modeling.  Sadly, the database just doesn't store it that way, so every analytic project starts with a complex data-manipulation project.

Handling the data volume (3) Next Generation

Database technology is evolving rapidly and I believe we are at the point that it can now provide good performance with no pre-aggregation of data, no cubes and a data-model that is easily understood so you can bring your own Business Intelligence tools or analytic apps to bear on it.

I'm an analyst not a database expert so I would not want to put too much money on which of the competing approaches will win out longer term but I think the key words to follow here are "columnar", "massively parallel", "in memory" and maybe, perhaps, possibly..."Hadoop".

Columnar databases change the way that data is stored in the database.  This makes them relatively slow for transaction updates but dramatically faster for report-style aggregations, even with a simple data-model.  (See my previous post here for an example.)

Existing systems typically run on a single server.  Want it to run faster? Then you need to buy a bigger server.  MPP (Massively Parallel Processing) systems use clusters of hardware, dividing up the work across multiple, relatively cheap, servers (nodes).  If you need more performance add more nodes to the cluster. Do this with a cloud-based service and you can flex the number of nodes in your cluster to meet processing demand:  double-up as needed for your data-load or your predictive-model run.

In-memory databases deliver speed increases by pulling the data off disk storage and loading the whole thing into memory (and accessing data in memory is certainly much faster than reading it off disk).  I've not tried one of these yet and I would be interested to hear comments from those that have.  It sounds good but I don't think the price-point is yet where I could justify the use.  10TB of RAM is certainly much cheaper than it was 10 years ago, but my gut feel is that the economics will suggest a hybrid RAM and disk/SSD model for some time to come.  There is a thoughtful blog post on SQL Server's new in-memory offering, including a few limitations, here.

Finally, let's talk Hadoop.  I know it's "sexy" and often appears in the same sentence as "Big Data", but I'm not yet convinced that it's appropriate for this use, where we want rapid response on a very large number of typically small and ad-hoc queries.  I could be wrong though: a friend and colleague that I respect has recently moved to Cloudera after a lifetime of SQL/Oracle and is very excited about the possibilities using Hadoop/HBase/Impala.  Looking at these benchmark results comparing a number of Hadoop-based systems to Redshift (columnar, MPP), he may well have a point.  I will try to keep an open mind.

Are there other options? You bet!  A number I have deliberately ignored and, I'm certain, plenty out there I have not heard of, but this set will do the job, and if another technology will do it even better now or in 5 years' time, great!  The bottom line is that database speed and storage capability is growing faster than the amount of data you want in your DSR.  We need to take advantage of it.

So what does this get us?

Using database technology to increase speed and to get a simpler data structure is a big win.  Simpler, faster systems come with less maintenance, lower learning curves, more productivity and, I strongly believe, the capability for better insights.   Slow response times are an "insight killer" (more on this in an upcoming blog post).

The simpler data structure means that it's relatively easy to swap out the front-end for the BI, analytics or visualization tool of your choice.  Want that data in Business Objects or Tableau?  No problem!  Connect from R/SPSS/SAS/RapidMiner?  Absolutely!

What does this mean for DSR vendors?

  • The ability to handle DSR-sized data volume is no longer a competitive advantage.
  • If it's easy to set up any new BI, visualization or analytic tool against the database, providing the "best" user interface is of limited value.
  • Rapidly loading new data is important
  • Providing clean data is important (and often overlooked)
  • Helping users navigate the data ocean to find the things that must be done via process specific exceptions and workflow is important.
  • Helping users drive better decisions by embedding the analytics against the right data in real-time... now that's really important.

The Next-Gen. DSR

CPGs have had access to Point of Sale (POS) data for many years now and many of them use a Demand Signal Repository (DSR) to gather, clean and report on this data.  (Actually, most of them use a number of DSRs and, even when they do have just one, still can't handle truly cross-retailer analytics.)

I've been involved with a number of these systems as a software-buyer, a system-administrator, consultant and most recently, leading the analytic development at Orchestro.  

There are some excellent tools available and, in their current form, they can help you drive both additional revenue and reduced costs when used well.  However, in my experience many of these tools have been sold under the guise of "saving time" through reporting automation.  That's valuable, but it's not "finding a new sales opportunity" valuable.

I think we are still in the infancy of DSR development: systems are operating at the limits of the technology they were built on and necessary trade-offs mean that being good at one thing (e.g. speed) makes it more challenging to be good at others (e.g. analytics).

The next generation of DSR can be dramatically more effective.  In particular, it will be:

  • much faster while handling much more data
  • much easier to use
  • easy to load with CPG's data
  • easily integrated with additional data feeds (weather, economic time-series, google-trends, twitter feeds, geo-demographic data)
  • truly cross-retailer
  • easily integrated with your chosen BI, visualization and ad-hoc analytics tools.
  • able to couple rapid data-handling with effective predictive analytics to drive discovery, insight and better decisions.
I'm not going to tell you that these ideas are new (or mine).  This list sets a very high standard and, against it, DSRs have consistently under-delivered.

What I am saying, is that the technology now exists to deliver on the promise.

Over this upcoming series of posts we'll look at developments in database technology, analytics and visualization that will enable DSR 2.0. (Or should that be DSR 3.0?)  Sign up for the blog feed and make sure you don't miss it.



Back to blogging on "Better Business Analytics"

It's been quite a while, just over 12 months in fact since my last blog post.  In that time, I've been hard at work developing analytic applications for the Orchestro DSR.  (Orchestro's off-shelf alerting tool is especially cool and something I am very proud of contributing to).    I enjoyed my time at Orchestro, they're a good team and have big plans, but one key thing I found out about myself is that I prefer working real-life problems to developing software for someone else to have all the fun :-)

So, I'm now back full-time on consulting and I will occasionally blog on topics of interest to me.   Expect to see more soon on:

  • Next-generation DSRs (Demand Signal Repositories)
  • Retail supply-chain analytics
  • Handling (BIG-ish) data for analytics
  • The right tools for the job (Predictive Analytics, Business Models, Optimization)
  • Some more thoughts on store-clustering
  • Inventory modeling at retail (and why it's different, again)
  • Order forecasting using POS data
  • Further thoughts on SNAP and other ignored demand drivers
  • and if there is something you would like to hear more on ... just drop me a line.



Business Analytics - finding the balance between complexity and readability

In this blog I try to present analytic material for a non-analytic audience.  I focus on point of sale and supply chain analytics: it's a complex area and, frankly, it's far too easy, whether writing for a blog or presenting to a management team, to slip into the same language I would use with an expert.

So, I was inspired by a recent post on Nathan Yau's excellent blog FlowingData to look at the "readability" of my own posts and apply some simple analytics to the results.

I've followed Nathan's blog for a couple of years now for the many and varied examples of data-visualization he builds and gathers from other sources. One that particularly caught my eye was this one, published by the Guardian just before the recent State of the Union address in the United States.

The Guardian plotted the Flesch-Kincaid grade levels for past addresses. Each circle represents a State of the Union address and is sized by the number of words used. Color is used to provide separation between presidents. For example, Obama's State of the Union last year was around the eighth-grade level; in contrast, James Madison's 1815 address had a reading level of 25.3.

Neither the original post nor Nathan's go into much detail around why the linguistic standard has declined.  Within this period, the nature of the address and the intended audience has certainly changed.   Frankly, having scanned a few of the earlier addresses I think we can all be grateful not to be on the receiving end of one of them.

So, I was inspired to find out the reading level of my own blog.  It's intended to present analytic concepts to a non-analytic audience.  I can probably go a little higher than recent presidential addresses (8th-10th grades, roughly ages 13-15) but I don't want to be writing college-level material either.

All the books my kids read are graded in this (or a very similar) way but I had never thought about how such a grading system could be constructed.  The Flesch-Kincaid grade level estimate is based on a simple formula:


0.39 \left ( \frac{\mbox{total words}}{\mbox{total sentences}} \right ) + 11.8 \left ( \frac{\mbox{total syllables}}{\mbox{total words}} \right ) - 15.59

That's just a linear combination of:

  • average words per sentence;
  • average syllables per word;
  • a constant term.

In fact (though I have not yet  found details of how it was constructed) it looks to be the result of a regression model.  (Simple) data science in action from the 1970's.

Note that Flesch-Kincaid says nothing about the length of the book or the nature of the vocabulary it's all down to long sentences and the presence of multi-syllabic words.

(BTW - the preceding sentence has a Flesch-Kincaid grade score of 13.63, calculated with this online utility.)  Now that's pretty high, worthy of an early 1900's president and (supposedly) understandable by young college students.  The sentence is longer than typical, 31 words vs. my average of 18 (see below), and words like "vocabulary", "sentences" and "multi-syllabic" are not helping me either.
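Plugging the numbers in (31 words, 1 sentence, and roughly 45 syllables - back-calculated from the reported score, since I didn't count them by hand) the formula works out as:

0.39 \left ( \frac{31}{1} \right ) + 11.8 \left ( \frac{45}{31} \right ) - 15.59 \approx 12.1 + 17.1 - 15.6 \approx 13.6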

Approach

I could have copied and pasted each post into the online utility above, recorded the results in a spreadsheet and pulled some stats from that.  That would work, but if I ever want to repeat the exercise or modify it, perhaps to use a different readability index, I must do all that work again.  At the time of writing there are 44 published posts on this blog - there must be a better way.

Actually there are probably many better ways, but as I also wanted to flex some R-programming muscle, I built a web-scraper in R to do the work for me and analyze the results (more on this in a later post).
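I'll save the full code for that later post, but here's a rough sketch of the scraping step using the rvest package.  The post URLs and the CSS selector for the post body are placeholders (the real ones depend on the blog template), and it reuses the fk_grade() helper sketched earlier.

library(rvest)

post_urls <- c("http://example.com/post-1",
               "http://example.com/post-2")   # hypothetical URLs

scrape_post <- function(url) {
  page <- read_html(url)
  # "div.post-body" is an assumed selector for the post text
  text <- html_text2(html_element(page, "div.post-body"))
  data.frame(url      = url,
             words    = length(unlist(strsplit(text, "\\s+"))),
             fk_grade = fk_grade(text))       # helper from the earlier sketch
}

results <- do.call(rbind, lapply(post_urls, scrape_post))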

Results

Let's start with some simple summaries of the results I collected.

Histograms showing the % of posts from this blog (prior to 2/14/13), with the average (mean) value shown in red.

There is some variety in the grade reading level indicated by Flesch-Kincaid for my blog posts, averaging around 10 but ranging from 7 through 14.  I average about 750 words per post, but occasionally go much longer and have a number of very short "announcement" style posts.  My average is 18 words per sentence.
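If you want to reproduce this kind of chart, here is a quick ggplot2 sketch, assuming the scraped results sit in a data frame called posts with a numeric fk_grade column (the data frame and column names are my assumptions):

library(ggplot2)

ggplot(posts, aes(x = fk_grade)) +
  geom_histogram(aes(y = after_stat(count / sum(count))), binwidth = 1) +
  geom_vline(xintercept = mean(posts$fk_grade), colour = "red") +  # mean in red
  scale_y_continuous(labels = scales::percent) +
  labs(x = "Flesch-Kincaid grade level", y = "% of posts")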

OK, so now I know, but is that good?  I don't know that I have a definitive answer, but according to at least one source the target range on Flesch-Kincaid for Technical or Industry readers is 7-12, so I'm feeling pretty good about that.

I did wonder whether there was any other, hidden, structure to the data though.  I know the equation is based on words per sentence and syllables per word, so there is no point looking at those; obviously I'll find a relationship.  But is my writing style influenced by anything else?

Flesch-Kincaid grade level vs. the number of words by post on this blog.

Other than a handful of long posts that rate lower, in the range 8-10, I don't see much going on here.

Flesch-Kincaid grade level vs. the publication date by post on this blog. 

The size of each post (in words) is shown by the area of each point; color is used purely to help visually differentiate the points.  Apart from a couple of recent "complex" posts, this does seem to be showing a trend, so I added a regression line and labeled the more extreme posts.  Point (b) is a very short "announcement" style post (you can hardly see the point at all) and I could probably ignore it completely.  Point (e) is a more fun piece I did around using pie-charts that's probably not very representative of the general material either.
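Again for the curious, a sketch of how a chart like this can be put together in ggplot2, assuming a posts data frame with date, fk_grade and words columns (column names are my assumptions):

library(ggplot2)

ggplot(posts, aes(x = date, y = fk_grade)) +
  geom_point(aes(size = words, colour = factor(seq_along(date))),
             alpha = 0.6, show.legend = FALSE) +
  scale_size_area() +                        # point area proportional to word count
  geom_smooth(method = "lm", se = FALSE) +   # the regression line
  labs(x = "Publication date", y = "Flesch-Kincaid grade level")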

If you want to compare readability for yourself, here are the top (and bottom) posts ranked by Flesch-Kincaid grade level:

Rank | Post | Flesch-Kincaid grade level | Words | Sentences
1 | Analytic tools "so easy a 10 year-old can use it" | 13.3 | 784 | 33
2 | Point of Sale Analytics - newsletter released | 13.1 | 82 | 4
3 | Point of Sale Data – Category Analytics | 12.8 | 676 | 29
4 | How to save real money in truckload freight (Part I) | 12.8 | 723 | 31
5 | The Primary Analytics Practitioner | 12.7 | 541 | 29
6 | Reporting is NOT Analytics | 12.4 | 891 | 43
7 | Point of Sale Data – Sales Analytics | 12.1 | 478 | 24
8 | Data handling - the right tool for the job | 11.9 | 762 | 38
9 | Data Cleansing: boring, painful, tedious and very, very important | 11.8 | 297 | 16
10 | Point of Sale Data – Supply Chain Analytics | 11.6 | 958 | 41

35 | The right tools for (structured) BIG DATA handling | 9.0 | 1878 | 114
36 | Better Point of Sale Reports with "Variance Analysis": Velocity... | 8.9 | 1264 | 78
37 | Better Point of Sale Reports with Variance Analysis (update) | 8.5 | 177 | 10
38 | Better Business Reporting in Excel - XLReportGrids 1.0 released | 8.4 | 70 | 5
39 | What's driving your Sales? SNAP? | 8.3 | 651 | 42
40 | Do you need daily Point of Sale data?... | 8.2 | 1395 | 83
41 | SNAP Analytics (1) - Funding and spikes. | 8.1 | 531 | 32
42 | SNAP Analytics (2) - Purchase Patterns | 7.9 | 773 | 44
43 | Business Analytics - The Right Tool For The Job | 7.6 | 483 | 36
44 | Are pie charts truly evil or just misunderstood ? | 7.1 | 1097 | 70
Conclusions

It appears that my material is (largely) written at a level that should be accessible to the reader, and I am using more readable language in recent blogs, which sounds like a good thing.

But there remains a key question for me that these stats can't really answer: am I getting better at explaining the complex (my goal) or just explaining simpler things?  What do you think?

In case you are wondering, this post has a Flesch-Kincaid grade level of about 8.  So if you can follow the "State of the Union" address you should have been just fine with this.

The right tools for (structured) BIG DATA handling (update)

A couple of weeks ago, I ran a somewhat rough benchmark (The right tools for (structured) BIG DATA handling) to show just how much faster large database queries can run if you use better tools.  Here's the scenario: you are a business analyst charged with providing reporting and basic analytics on more data than you know how to handle - and you need to do it without the combined resources of your IT department being placed at your disposal.  Sound familiar?
I looked at the value of upgrading hard-drives (to make sure the CPU is actually kept busy) and the benefit of using columnar storage, which lets the database pull back data in larger chunks and with fewer trips to the hard-drive.  The results were... staggering: a combined 4100% increase in processing speed, so that I could read and aggregate 10 facts from a base table with over 40 million records on my laptop in just 37 seconds.

At the time I promised an update on a significantly larger data-set to see whether the original results scaled well.  I also wanted to see how query times scale with the number of facts: ideally, a query against fewer facts should take proportionately less time - a single fact roughly a tenth of the 10-fact aggregation.



Test environment

My test environment remains the same: a mid-range laptop with a quad-core AMD CPU and 8 GB of RAM, running Windows 7 (64-bit), with a relatively cheap (<$400) fast solid-state drive.

This time, though, I increased the data quantity 10-fold to 416 million records, then ran the same aggregation SQL to pull back summaries of 10 facts from this table.
SELECT Item.Category, Period.Year,
       SUM(POSFacts.Fact1) AS Fact1, SUM(POSFacts.Fact2) AS Fact2,
       SUM(POSFacts.Fact3) AS Fact3, SUM(POSFacts.Fact4) AS Fact4,
       SUM(POSFacts.Fact5) AS Fact5, SUM(POSFacts.Fact6) AS Fact6,
       SUM(POSFacts.Fact7) AS Fact7, SUM(POSFacts.Fact8) AS Fact8,
       SUM(POSFacts.Fact9) AS Fact9, SUM(POSFacts.Fact10) AS Fact10
FROM Item
INNER JOIN POSFacts ON Item.ItemID = POSFacts.ItemID
INNER JOIN Period ON POSFacts.PeriodID = Period.PeriodID
GROUP BY Item.Category, Period.Year

I repeated this timed exercise 5 times for:
  • standard (row-based) SQL Server 2012
  • SQL Server 2012 with the ColumnStoreIndex applied
  • InfiniDB (a purpose-built column-store database)

Finally I ran it again on each configuration but just summarizing for 1 fact:
SELECT Item.Category, Period.Year, SUM(POSFacts.Fact1)
FROM Item
INNER JOIN POSFacts ON Item.ItemID = POSFacts.ItemID
INNER JOIN Period ON POSFacts.PeriodID = Period.PeriodID
GROUP BY Item.Category, Period.Year
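If you want to reproduce the timing loop, here is a sketch of one way to drive it from R with DBI/odbc.  The connection details are placeholders for a local test database, this isn't necessarily how the timings above were produced, and InfiniDB can be timed the same way through its MySQL interface.

library(DBI)

con <- dbConnect(odbc::odbc(),
                 Driver = "SQL Server",        # connection details are placeholders
                 Server = "localhost",
                 Database = "POSTest",
                 Trusted_Connection = "Yes")

sql <- "SELECT Item.Category, Period.Year, SUM(POSFacts.Fact1)
        FROM Item
        INNER JOIN POSFacts ON Item.ItemID = POSFacts.ItemID
        INNER JOIN Period ON POSFacts.PeriodID = Period.PeriodID
        GROUP BY Item.Category, Period.Year"

# run the query 5 times and keep the elapsed time of each run
timings <- replicate(5, system.time(dbGetQuery(con, sql))["elapsed"])
mean(timings)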

Results

Before we get to the query timing let's look at what was happening to my machine while queries were running.  

The first screen-shot below (click to enlarge) was taken while running queries with base SQL Server (no columnstore indexes).  You can see that the CPU is just not busy - in fact it's averaging only 30%, and that's with the solid-state disk installed.  The drive is busy, but only serving up about 50MB/s.  (I say "only" but of course that's much better than the old hard-drive.)
System resources running base SQL Server query

The next screenshot shows system resources while running a query with the ColumnStore index applied.  The CPU is now busy 77% of the time on average, peaking at 100% on occasion.  The disk utilization chart may be misleading because it's now plotted on a much larger scale, but the same disk is now hitting 200MB/s.  I think we can expect great things!
System resources running SQL Server with the ColumnStore Index


So, on to the timed results.  I ran each scenario 5 times and all results were very consistent, within +/-10% of the average.
Test results against 416 million records

Again, SQL Server 2012 with the Columnstore index is the clear winner: just 217 seconds to aggregate all 10 facts and, amazingly, just 21 seconds to aggregate 1 fact across the same 416 million records.  InfiniDB takes over twice as long against 10 facts and does not scale nearly as well with the single-fact query.


Now compare with the results we got last time to see how well each database scaled with the increase in data volume.



Data volume increased by a factor of 10 and:

  • InfiniDB and base SQL Server both increased query time by a factor of about 10, roughly in proportion.
  • SQL Server with the ColumnStore index increased by a factor of only 5.9!


To be fair I am comparing the (free) community edition of InfiniDB against (decidedly not free) SQL Server and neither tool is really intended to be run on a laptop.  But if you need rapid aggregation of data and do not have access to a cluster of commodity servers - it is clear that columnar storage helps you get that data out fast.

The other thing you may want to consider is that it took me substantially less time to load the data into InfiniDB (sorry, I did not time it, but we're talking minutes rather than seconds), and building that ColumnStore index in SQL Server actually took longer than the base query: ~4,500 seconds.  You may not want to go to this trouble if you just need a couple of quick aggregations.

Remember also that the table with the ColumnStore index is read-only after the index is applied. Want to make some updates?  That would be easier in InfiniDB.

Conclusions

Ultimately I'm not trying to sell you on either option, but if you have a lot of structured data to feed your analytic project, a columnar database may well be the way to go right now.  






The right tools for (structured) BIG DATA handling

Here's the scenario: you are a business analyst charged with providing reporting and basic analytics on more data than you know how to handle - and you need to do it without the combined resources of your IT department being placed at your disposal.  Sound familiar?

Let's use Point of Sale data as an example, as POS data can easily generate more data volume than the ERP system.  The data is simple and easily organized in conventional relational database tables - you have a number of "facts" (sales-revenue, sales-units, inventory, etc.) defined by product, store and day going back a few years, and then some additional information about products, stores and time stored in master ("dimension") tables.

The problem is that you have thousands of stores, thousands of products and hundreds (if not thousands) of days - this can very quickly feel like "big data".  Use the right tools and my rough benchmark suggests you can not only handle the data but see a huge increase in speed.


Let's see just how big this data could be:
If on each day you collect 10 facts for 1,000 products at 1,000 stores, that's 10 million facts every day (10 x 1,000 x 1,000).  Look at it annually and that's 3.65 billion facts every year.
Is it big compared to an index of the world-wide-web?  No, it's tiny, but in comparison to the data a business analyst normally encounters it's not just "big", it's "enormous".  Just handling basic data manipulation (joins, filters, aggregation etc.) is a problem.  Trying to handle this in desktop tools like Excel or Access is completely impossible.
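In R terms, the back-of-the-envelope calculation is simply:

facts_per_day  <- 10 * 1000 * 1000   # 10 facts x 1,000 products x 1,000 stores
facts_per_year <- facts_per_day * 365
facts_per_year                       # 3,650,000,000 - i.e. 3.65 billion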

As usual, there are better tools and worse tools - you must use a database, but even with a conventional server-based database like Microsoft's SQL Server, you may have problems with speed.  I wanted to see how speed is impacted, first by upgrading the hard-drive and second by using two varieties of column-store database.

A couple of relatively simple changes and benchmarking shows a 4100% increase in speed.  If a 4100% increase does not indicate to you that there may be a better tool for the job, I don't know what will.

Running analytics against this data (once it has been delivered from a tool that has joined, filtered and aggregated it appropriately) is another challenge that we will get to in a later post.

First a little disclosure: I am first and foremost an analyst: my technologies of choice are statistics, mathematics, data-mining, predictive-modeling, operations-research,... NOT databases and NOT hardware-engineering. To feed my need for data I have become adept in a number of programming languages and relational database systems. I'm most comfortable in SQL Server just because I'm more familiar with that tool though I have used other databases too. Bottom line, I'm a lot better than "competent" but I am not "expert".

Test environment

I built a test database in SQL Server 2012 with 4 tables in a simple "star schema": 1 "fact" table with 10 facts per record and 3 associated "dimension" tables as follows:


[Schema diagram: the POSFacts fact table and its three dimension tables]

The data itself is junk I generated randomly in SQL Server, with appropriate keys and indexes defined.

This represents approximately 8 GB of data.  Not enormous (and, as you will see later, perhaps not big enough to test one of the options fully) but big enough to get started and much bigger than many analysts ever see.

I'm testing this on a mid-range laptop - quad-core AMD CPU, 8 GB of RAM, running Windows 7 (64-bit) - that cost substantially less than $1,000 new.  You probably have something very like it sitting on your desk.

I then wanted to see how long it would take to perform a simple aggregation.  My test SQL (below) joins the fact table to both the product and period dimension tables, then adds up each fact (1 thru 10) for each year and category.  Not very exciting perhaps, but a very common question: "what did I sell, by category, by year?"

SELECT Item.Category, Period.Year,
       SUM(POSFacts.Fact1) AS Fact1, SUM(POSFacts.Fact2) AS Fact2,
       SUM(POSFacts.Fact3) AS Fact3, SUM(POSFacts.Fact4) AS Fact4,
       SUM(POSFacts.Fact5) AS Fact5, SUM(POSFacts.Fact6) AS Fact6,
       SUM(POSFacts.Fact7) AS Fact7, SUM(POSFacts.Fact8) AS Fact8,
       SUM(POSFacts.Fact9) AS Fact9, SUM(POSFacts.Fact10) AS Fact10
FROM Item
INNER JOIN POSFacts ON Item.ItemID = POSFacts.ItemID
INNER JOIN Period ON POSFacts.PeriodID = Period.PeriodID
GROUP BY Item.Category, Period.Year

Each query was run 5 times and the elapsed times averaged to get the results shown below.  While there was variation in run times, it was typically within about 10% of the average for each test.

Baseline

This is my starting point: SQL Server 2012 in its regular row storage mode.   


Faster Storage

This database query is going to need a lot of data from the hard-disk - actually, almost all the data in the database.  The hard-drive that came with my laptop was not especially slow, but it was clearly a bottleneck on my system: while running this query the standard disk could not deliver data fast enough to keep the CPU busy - in fact the CPU was rarely operating at even 50% capacity.  An upgrade to the hard-drive seemed to be in order ($350 for a 480 GB solid-state disk).

SQL Server 2012 ColumnStore Indexes

SQL 2012 has a new feature called a Columnstore Index.  Per the Microsoft website:
 "An xVelocity memory optimized columnstore index, groups and stores data for each column and then joins all the columns to complete the whole index. This differs from traditional indexes which group and store data for each row and then join all the rows to complete the whole index. For some types of queries, the SQL Server query processor can take advantage of the columnstore layout to significantly improve query execution times... Columnstore indexes can transform the data warehousing experience for users by enabling faster performance for common data warehousing queries such as filtering, aggregating, grouping, and star-join queries."
To put that in plainer English - for data warehousing applications (like reporting and analytics) a columnar database structure can pull its data with fewer trips to the disk - and that's faster, potentially a LOT faster.  (By the way, if you want your database to support a transactional system where you will repeatedly be hitting it with a handful of new records or record changes - this could be an excellent way to slow it down.)

Now adding a ColumnStoreIndex does take a while but it's not exactly difficult.  It's just a SQL statement that you run once:
CREATE NONCLUSTERED COLUMNSTORE INDEX [ColIndex_POSFacts]
ON [dbo].[POSFacts] ([Fact1],[Fact2],[Fact3],[Fact4],[Fact5],[Fact6],[Fact7],[Fact8],[Fact9],[Fact10])
WITH (DROP_EXISTING = OFF) ON [PRIMARY]
Note: Once the ColumnStoreIndex is applied the SQL Server table is effectively read-only unless you do some clever things with partitioning.  For one-off projects this doesn't matter at all of course.  For routine reporting projects you may need a DBA to help out.

InfiniDB columnar database

Columnar databases are not really "new" of course, just new to SQL Server so I also wanted to test against a "best of breed", purpose-built columnar database.  

Why InfiniDB?  From my minimal research it seems to test very well against other columnar databases, it's open source (based on MySQL), will run on Windows and comes with a free community edition.  I actually found the learning curve relatively gentle; in fact, as InfiniDB handles its own indexing needs it's perhaps even simpler than SQL Server.  Frankly, the hardest part was remembering how to export 40 million records neatly from SQL Server so they could easily be read into InfiniDB using their (very fast) data importer.

The Results

Here are the (average) elapsed times to run this query under each disk and database configuration.  



So the basic SQL Server 2012 configuration on a regular hard-drive took... 1,535 seconds to run my query.  That's over 25 minutes.  I can drink a lot of coffee in 25 minutes.

Upgrade to the Solid State Disk (SSD) and it runs 460% faster, in 5 minutes and 32 seconds.  Now understand that my laptop does not use the fastest connection to this SSD: its spec says it can handle 2.5 Gb per second, and I believe newer laptops run at 6 Gbps.  That being said, at least now the quad-core CPU was being kept busy.

If instead of upgrading the disk we add a ColumnStoreIndex to the fact table, we do even better, reducing from 1,535 seconds to 126 - that's over 1200% faster!

So which option should we use?  Both, of course!  I can now run a query that used to take 25 minutes in 37 seconds.  That's 4100% faster than when I started.
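If you want to check my arithmetic, the speed-up percentages fall straight out of the timings quoted above (treating "X% faster" as the ratio of the old time to the new):

base_hdd   <- 1535          # base SQL Server on the original hard-drive (seconds)
base_ssd   <- 5 * 60 + 32   # base SQL Server on the SSD = 332 seconds
cstore_hdd <- 126           # ColumnStore index, original hard-drive
cstore_ssd <- 37            # ColumnStore index plus SSD

round(100 * base_hdd / c(ssd_only         = base_ssd,
                         columnstore_only = cstore_hdd,
                         both             = cstore_ssd))
# roughly the 460%, 1200% and 4100% figures quoted above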

Now let's take a look at that InfiniDB number.  (I did not test with InfiniDB before swapping out the hard-drive, so I only have data for it on the SSD.)  Surprisingly, it was not quite as good as the SQL Server speed with the Columnstore index.  I talked to the folks at Calpont who develop InfiniDB and they kindly explained that a key part of their optimization splits large chunks of data into smaller ones for processing.  Sadly, my 41 million record table was not even big enough to be worth splitting into 2 "small" chunks, so this particular feature never engaged in the test.  Still, it's almost 3000% faster than base SQL even on this "small" dataset and the community edition is free.

Based on the success of this test I think it's time to scale up the test data by a factor of 10 - watch this space.
(Check out the following update post for more details.)

Conclusions

My test-bed for this benchmark was a mid-range laptop with a few nice extras (more RAM, a solid-state disk and a 64-bit OS) but certainly not an expensive piece of equipment, and it managed to handle an enormous amount of data with very little effort.  This opens up possibilities for analyzing and reporting on much more data than was previously possible on your desktop.

The implications are not limited to desktop tools, though, or to tools we think of as databases.  Numerous other tools now claim to handle data storage in columnar form (see Tableau and PowerPivot for Excel).

Is this the best tool for the job?  Perhaps, perhaps not: there is an enormous amount of activity and innovation in the database space right now and many, many other software providers.  It's certainly a lot faster for this specific purpose and a major step forward over more traditional approaches.

Look hard at columnar databases to speed up your raw data processing and don't spend any longer waiting on slow hard-drives.   

Business Analytics - The Right Tools For The Job

Whether your analytic tool of choice is Excel or R or Access or SQL Server or... whatever, if you've worked a reasonable range of analytic problems I will guarantee that at some point you have tried to make your preferred tool do a job it is not intended for, or that it is ill-suited for.  The end result is an error-prone maintenance nightmare, and there is a better way.

We all know when we are pushing it too far - the system starts to "creak".  Symptoms vary but include at least some of the following:
  • Model calculations run slowly (if they complete at all).
  • Model calculations give apparently inconsistent results.
  • Making minor changes becomes a major headache.
  • Errors ("bugs") are routine.  When you need to do a demo, you pray first.
  • You have learned to avoid certain operations because the likelihood of success is slim.
  • If you must come back to your model/application after 6 months to update it, you feel physically ill.
  • and perhaps most importantly, you are far from sure that using your model generates the right results.
This isn't just an academic issue or one of preference.  Poor models can cost real money in lost opportunity or in bad decisions that get implemented.  (See this post for a few examples.)

So why do we do this?  I think it's because, for many analysts, their toolbox looks like this.

Why is their toolbox so empty?  In some cases, corporate IT restrictions may make it very difficult to acquire/install the right tools; I've been there, it's a real challenge.

For many people though, it just feels easier to take a tool they know well and try to "make it work" than to learn something new.  That's rather like thinking "I need to chop down a tree... I'll sharpen my hammer".

Last week, I gave you my nominations for "The Worst use of Excel Ever!"  I could easily cite similar abuses for other tools and over the next few months I will.   

This blog post is the first in a series around using "The Right Tool For The Job".   I'm going to encourage you to add a few more tools to your analytic tool-box and learn to wield them effectively. (Or, alternatively, to at least recognize when you need someone who can do that for you.)  Do you need: 
  • An application programming environment?
  • A database (of varying capability: Access, SQL or perhaps a newer column-storage database)?
  • A reporting tool?
  • An optimization modeling language?
  • A visualization tool?
  • A statistics or data-mining package?
  • A simulation package?
Which tools do you think are the most mis-used?  What skills/tools should a business analyst consider adding to their toolbox?

Business Analytics - The Worst Use of Excel ever ?


Excel is a great tool and I use it a lot.  It's available on almost every business user's desktop and it's highly extensible (with some sensible design) through add-ins and programming but it can't do everything; push it too far and the results can be nasty.  

Here are my nominations for "The Worst Use of Excel ever" awards.

  1. Entire applications built in Excel/VBA.  I'll admit it, I have done this: it's expedient and for prototyping it can work effectively, but the more you try to lock down Excel to behave as an application (rather than a general purpose spreadsheet) the more problems you encounter.  At some point you need to crank up a real programming environment with purpose-built components, even if it's only to build an Excel add-in (like XLReportGrids).
  2. Surveys conducted through Excel/Email.   Build a survey template, email it out to 200 folks and get back...junk you can't use unless you manually sift through each response.  (Yes, I know you can try to lock down the survey spreadsheet, but you can't stop stupidity.  People will copy it, change it, enter incomplete records and it will never be a good substitute for direct entry to a database through a form that handles proper validation.)
  3. Trying to join multiple "tables" by extensive use of VLOOKUP functions.  Judicious use of VLOOKUPs actually extends your capability substantially and can help maintain data integrity rather than duplicate data... but, Excel is not a database.  VLOOKUP is very slow compared to a database join and do you really want to check that the right function is defined in every row?   What happens when I need to add a few records?  Can you make sure that the calculation copies down correctly?
  4. Using Excel to edit database tables.  Pull some data from your database into Excel, let someone "edit" it and then try to upload the changes.  It's always particularly (un)helpful if they color-code what changed, added new records or added/deleted a few fields.  
  5. Excel as a project management tool.  I may get some flak for this one as I know it's really popular, but it seems to me that Excel is used just as a grid to lay out tasks and timelines.  I can do that with a whiteboard.  Typically there is no calculation at all, and if you want to tie tasks to resources or visualize slippage in tasks across time, this is not the place to start.
  6. Using Excel's "analytic" capabilities when you need something industrial strength.  I'm not a purist, you can use Excel's Solver and Data-Analysis tools quite effectively for smaller/simpler problems.  As size and complexity increase you may be able to use more sophisticated add-ins but at some point you will need to upgrade to a purpose-built tool to work effectively..
  7. Repeating the same "analysis" or "reporting" once per tab for 40 different brands (or factories or products or managers,...).  Seriously, there is no way you can stop errors creeping in.  You need a reporting or analytic tool that will generate these for you.
  8. Of course there are also the folks that use Excel as a word-processor, a presentation tool or even a grid to hold the numbers they produced on a calculator but that's really not a fault of the system is it?
My own personal favorite for the top spot is #2 at least until I see another example for one of the others :-)  

Which ones resonate for you?  Any other nominations?

Coming soon, a new series of posts around using the "Right Tool for the Job".









Recommended Reading: Supply Chain Network Design

I've done a lot of supply chain network design projects and consider myself to be an expert.  Had I had this book from the start, I might have reached expert status a lot faster.

With experience in supply-chain and an academic background that includes mathematical optimization, when the need arose to build supply chain network optimization models I just did it.  Then I learned many valuable, real-world lessons the hard way - by getting it wrong.

There are a number of books available that cover this area: I have dipped into a few, as needed, and I have not read most of them so I really can't say this is the best book available on the subject.  I can say that this is one of the very few analytic books on any subject that I have read cover to cover.  

Network design is perhaps not as hot a topic now as it was 10 years ago.  That's just my perception, but while the hype right now is around "big data", network design continues to deliver major savings to organizations.  Network design finds where your facilities should be and how product should flow through them to support your business at the lowest cost.  The more rapidly your business is changing, the more often this is worthwhile: an acquisition or divestment will almost always justify the expense with a significant ROI, and a 10% reduction in supply chain cost is common.  Even in a stable business there can be significant savings (transportation, labor and warehousing) in adjusting product flow on a relatively frequent, annual basis.

Note that while the authors (all from IBM) have extensive experience building software products to help you do supply-chain network optimization, this book is not a sales brochure for LogicNet; in fact, it's barely mentioned.

The math needed to run an optimization model is not simple, but it is accessible to those who want to learn, and this book takes you step by step through a mathematical programming model that gets increasingly sophisticated.  The necessary theory is all there.

What attracted me was that it goes beyond the theory and has lots of detail around project execution: the need for sensitivity analysis; the difficulty of getting reliable transportation rates; sensible data aggregation strategies; why you must have an optimized "baseline"; and numerous others.  These are all areas that analysts get wrong - as I did.  Some learn from the experience, others send out the results anyway.

Managers who hope to become better buyers/consumers of network-design projects (remembering that your analysts may also be making newbie mistakes) can skip the math sections and still understand what can be modeled and why you would want to.

For analysts actively involved in building optimization-models the mathematical formulations are extremely helpful. Even if you choose to build models with a software package that tries to hide the harder math from you, the guidance around data and the art of modeling is worth the price and the time to read it.

If you have a network design project in mind and a plane journey coming up - make the investment.  

Supply Chain Network Design: Applying Optimization and Analytics to the Global Supply Chain (FT Press Operations Management)

by Michael Watson, Sara Lewis, Peter Cacioppi and Jay Jayaraman (Sep 1, 2012)